Mirror of https://github.com/Azure/orkestra.git
Add instructions to authenticate with keptn (#400)
Signed-off-by: Nitish Malhotra <nitish.malhotra@gmail.com>
This commit is contained in:
Parent
0bcfd6d3c2
Commit
3e5b682a7b
@ -189,8 +189,8 @@ Source code for the Keptn executor is available [here](https://github.com/Azure/

To get started you need the following:

- A Kubernetes cluster (AKS, GKE, EKS, Kind, Minikube, etc)
- [`kubectl`](https://kubernetes.io/docs/tasks/tools/)
- [`helm`](https://helm.sh/docs/intro/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [`argo`](https://github.com/argoproj/argo-workflows/releases)

### Using Helm
@ -19,8 +19,8 @@ For getting started, you will need:

- [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/)
- [GKE](https://cloud.google.com/kubernetes-engine)
- [EKS](https://aws.amazon.com/eks/)
- `kubectl` *v1.18* or higher - see this [Getting started](https://kubernetes.io/docs/tasks/tools/) guide for `kubectl`.
- `helm` *v3.5.2* or higher - see this [Getting started](https://helm.sh/docs/intro/install/) guide for `helm`.
- kubectl *v1.18* or higher - see this [Getting started](https://kubernetes.io/docs/tasks/tools/) guide for kubectl.
- helm *v3.5.2* or higher - see this [Getting started](https://helm.sh/docs/intro/install/) guide for helm.
- `kubebuilder` *v2.3.1* or higher - Install using `make setup-kubebuilder`.
- `controller-gen` *v0.5.0* or higher - Install using `make controller-gen`. This is required to generate the ApplicationGroup CRDs.
@ -8,7 +8,10 @@ nav_order: 3

## Prerequisites

- `kubectl`
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [Argo Workflow CLI](https://github.com/argoproj/argo-workflows/releases/tag/v3.0.0)
- [Keptn CLI](https://keptn.sh/docs/0.9.x/operate/install/)

## 1. [Bookinfo](https://github.com/Azure/orkestra/tree/main/examples/simple) without Quality Gates
@ -103,6 +106,26 @@ We expect the `Workflow` and subsequently the `ApplicationGroup` to succeed.

#### Keptn dashboard - Success

> ⚠️ monitoring failed is a known, benign issue when submitting the `ApplicationGroup` multiple times.

Authenticate with the Keptn controller for the dashboard:

```shell
export KEPTN_API_TOKEN=$(kubectl get secret keptn-api-token -n orkestra -ojsonpath='{.data.keptn-api-token}' | base64 --decode)
export KEPTN_ENDPOINT=http://$(kubectl get svc api-gateway-nginx -n orkestra -ojsonpath='{.status.loadBalancer.ingress[0].ip}')/api
keptn auth --endpoint=$KEPTN_ENDPOINT --api-token=$KEPTN_API_TOKEN
```

Retrieve the dashboard URL, username, and password:

> The IPs and password will differ for each cluster.

```shell
keptn configure bridge --output
Your Keptn Bridge is available under: http://20.75.119.32/bridge

These are your credentials
 user: keptn
 password: UxUqN6XvWMpsrLqp6BeL
```

![Keptn Dashboard](./assets/keptn-dashboard.png)
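As an aside for anyone scripting against this step: the bridge URL and password can be pulled out of the `keptn configure bridge --output` text with standard tools. The output layout below is only an assumption based on the sample shown in this diff, so treat the parsing as a sketch rather than a stable interface.

```shell
# Stand-in for the `keptn configure bridge --output` text; the real values
# come from the live command, these are copied from the sample in the docs.
bridge_output='Your Keptn Bridge is available under: http://20.75.119.32/bridge

These are your credentials
 user: keptn
 password: UxUqN6XvWMpsrLqp6BeL'

# $NF is the last whitespace-separated field on the matching line.
bridge_url=$(printf '%s\n' "$bridge_output" | awk '/available under/ {print $NF}')
bridge_password=$(printf '%s\n' "$bridge_output" | awk '/password:/ {print $2}')

echo "$bridge_url"       # http://20.75.119.32/bridge
echo "$bridge_password"  # UxUqN6XvWMpsrLqp6BeL
```

With the URL and password in variables, opening the bridge or feeding the credentials to another tool no longer requires copy-pasting from the terminal.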
@ -129,7 +152,7 @@ helm upgrade --install orkestra chart/orkestra -n orkestra --create-namespace --

> helm upgrade --install orkestra chart/orkestra -n orkestra --create-namespace --set=keptn.enabled=true --set=keptn-addons.enabled=true --set=keptn-addons.prometheus.namespace=$PROM_NS
> ```

#### Scenario 1 : Successful Reconciliation
### Scenario 1 : Successful Reconciliation

The *bookinfo* application is deployed using the following Kubernetes manifests:

@ -145,7 +168,7 @@ kubectl create -f examples/keptn/bookinfo.yaml -n orkestra \

kubectl create -f examples/keptn/bookinfo-keptn-cm.yaml -n orkestra
```

#### Scenario 2 : Failed Reconciliation leading to Rollback
### Scenario 2 : Failed Reconciliation leading to Rollback

```shell
kubectl apply -f examples/keptn/bookinfo-with-faults.yaml -n orkestra
@ -173,38 +196,14 @@ kubectl delete -f examples/keptn/bookinfo-keptn-cm.yaml -n orkestra

### Manual Testing

#### Prerequisites

- keptn [CLI](https://keptn.sh/docs/0.9.x/operate/install/)

#### Authenticate with keptn

```terminal
export KEPTN_API_TOKEN=$(kubectl get secret keptn-api-token -n orkestra -ojsonpath='{.data.keptn-api-token}' | base64 --decode)
export KEPTN_ENDPOINT=http://$(kubectl get svc api-gateway-nginx -n orkestra -ojsonpath='{.status.loadBalancer.ingress[0].ip}')/api
```
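The `base64 --decode` in the first export reflects the fact that Kubernetes stores Secret data base64-encoded, so kubectl's jsonpath output returns the encoded form. A minimal, cluster-free sketch of that round trip, using a made-up token value rather than a live `kubectl get secret` call:

```shell
# Mimic the Secret round trip locally: encode a stand-in token the way
# Kubernetes stores it, then decode it the way the export above does.
plain='my-keptn-api-token'
encoded=$(printf '%s' "$plain" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"  # my-keptn-api-token
```

If the decoded value ever looks truncated or garbled, check that nothing added a trailing newline before encoding; `printf '%s'` is used here instead of `echo` for exactly that reason.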
```terminal
keptn auth --endpoint=$KEPTN_ENDPOINT --api-token=$KEPTN_API_TOKEN

Starting to authenticate
Successfully authenticated against the Keptn cluster http://20.72.120.233/api
```

#### Retrieve username and password for Keptn bridge (dashboard)

```terminal
keptn configure bridge --output
```

#### Trigger evaluation

```terminal
keptn create project hey --shipyard=./shipyard.yaml
keptn create service bookinfo --project=hey
keptn configure monitoring prometheus --project=hey --service=bookinfo
keptn add-resource --project=hey --service=bookinfo --resource=slo.yaml --resourceUri=slo.yaml --stage=dev
keptn add-resource --project=hey --service=bookinfo --resource=prometheus/sli.yaml --resourceUri=prometheus/sli.yaml --stage=dev
keptn add-resource --project=hey --service=bookinfo --resource=job/config.yaml --resourceUri=job/config.yaml --stage=dev
keptn trigger evaluation --project=hey --service=bookinfo --timeframe=5m --stage dev --start $(date -u +"%Y-%m-%dT%T")
keptn create project bookinfo --shipyard=./shipyard.yaml
keptn create service bookinfo --project=bookinfo
keptn configure monitoring prometheus --project=bookinfo --service=bookinfo
keptn add-resource --project=bookinfo --service=bookinfo --resource=slo.yaml --resourceUri=slo.yaml --stage=dev
keptn add-resource --project=bookinfo --service=bookinfo --resource=prometheus/sli.yaml --resourceUri=prometheus/sli.yaml --stage=dev
keptn add-resource --project=bookinfo --service=bookinfo --resource=job/config.yaml --resourceUri=job/config.yaml --stage=dev
keptn trigger evaluation --project=bookinfo --service=bookinfo --timeframe=5m --stage dev --start $(date -u +"%Y-%m-%dT%T")
```
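The `--start` value in the `keptn trigger evaluation` lines above is built with `date -u +"%Y-%m-%dT%T"`. A quick local check of the timestamp shape that substitution produces (a UTC time such as `2021-09-01T12:34:56`), which can be run without any cluster or Keptn installation:

```shell
# Build the evaluation start timestamp the same way the commands above do,
# then verify it matches the YYYY-MM-DDTHH:MM:SS shape.
start=$(date -u +"%Y-%m-%dT%T")
echo "$start"
case "$start" in
    [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]T[0-9][0-9]:[0-9][0-9]:[0-9][0-9])
        echo "format OK" ;;
    *)
        echo "unexpected format" ;;
esac
```

Using `-u` matters here: without it the timestamp is in local time, and an evaluation window computed against a cluster running on UTC can silently cover the wrong five minutes.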
@ -199,8 +199,8 @@ Source code for the Keptn executor is available [here](https://github.com/Azure/

To get started you need the following:

- A Kubernetes cluster (AKS, GKE, EKS, Kind, Minikube, etc)
- [`kubectl`](https://kubernetes.io/docs/tasks/tools/)
- [`helm`](https://helm.sh/docs/intro/install/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [`argo`](https://github.com/argoproj/argo-workflows/releases)

### Using Helm
@ -6,6 +6,11 @@

> ⚠️ Avoid using a cluster with a low number of nodes and low CPU/RAM or a KinD, Minikube or microk8s cluster

- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [helm](https://helm.sh/docs/intro/install/)
- [Argo Workflow CLI](https://github.com/argoproj/argo-workflows/releases/tag/v3.0.0)
- [Keptn CLI](https://keptn.sh/docs/0.9.x/operate/install/)

## Description

In this example we will show how to use Keptn to perform a promotion based on the evaluation of a Quality Gate.
@ -31,6 +36,27 @@ We expect the `Workflow` and subsequently the `ApplicationGroup` to succeed.

> ⚠️ monitoring failed is a known, benign issue when submitting the `ApplicationGroup` multiple times.

Authenticate with the Keptn controller for the dashboard:

```shell
export KEPTN_API_TOKEN=$(kubectl get secret keptn-api-token -n orkestra -ojsonpath='{.data.keptn-api-token}' | base64 --decode)
export KEPTN_ENDPOINT=http://$(kubectl get svc api-gateway-nginx -n orkestra -ojsonpath='{.status.loadBalancer.ingress[0].ip}')/api
keptn auth --endpoint=$KEPTN_ENDPOINT --api-token=$KEPTN_API_TOKEN
```

Retrieve the dashboard URL, username, and password:

> The IPs and password will differ for each cluster.

```shell
keptn configure bridge --output
Your Keptn Bridge is available under: http://20.75.119.32/bridge

These are your credentials
 user: keptn
 password: UxUqN6XvWMpsrLqp6BeL
```

![Keptn Dashboard](./keptn-dashboard.png)

2. The *productpage* sidecar is configured to inject faults (return status code 500, 80% of the time) using the [`VirtualService`](https://istio.io/latest/docs/tasks/traffic-management/fault-injection/). We expect the `Workflow` and subsequently the `ApplicationGroup` to fail & rollback to the previous `ApplicationGroup` spec (i.e. Scenario 1).
@ -56,7 +82,7 @@ helm upgrade --install orkestra chart/orkestra -n orkestra --create-namespace --

> helm upgrade --install orkestra chart/orkestra -n orkestra --create-namespace --set=keptn.enabled=true --set=keptn-addons.enabled=true --set=keptn-addons.prometheus.namespace=$PROM_NS
> ```

### Scenario 1 : Successful Reconciliation
## Scenario 1 : Successful Reconciliation

The *bookinfo* application is deployed using the following Kubernetes manifests:

@ -72,7 +98,7 @@ kubectl create -f examples/keptn/bookinfo.yaml -n orkestra \

kubectl create -f examples/keptn/bookinfo-keptn-cm.yaml -n orkestra
```

### Scenario 2 : Failed Reconciliation leading to Rollback
## Scenario 2 : Failed Reconciliation leading to Rollback

```shell
kubectl apply -f examples/keptn/bookinfo-with-faults.yaml -n orkestra
@ -119,11 +145,11 @@ keptn configure bridge --output

### Trigger evaluation

```terminal
keptn create project hey --shipyard=./shipyard.yaml
keptn create service bookinfo --project=hey
keptn configure monitoring prometheus --project=hey --service=bookinfo
keptn add-resource --project=hey --service=bookinfo --resource=slo.yaml --resourceUri=slo.yaml --stage=dev
keptn add-resource --project=hey --service=bookinfo --resource=prometheus/sli.yaml --resourceUri=prometheus/sli.yaml --stage=dev
keptn add-resource --project=hey --service=bookinfo --resource=job/config.yaml --resourceUri=job/config.yaml --stage=dev
keptn trigger evaluation --project=hey --service=bookinfo --timeframe=5m --stage dev --start $(date -u +"%Y-%m-%dT%T")
keptn create project bookinfo --shipyard=./shipyard.yaml
keptn create service bookinfo --project=bookinfo
keptn configure monitoring prometheus --project=bookinfo --service=bookinfo
keptn add-resource --project=bookinfo --service=bookinfo --resource=slo.yaml --resourceUri=slo.yaml --stage=dev
keptn add-resource --project=bookinfo --service=bookinfo --resource=prometheus/sli.yaml --resourceUri=prometheus/sli.yaml --stage=dev
keptn add-resource --project=bookinfo --service=bookinfo --resource=job/config.yaml --resourceUri=job/config.yaml --stage=dev
keptn trigger evaluation --project=bookinfo --service=bookinfo --timeframe=5m --stage dev --start $(date -u +"%Y-%m-%dT%T")
``` -->
|
@ -7,7 +7,8 @@ In this example we deploy an application group consisting of two demo applicatio
|
|||
|
||||
## Prerequisites
|
||||
|
||||
- `kubectl`
|
||||
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
|
||||
- [helm](https://helm.sh/docs/intro/install/)
|
||||
|
||||
Install the `ApplicationGroup`:
|
||||
|
||||
|
|
|
@ -1,13 +0,0 @@

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: brigade-crb
  namespace: default
subjects:
  - kind: ServiceAccount
    namespace: default
    name: brigade-worker
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
@ -1,50 +0,0 @@

const { events, Job } = require('brigadier')

events.on("exec", (brigadeEvent, project) => {
    console.log("Running on exec")
    let test = new Job("test-runner")
    test.timeout = 1500000
    test.image = "ubuntu"
    test.shell = "bash"

    test.tasks = [
        "apt-get update -y",
        "apt-get upgrade -y",
        "apt-get install curl -y",
        "apt-get install sudo -y",
        "apt-get install git -y",
        "apt-get install make -y",
        "apt-get install wget -y",
        "apt-get install jq -y",
        "apt-get install sed -y",
        "curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.17/bin/linux/amd64/kubectl",
        "chmod +x ./kubectl",
        "sudo mv ./kubectl /usr/local/bin/kubectl",
        "echo installed kubectl",
        "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3",
        "chmod 700 get_helm.sh",
        "./get_helm.sh",
        "echo installed helm",
        "wget -c https://golang.org/dl/go1.16.3.linux-amd64.tar.gz",
        "tar -C /usr/local -xzf go1.16.3.linux-amd64.tar.gz",
        "export PATH=$PATH:/usr/local/go/bin",
        "go version",
        "curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.2/argo-linux-amd64.gz",
        "gunzip argo-linux-amd64.gz",
        "chmod +x argo-linux-amd64",
        "mv ./argo-linux-amd64 /usr/local/bin/argo",
        "argo version",
        "git clone https://github.com/Azure/orkestra",
        "echo cloned orkestra",
        "cd orkestra",
        "git checkout remotes/origin/danaya/addtesting",
        "kubectl apply -k ./config/crd",
        "helm install --wait orkestra chart/orkestra/ --namespace orkestra --create-namespace",
        "kubectl apply -f examples/simple/bookinfo.yaml",
        "sleep 30",
        "argo wait bookinfo -n orkestra",
        "make test-e2e"
    ]

    test.run()
})
@ -1,5 +0,0 @@

{
  "dependencies": {
    "@brigadecore/brigade-utils": "0.5.0"
  }
}
@ -1,37 +0,0 @@

FROM debian:latest

RUN apt-get update -y && \
    apt-get upgrade -y && \
    apt-get install sudo -y && \
    apt-get install curl -y && \
    apt-get install git -y && \
    apt-get install make -y && \
    apt-get install wget -y && \
    apt-get install jq -y && \
    # install kubectl
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.17/bin/linux/amd64/kubectl && \
    chmod +x ./kubectl && \
    sudo mv ./kubectl /usr/local/bin/kubectl && \
    # install golang
    wget -c https://golang.org/dl/go1.16.3.linux-amd64.tar.gz && \
    rm -rf /usr/local/go && \
    tar -C /usr/local -xzf go1.16.3.linux-amd64.tar.gz && \
    export PATH=$PATH:/usr/local/go/bin && \
    # install argo
    curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.2/argo-linux-amd64.gz && \
    gunzip argo-linux-amd64.gz && \
    chmod +x argo-linux-amd64 && \
    mv ./argo-linux-amd64 /usr/local/bin/argo && \
    # install helm
    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && \
    chmod 700 get_helm.sh && \
    ./get_helm.sh && \
    # install kubebuilder
    os=$(go env GOOS) && \
    arch=$(go env GOARCH) && \
    curl -L https://go.kubebuilder.io/dl/2.3.1/${os}/${arch} | tar -xz -C /tmp/ && \
    sudo mv /tmp/kubebuilder_2.3.1_${os}_${arch} /usr/local/kubebuilder && \
    export PATH=$PATH:/usr/local/kubebuilder/bin && \
    git clone https://github.com/Azure/orkestra && \
    cd orkestra && \
    make setup-kubebuilder
@ -1,125 +0,0 @@

# Testing Environment Setup

## Getting Started

### Prerequisites

* `Argo` - Argo workflow client (follow the instructions to install the binary from [releases](https://github.com/argoproj/argo/releases))
* `Brigade` - [brigade install guide](https://docs.brigade.sh/intro/install/)
* `brig` - [brig guide](https://docs.brigade.sh/topics/brig/)
* `kubectl` - [kubectl install guide](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
* A Kubernetes cluster

When testing I used a KinD cluster, but Brigade should work for Minikube as well. The Brigade docs have a section about Minikube and AKS, [found here](https://docs.brigade.sh/intro/install/#notes-for-minikube).

Before you begin, make sure Docker and your cluster are running.
The Dockerfile defines the image the brigade job runs on. At the moment this Docker image is not used; it should eventually be uploaded to DockerHub so our brigade.js can pull it, but for now brigade.js does the setup and grabs all the dependencies itself. After installing Brigade, you should see the following brigade pods running:

```
helm install brigade-server brigade/brigade
kubectl get pods -A
NAMESPACE   NAME                                             READY   STATUS      RESTARTS   AGE
default     brigade-server-brigade-api-7656489497-xczb7      1/1     Running     0          3m23s
default     brigade-server-brigade-ctrl-9d678c8bc-4h6nf      1/1     Running     0          3m23s
default     brigade-server-brigade-vacuum-1619128800-q24dh   0/1     Completed   0          34s
default     brigade-server-kashti-6ff4d6c99c-2dg87           1/1     Running     0          3m23s
```
Using brig we will create a sample project. For our testing we just use all the defaults. The brigade.js path for us would be `testing/brigade.js`.

```
brig project create
? VCS or no-VCS project? no-VCS
? Project Name mysampleproject
? Add secrets? No
? Secret for the Generic Gateway (alphanumeric characters only). Press Enter if you want it to be auto-generated
Auto-generated Generic Gateway Secret: FPK8O
? Default script ConfigMap name
? Upload a default brigade.js script <PATH_TO_BRIGADE.js>
? Default brigade.json config ConfigMap name
? Upload a default brigade.json config
? Configure advanced options No
```
Confirm your sample project was created:

```
brig project list
NAME              ID                                                            REPO
mysampleproject   brigade-a50ed8c1dbd7fa803b75f009f893b56bfd12347cadb1e404c12   github.com/brigadecore/empty-testbed
```

To give our brigade jobs the ability to run kubectl commands, we have to apply the binding.yml file to our cluster. This file grants the brigade jobs the permissions they need.

```
cd testing
kubectl apply -f binding.yml
```
We also want to run the argo server so we can view the workflow, and so our validation tests can check if the workflow pods were deployed successfully.

```
argo server
```

Now we can run our brigade.js file on our cluster to verify orkestra is working.

```
cd testing
brig run -f brigade.js mysampleproject
Event created. Waiting for worker pod named "brigade-worker-01f47mb971tp4f3k6erx8fxhrr".
Build: 01f47mb971tp4f3k6erx8fxhrr, Worker: brigade-worker-01f47mb971tp4f3k6erx8fxhrr
prestart: no dependencies file found
[brigade] brigade-worker version: 1.2.1
[brigade:k8s] Creating PVC named brigade-worker-01f47mb971tp4f3k6erx8fxhrr
Running on exec
[brigade:k8s] Creating secret test-runner-01f47mb971tp4f3k6erx8fxhrr
[brigade:k8s] Creating pod test-runner-01f47mb971tp4f3k6erx8fxhrr
[brigade:k8s] Timeout set at 1500000 milliseconds
[brigade:k8s] Pod not yet scheduled
[brigade:k8s] default/test-runner-01f47mb971tp4f3k6erx8fxhrr phase Pending
[brigade:k8s] default/test-runner-01f47mb971tp4f3k6erx8fxhrr phase Running
done
[brigade:k8s] default/test-runner-01f47mb971tp4f3k6erx8fxhrr phase Running
```
Upon completion of the test runner we should see:

```
[brigade:k8s] default/test-runner-01f47mb971tp4f3k6erx8fxhrr phase Running
done
[brigade:k8s] default/test-runner-01f47mb971tp4f3k6erx8fxhrr phase Succeeded
done
[brigade:app] after: default event handler fired
[brigade:app] beforeExit(2): destroying storage
[brigade:k8s] Destroying PVC named brigade-worker-01f47mb971tp4f3k6erx8fxhrr
```

To check the logs of the test runner and validations:

```
brig build logs --last --jobs
```

Any errors will be output to a default log file, `log.txt`, in the testing folder.

If you need to install the brigadecore-utils at runtime, add the `--config` flag to `brig run` with the brigade.json file:

```
brig run <PROJECT_NAME> --file brigade.js --config brigade.json
```

(Unnecessary since we are not using KindJob anymore) The KindJob object in the Brigade API requires you to allow mount hosts in the project. When creating your project with

```
brig project create
```

enter Y when asked for advanced options; this will let you set allow mount hosts to true.

## Known Issues

There is a Docker-related bug, tracked here: [issue 5593](https://github.com/docker/for-win/issues/5593), which causes time drift when using Docker for Windows. This prevents Debian images from properly installing packages since the system clock is wrong.

Quick fix: restart the computer or restart Docker.
@ -1,196 +0,0 @@

#!/bin/bash
ORKESTRA_RESOURCE_COUNT=6
AMBASSADOR_VERSION="6.6.0"
BAD_AMBASSADOR_VERSION="100.0.0"
LOG_FILE="OrkestraValidation.log"
OUTPUT_TO_LOG=0
g_successCount=0
g_failureCount=0

while getopts "f" flag; do
    case "${flag}" in
        f) OUTPUT_TO_LOG=1;;
    esac
done

function outputMessage {
    if [ "$OUTPUT_TO_LOG" -eq 1 ]; then
        echo $1 &>> $LOG_FILE
    else
        echo $1
    fi
}

function testSuiteMessage {
    if [ "$1" == "TEST_PASS" ]; then
        outputMessage "SUCCESS: $2"
        ((g_successCount++))
    elif [ "$1" == "TEST_FAIL" ]; then
        outputMessage "FAIL: $2"
        ((g_failureCount++))
    elif [ "$1" == "LOG" ]; then
        outputMessage "LOG: $2"
    fi
}

function summary {
    outputMessage "Success Cases: $g_successCount"
    outputMessage "Failure Cases: $g_failureCount"
}

function resetLogFile {
    > $LOG_FILE
}

function validateOrkestraDeployment {
    resources=$(kubectl get pods --namespace orkestra 2>> $LOG_FILE | grep -i -c running)
    if [ $resources -ne $ORKESTRA_RESOURCE_COUNT ]; then
        testSuiteMessage "TEST_FAIL" "No running orkestra resources. Currently $resources running resources. Expected $ORKESTRA_RESOURCE_COUNT"
    else
        testSuiteMessage "TEST_PASS" "orkestra resources are running"
    fi

    orkestraStatus=$(helm status orkestra -n orkestra 2>> $LOG_FILE | grep -c deployed)
    if [ $orkestraStatus -eq 1 ]; then
        testSuiteMessage "TEST_PASS" "orkestra deployed successfully"
    else
        testSuiteMessage "TEST_FAIL" "orkestra not deployed"
    fi
}

function validateBookInfoDeployment {
    ambassadorStatus=$(helm status ambassador -n ambassador 2>> $LOG_FILE | grep -c deployed)
    if [ $ambassadorStatus -eq 1 ]; then
        testSuiteMessage "TEST_PASS" "ambassador deployed successfully"
    else
        testSuiteMessage "TEST_FAIL" "ambassador not deployed"
    fi

    bookinfoReleaseNames=("details" "productpage" "ratings" "reviews" "bookinfo")

    for var in "${bookinfoReleaseNames[@]}"
    do
        deployedStatus=$(helm status $var -n bookinfo 2>> $LOG_FILE | grep -c deployed)
        if [ $deployedStatus -eq 1 ]; then
            testSuiteMessage "TEST_PASS" "$var deployed successfully"
        else
            testSuiteMessage "TEST_FAIL" "$var not deployed"
        fi
    done
}

function validateArgoWorkflow {
    bookinfoStatus=$(curl -s --request GET --url http://localhost:2746/api/v1/workflows/orkestra/bookinfo | grep -c "not found")
    if [ "$bookinfoStatus" -eq 1 ]; then
        testSuiteMessage "TEST_FAIL" "No argo workflow found for bookinfo"
    else
        argoNodes=($(curl -s --request GET --url http://localhost:2746/api/v1/workflows/orkestra/bookinfo | jq -c '.status.nodes[] | {id: .id, name: .name, displayName: .displayName, phase: .phase}'))

        requiredNodes=(
            "bookinfo"
            "bookinfo.bookinfo.ratings"
            "bookinfo.ambassador"
            "bookinfo.bookinfo.details"
            "bookinfo.bookinfo.productpage"
            "bookinfo.ambassador.ambassador"
            "bookinfo.bookinfo.reviews"
            "bookinfo.bookinfo.bookinfo"
            "bookinfo.bookinfo"
        )

        for node in "${requiredNodes[@]}"
        do
            status=$(curl -s --request GET --url http://localhost:2746/api/v1/workflows/orkestra/bookinfo | jq --arg node "$node" -r '.status.nodes[] | select(.name==$node) | .phase')
            if [ "$status" == "Succeeded" ]; then
                testSuiteMessage "TEST_PASS" "argo node: $node has succeeded"
            else
                testSuiteMessage "TEST_FAIL" "$node status: $status, Expected Succeeded"
            fi
        done
    fi
}

function validateApplicationGroup {
    applicationGroupJson=$(kubectl get applicationgroup bookinfo -o json | jq '.status')
    echo $applicationGroupJson
    groupCondition=$(echo "$applicationGroupJson" | jq -r '.conditions[] | select(.reason=="Succeeded") | select(.type=="Ready")')
    if [ -n "$groupCondition" ]; then
        testSuiteMessage "TEST_PASS" "ApplicationGroup status correct"
    else
        testSuiteMessage "TEST_FAIL" "ApplicationGroup status expected: (Succeeded, Ready)"
    fi

    applicationsJson=$(echo "$applicationGroupJson" | jq '.status')
    ambassadorReason=$(echo "$applicationsJson" | jq -r '.[0].conditions[] | select(.reason=="InstallSucceeded")')
    if [ -n "$ambassadorReason" ]; then
        testSuiteMessage "TEST_PASS" "Ambassador status correct"
    else
        testSuiteMessage "TEST_FAIL" "Ambassador status expected: InstallSucceeded"
    fi

    bookInfoReason=$(echo "$applicationsJson" | jq -r '.[1].conditions[] | select(.reason=="InstallSucceeded")')
    if [ -n "$bookInfoReason" ]; then
        testSuiteMessage "TEST_PASS" "BookInfo status correct"
    else
        testSuiteMessage "TEST_FAIL" "BookInfo status expected: InstallSucceeded"
    fi

    subcharts=("details" "productpage" "ratings" "reviews")
    for chart in "${subcharts[@]}"
    do
        applicationReason=$(echo "$applicationsJson" | jq -r --arg c "$chart" '.[1].subcharts[$c].conditions[] | select(.reason=="InstallSucceeded")')
        if [ -n "$applicationReason" ]; then
            testSuiteMessage "TEST_PASS" "$chart status correct"
        else
            testSuiteMessage "TEST_FAIL" "$chart status expected: InstallSucceeded"
        fi
    done
}

function applyFailureOnExistingDeployment {
    kubectl get deployments.apps orkestra -n orkestra -o json | jq '.spec.template.spec.containers[].args += ["--disable-remediation"]' | kubectl replace -f -
    kubectl get applicationgroup bookinfo -o json | jq --arg v "$BAD_AMBASSADOR_VERSION" '.spec.applications[0].spec.chart.version = $v' | kubectl replace -f -
}

function deployFailure {
    kubectl delete applicationgroup bookinfo
    sed "s/${AMBASSADOR_VERSION}/${BAD_AMBASSADOR_VERSION}/g" ./examples/simple/bookinfo.yaml | kubectl apply -f -
    sleep 5
}

function validateFailedApplicationGroup {
    applicationGroupJson=$(kubectl get applicationgroup bookinfo -o json | jq '.status')
    groupCondition=$(echo "$applicationGroupJson" | jq -r '.conditions[] | select(.reason=="Failed")')
    if [ -n "$groupCondition" ]; then
        testSuiteMessage "TEST_PASS" "ApplicationGroup status correct"
    else
        testSuiteMessage "TEST_FAIL" "ApplicationGroup status expected: (Failed)"
    fi
}

function runFailureScenarios {
    echo Running Failure Scenarios
    applyFailureOnExistingDeployment
    validateFailedApplicationGroup
    deployFailure
    validateFailedApplicationGroup
    summary
    echo DONE
}

function runValidation {
    if [ "$OUTPUT_TO_LOG" -eq 1 ]; then
        resetLogFile
    fi
    echo Running Validation
    validateOrkestraDeployment
    # validateBookInfoDeployment
    validateArgoWorkflow
    validateApplicationGroup
    summary
    echo DONE
}

runValidation
runFailureScenarios
@ -1,8 +0,0 @@

registries:
  bitnami:
    url: "https://charts.bitnami.com/bitnami"
    insecureSkipVerify: true

  staging:
    url: "http://localhost:8080"
    path: "charts/"