Azure Red Hat OpenShift Operator

Responsibilities

Decentralizing service monitoring

Moving a proportion of the monitoring effort to the edge gives the central monitoring stack headroom (with the corresponding potential disadvantage of increased management complexity). It helps avoid bloat and complexity in central monitoring, and it enables additional, more complex monitoring use cases. Note that not all monitoring can be decentralized.

In all of the cases below, the operator sets status.Conditions on its cluster resource; an example of inspecting these conditions follows the list.

  • Periodically check for outbound internet connectivity from both the master and worker nodes.
  • Periodically validate the cluster service principal permissions.
  • [TODO] Enumerate daemonset statuses, pod statuses, etc. We currently log diagnostic information associated with these checks in the service logs; moving the checks to the edge will turn them into cluster logs, which is preferable.
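
For example, the conditions can be read straight off the cluster resource (the same resource name used in the developer commands further down); a minimal check, assuming admin access to a cluster:

oc -n openshift-azure-operator get clusters.aro.openshift.io/cluster -o jsonpath='{.status.conditions}'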

Automatic service remediation

There will be use cases where we want to remediate end user decisions automatically. Carrying out remediation locally is advantageous because it is likely to be simpler, more reliable, and quicker to take effect than remediating centrally.

End user warnings

Decentralizing ARO customization management

A cluster agent provides a single, well-defined place to handle this use case; many post-install configurations should probably move here. A way to spot-check the current items is sketched after the list below.

  • Monitor and repair mdsd as needed.
  • Set the alertmanager webhook.
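
A quick spot-check for both items; the namespace and object names here (the mdsd daemonset in openshift-azure-logging, the alertmanager-main secret in openshift-monitoring) are assumptions for illustration rather than documented interfaces:

# confirm the mdsd daemonset exists and its pods are up to date
oc -n openshift-azure-logging get daemonset mdsd

# dump the alertmanager configuration and look for the webhook receiver
oc -n openshift-monitoring get secret alertmanager-main -o template='{{index .data "alertmanager.yaml"}}' | base64 -d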

Developer documentation

How to run a pre-built operator image

Add the following to your "env" before running the RP:

export ARO_IMAGE=arointsvc.azurecr.io/aro:latest
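
The RP reads ARO_IMAGE from its environment and deploys that image as the operator. After the next cluster create or admin update you can confirm which image the operator deployment is actually running; this is plain oc usage against the deployment name used elsewhere in this document:

oc -n openshift-azure-operator get deployment.apps/aro-operator-master -o jsonpath='{.spec.template.spec.containers[0].image}'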

How to run the operator locally (out of cluster)

Make sure KUBECONFIG is set:

make admin.kubeconfig
export KUBECONFIG=$(pwd)/admin.kubeconfig
oc scale -n openshift-azure-operator deployment/aro-operator-master --replicas=0
make generate
go run ./cmd/aro operator master
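
Scaling the in-cluster deployment to zero keeps it from conflicting with the copy you run locally; when you are finished, scale it back up:

oc scale -n openshift-azure-operator deployment/aro-operator-master --replicas=1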

How to run a custom operator image

Add the following to your "env" before running the RP:

export ARO_IMAGE=quay.io/asalkeld/aos-init:latest #(change to yours)
make publish-image-aro

# Then trigger an admin update so the RP redeploys the operator
curl -X PATCH -k "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER?api-version=admin" --header "Content-Type: application/json" -d "{}"

# Check on the deployment
oc -n openshift-azure-operator get all
oc -n openshift-azure-operator get clusters.aro.openshift.io/cluster -o yaml
oc -n openshift-azure-operator logs deployment.apps/aro-operator-master
oc -n openshift-config get secrets/pull-secret -o template='{{index .data ".dockerconfigjson"}}' | base64 -d
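
If you want to confirm that the patched deployment finished rolling out, a standard rollout status check works; this is plain oc usage rather than anything operator-specific:

oc -n openshift-azure-operator rollout status deployment.apps/aro-operator-master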

How to run operator e2e tests

go test ./test/e2e -v -ginkgo.v -ginkgo.focus="ARO Operator" -tags e2e
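
To iterate on a single spec, narrow the ginkgo focus regex. The spec name below is illustrative only; substitute the describe text of the test you actually want to run:

go test ./test/e2e -v -ginkgo.v -ginkgo.focus="ARO Operator - Internet checker" -tags e2e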