Azure Red Hat OpenShift Operator

Responsibilities

Decentralizing service monitoring

Moving a proportion of the monitoring effort to the edge (the clusters themselves) gives the central monitoring service headroom, at the potential cost of increased management complexity. It helps avoid bloat and complexity in central monitoring and enables additional, more complex monitoring use cases. Note that not all monitoring can be decentralized.

In all cases below, status.Conditions will be set accordingly (see the example after the list).

  • periodically check for outbound internet connectivity from both the master and worker nodes.
  • periodically validate the cluster Service Principal permissions.
  • [TODO] Enumerate daemonset statuses, pod statuses, etc. We currently log diagnostic information for these checks in the service logs; moving the checks to the edge would turn these into cluster logs instead, which is preferable.
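These conditions can be inspected directly on the ARO Cluster resource. The command below is a sketch assuming a working admin.kubeconfig; the exact condition names (for example InternetReachableFromMaster) may differ between operator versions.

# inspect the conditions reported by the operator
oc get clusters.aro.openshift.io/cluster -o json | jq .status.conditions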

Automatic service remediation

There will be use cases where we may want to remediate end user decisions automatically. Carrying out remediation locally is advantageous because it is likely to be simpler, more reliable, and faster to take effect.

Remediations in place:

  • periodically reset NSGs in the master and worker subnets to the defaults (controlled by the reconcileNSGs feature flag; see the example below for how to inspect the flag)
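One way to check whether this remediation is enabled is to look at the flags carried in the Cluster resource spec. This is only a sketch: the exact flag key for NSG reconciliation is defined in the operator code and not reproduced here, so inspect the full spec rather than a specific field.

# dump the Cluster resource spec and look for the NSG reconciliation flag
oc get clusters.aro.openshift.io/cluster -o json | jq .spec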

End user warnings

Decentralizing ARO customization management

A cluster agent provides a convenient, single place to handle this use case; many post-install configuration steps should probably move here.

  • monitor and repair mdsd as needed
  • set the alertmanager webhook (see the example below for one way to verify the resulting configuration)
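As a sketch of how to verify the webhook setting: the default OpenShift monitoring stack keeps the Alertmanager configuration in the alertmanager-main secret in openshift-monitoring. The secret and namespace names are assumptions about a stock cluster, not something defined in this repository.

# decode the Alertmanager configuration to check the webhook receiver
# (secret/namespace names assume the default OpenShift monitoring stack)
oc -n openshift-monitoring get secret alertmanager-main -o template='{{index .data "alertmanager.yaml"}}' | base64 -d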

Controllers and Deployment

The full list of operator controllers with descriptions can be found in the README at the root of the repository.

The static pod resources can be found at pkg/operator/deploy/staticresources. The deploy operation kicks off two deployments in the openshift-azure-operator namespace, one for master and one for worker. The aro-operator-master deployment runs all controllers, while the aro-operator-worker deployment runs only the internet checker in the worker subnet.
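To see the result of a deploy, assuming admin access to the cluster:

# one deployment per role is created in openshift-azure-operator
oc -n openshift-azure-operator get deployments
# expected: aro-operator-master (all controllers) and aro-operator-worker (internet checker only)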

Developer documentation

How to run a pre-built operator image

Add the following to your "env" file before running the RP:

export ARO_IMAGE=arointsvc.azurecr.io/aro:latest

How to run the operator locally (out of cluster)

Make sure KUBECONFIG is set:

make admin.kubeconfig
export KUBECONFIG=$(pwd)/admin.kubeconfig

If you are using a private cluster, you need to connect to the VPN for your cluster's region. For example, for eastus:

sudo openvpn --config secrets/vpn-eastus.ovpn
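Once the VPN is up, it is worth confirming the private API server is reachable before starting the operator; any simple read with the admin kubeconfig will do, for example:

# confirm the API server is reachable through the VPN
oc get clusterversion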

Then do:

oc scale -n openshift-azure-operator deployment/aro-operator-master --replicas=0
make generate
go run -tags aro ./cmd/aro operator master
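To confirm that the in-cluster copy is scaled down and that your local process is doing the work, something like the following should suffice:

# the in-cluster master deployment should report 0/0 replicas
oc -n openshift-azure-operator get deployment/aro-operator-master
# conditions should continue to be refreshed by the locally running operator
oc get clusters.aro.openshift.io/cluster -o json | jq .status.conditions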

How to create & publish ARO Operator image to ACR/Quay

  1. Login to AZ
az login
  2. Install Docker according to the steps outlined in Prepare Your Dev Environment

  3. Publish Image to ACR

    • Pre-requisite:

      ACR created in Azure Portal with Name ${USER}aro
      2GB+ of Free RAM
      
    • Setup environment variables

      export DST_ACR_NAME=${USER}aro
      export DST_AUTH=$(echo -n '00000000-0000-0000-0000-000000000000:'$(az acr login -n ${DST_ACR_NAME} --expose-token | jq -r .accessToken) | base64 -w0)
      
    • Login to the Azure Container Registry

      docker login -u 00000000-0000-0000-0000-000000000000 -p "$(echo $DST_AUTH | base64 -d | cut -d':' -f2)" "${DST_ACR_NAME}.azurecr.io"
      
  4. Publish Image to Quay

    • Pre-requisite:

      Quay account with repository created
      2GB+ of Free RAM
      
    • Setup mirroring environment variables

      export DST_QUAY=<quay-user-name>/<repository-name>
      
    • Login to the Quay Registry

      docker login quay.io/${DST_QUAY}
      
  5. Build and Push ARO Operator Image

make publish-image-aro-multistage
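After the build finishes, one way to sanity-check the push to ACR, assuming the image was published to a repository named aro (the repository name is an assumption; adjust it to whatever the publish target used):

# list the tags pushed to the ACR (repository name 'aro' is an assumption)
az acr repository show-tags --name ${DST_ACR_NAME} --repository aro --output table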

How to run a custom operator image

Add the following to your "env" file before running the RP:

export ARO_IMAGE=quay.io/asalkeld/aos-init:latest #(change to yours)
make publish-image-aro

#Then run an update
curl -X PATCH -k "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER?api-version=admin" --header "Content-Type: application/json" -d "{}"

#check on the deployment
oc -n openshift-azure-operator get all
oc -n openshift-azure-operator get clusters.aro.openshift.io/cluster -o yaml
oc -n openshift-azure-operator logs deployment.apps/aro-operator-master
oc -n openshift-config get secrets/pull-secret -o template='{{index .data ".dockerconfigjson"}}' | base64 -d

How to run operator e2e tests

go test ./test/e2e -v -ginkgo.v -ginkgo.focus="ARO Operator" -tags e2e