* bump cluster-credentials-operator
* add Get to roledefinitions client
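A minimal sketch of what an interface-wrapped roledefinitions client with Get might look like, following the thin-wrapper pattern used for other Azure SDK clients; the package and constructor names here are illustrative, not the exact ARO-RP code:

```go
package roledefinitions

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/authorization/mgmt/2015-07-01/authorization"
	"github.com/Azure/go-autorest/autorest"
)

// RoleDefinitionsClient exposes only the SDK methods callers need,
// so it can be mocked in tests.
type RoleDefinitionsClient interface {
	Get(ctx context.Context, scope, roleDefinitionID string) (authorization.RoleDefinition, error)
}

type roleDefinitionsClient struct {
	authorization.RoleDefinitionsClient // embedding promotes Get
}

// NewRoleDefinitionsClient wires an authorized SDK client up behind
// the interface (illustrative constructor).
func NewRoleDefinitionsClient(subscriptionID string, authorizer autorest.Authorizer) RoleDefinitionsClient {
	c := authorization.NewRoleDefinitionsClient(subscriptionID)
	c.Authorizer = authorizer
	return &roleDefinitionsClient{RoleDefinitionsClient: c}
}
```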
* check script
* pipeline
* use parameters
* change target-version help message
* vendor
* fix role.go
* use candidate channel
* use operator names in RP-Config
* modify the output format
* changed to use quay.io API
* add some comments
* remove pipeline resource
* change role definition names
* Update openshift/api to release-4.12
* Add machinev1 resources to scheme
* Add CPMSDeactivatorEnabled flag
* Add CPMS Deactivator operator controller
* Add controlplanemachinesets to system:aro-sre ClusterRole
* Use better naming convention for CPMS controller flag
* Change debug log messages to info
* Make CPMS controller exit early if clusterversion < 4.12
* Only setup CPMS controller on clusters with machinev1 API
This is necessary because Watching the CPMS resource fails on clusters
that do not support the Machine V1 API (OCP <= 4.11), which in turn makes
controller setup fail. Since those clusters have no CPMS resource to
manage, we can safely skip running this controller on them.
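A sketch of the kind of API-discovery gate this describes; the function name and error handling are assumptions, not the exact ARO-RP code:

```go
package controllers

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// machineV1Available reports whether the cluster serves the
// machine.openshift.io/v1 API group (OCP 4.12+). On older clusters a
// Watch on ControlPlaneMachineSet would fail, so the caller skips
// CPMS controller setup entirely when this returns false.
func machineV1Available(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	resources, err := dc.ServerResourcesForGroupVersion("machine.openshift.io/v1")
	if apierrors.IsNotFound(err) {
		return false, nil // group/version not served: OCP <= 4.11
	}
	if err != nil {
		return false, err
	}
	return len(resources.APIResources) > 0, nil
}
```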
* Fix CPMS controller name
* Remove dependencies on console-operator and cluster-api-azure
* remove the forks that we don't use
* go mod updates
* go mod vendor
* stop relying on the providerspec being registered in tests
* cleanups
* update go sum
* test coverage fixes
* Update the cluster authorizer to use a DefaultAzureCredential
* Update the ARO operator to set and use DefaultAzureCredential via env vars
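As a sketch of the pattern rather than the ARO-RP code itself: `azidentity.NewDefaultAzureCredential` walks a chain of credential sources, with the environment credential tried first, so once the operator deployment carries `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` (or a federated token file) in its environment, authentication needs no further code changes:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

func main() {
	// The env vars set on the deployment are picked up automatically
	// by the first link in the default credential chain.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	tok, err := cred.GetToken(context.Background(), policy.TokenRequestOptions{
		Scopes: []string{"https://management.azure.com/.default"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token expires:", tok.ExpiresOn)
}
```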
* Add a CredentialsRequest to the ARO operator deployment
* Restart the ARO operator upon `az aro update`
* Removed the now-unused AzCredentials function
* Changed ARO operator deployment wait time during `az aro update` from 20 minutes to 5 minutes
* Refactor CliWithApply to generalize to different object types
* Updated Restart in pkg/util/kubernetes to use server-side apply
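A sketch of a server-side-apply restart, assuming the usual restartedAt-annotation technique; the field manager name is hypothetical:

```go
package restart

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	applyappsv1 "k8s.io/client-go/applyconfigurations/apps/v1"
	applycorev1 "k8s.io/client-go/applyconfigurations/core/v1"
	"k8s.io/client-go/kubernetes"
)

// Restart triggers a rolling restart with a single server-side apply:
// this field manager owns only the one pod-template annotation, so
// fields written by other managers are left untouched.
func Restart(ctx context.Context, cli kubernetes.Interface, namespace, name string) (*appsv1.Deployment, error) {
	d := applyappsv1.Deployment(name, namespace).
		WithSpec(applyappsv1.DeploymentSpec().
			WithTemplate(applycorev1.PodTemplateSpec().
				WithAnnotations(map[string]string{
					"kubectl.kubernetes.io/restartedAt": time.Now().UTC().Format(time.RFC3339),
				})))
	return cli.AppsV1().Deployments(namespace).Apply(ctx, d, metav1.ApplyOptions{
		FieldManager: "aro-restart", // hypothetical manager name
		Force:        true,
	})
}
```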
* Updated Restart in pkg/operator/deploy to attempt a restart of every
  deployment passed in before returning any error
* E2E test for ARO operator master deployment's restart upon cluster update
* Wait for the ARO operator's CredentialsRequest to be reconciled before
restarting
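A sketch of such a wait with the dynamic client; `status.provisioned` is the field cloud-credential-operator sets once the requested Secret exists, while the poll interval and helper name are assumptions:

```go
package credwait

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
)

var credentialsRequestGVR = schema.GroupVersionResource{
	Group:    "cloudcredential.openshift.io",
	Version:  "v1",
	Resource: "credentialsrequests",
}

// WaitForCredentialsRequest blocks until the named CredentialsRequest
// reports status.provisioned=true, i.e. cloud-credential-operator has
// minted the Secret the operator needs before being restarted.
func WaitForCredentialsRequest(ctx context.Context, cli dynamic.Interface, namespace, name string) error {
	return wait.PollImmediateUntilWithContext(ctx, 10*time.Second, func(ctx context.Context) (bool, error) {
		cr, err := cli.Resource(credentialsRequestGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return false, nil // not created yet; keep polling
		}
		if err != nil {
			return false, err
		}
		provisioned, found, err := unstructured.NestedBool(cr.Object, "status", "provisioned")
		return found && provisioned, err
	})
}
```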
Hive needs to be vendored at the same commit as is deployed in ARO. One
reason, described in the linked card, is that API changes can lead to
unintended edits during round-trip Get()/Update() flows (see the toy
illustration after this note).
ARO-3801
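A toy illustration of the hazard (not Hive code): when the vendored structs are older than the deployed API, fields the server returns are silently dropped on decode, so writing the object back amounts to an unintended edit:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// specOld stands in for a vendored type that predates a field the
// deployed Hive version serves.
type specOld struct {
	Existing string `json:"existing"`
}

func main() {
	// What the API server returns (the deployed Hive is newer).
	served := []byte(`{"existing":"x","newField":"y"}`)

	var s specOld
	_ = json.Unmarshal(served, &s) // newField is silently dropped

	roundTripped, _ := json.Marshal(s)
	// Prints {"existing":"x"} — an Update() with this body would
	// strip newField from the live object.
	fmt.Println(string(roundTripped))
}
```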
* ARO Cluster Operator Status derives the Cluster Operator's Available/Progressing/Degraded conditions from the state of its controllers
* Implements controller status conditions on the node operator controller
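A minimal sketch of the aggregation idea using the openshift/api config types; the per-controller condition names and the every-controller-must-be-available rule are assumptions:

```go
package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// aggregateAvailable derives the ClusterOperator's Available condition
// from per-controller "<Name>ControllerAvailable" conditions: Available
// only if every controller reports Available=True. Progressing and
// Degraded would be OR-aggregated analogously.
func aggregateAvailable(controllerConds []configv1.ClusterOperatorStatusCondition) configv1.ClusterOperatorStatusCondition {
	status := configv1.ConditionTrue
	for _, c := range controllerConds {
		if c.Status != configv1.ConditionTrue {
			status = configv1.ConditionFalse
			break
		}
	}
	return configv1.ClusterOperatorStatusCondition{
		Type:               configv1.OperatorAvailable,
		Status:             status,
		LastTransitionTime: metav1.Now(),
	}
}

func main() {
	conds := []configv1.ClusterOperatorStatusCondition{
		{Type: "NodeControllerAvailable", Status: configv1.ConditionTrue},
		{Type: "CPMSControllerAvailable", Status: configv1.ConditionFalse},
	}
	fmt.Println(aggregateAvailable(conds).Status) // False
}
```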
* Updates vendoring docs and scripts
* Makes use of `go mod tidy -compat=1.17`:
we do not have to be compatible with prior versions.
Saves a bit of headache when dealing with dependencies.
* Makes `hack/update-go-module-dependencies.sh` ignore `github.com/openshift/hive`:
it is not part of OCP dependencies and is not following `release-4.Y` branching.
We want to update it separately.
* Vendoring: update Hive to the latest version
* make generate