# Deploy development RP

## Prerequisites

- Your development environment is prepared according to the steps outlined in Prepare Your Dev Environment.
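  As an optional sanity check (a suggestion only, not part of the official setup), you can confirm that the tooling installed by the dev-environment guide is on your PATH before continuing:

  ```bash
  # Versions shown by these commands are illustrative; any reasonably recent toolchain should do.
  az --version | head -1
  go version
  make --version | head -1
  ```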
## Installing the extension

- Build the development `az aro` extension:

  ```bash
  make az
  ```

- Verify that the ARO extension path is in your `az` configuration:

  ```bash
  grep -q 'dev_sources' ~/.azure/config || cat >>~/.azure/config <<EOF
  [extension]
  dev_sources = $PWD/python
  EOF
  ```

- Verify the ARO extension is registered:

  ```bash
  az -v
  ...
  Extensions:
  aro                         0.4.0 (dev) /path/to/rp/python/az/aro
  ...
  Development extension sources:
      /path/to/rp/python
  ...
  ```

  Note: you will be able to update your development `az aro` extension in the future by simply running `git pull`.
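  Whether freshly built or just updated, a further optional check that the dev extension actually loads is to ask the CLI for its help text, which should print the `az aro` subcommands without errors:

  ```bash
  az aro --help
  ```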
## Prepare your environment

- If you don't have access to a shared development environment and secrets, follow prepare a shared RP development environment.

- Set SECRET_SA_ACCOUNT_NAME to the name of the storage account containing your shared development environment secrets and save them in `secrets`:

  ```bash
  SECRET_SA_ACCOUNT_NAME=rharosecretsdev make secrets
  ```
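  If the command succeeds, the repository should now contain a `secrets` directory. Listing it is a simple way to confirm the download worked; the exact contents depend on your shared environment, but the VPN configuration files referenced later (such as `vpn-$LOCATION.ovpn`) should be present:

  ```bash
  ls secrets/
  ```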
- Copy, edit (if necessary) and source your environment file. The required environment variable configuration is documented immediately below:

  ```bash
  cp env.example env
  vi env
  . ./env
  ```

  - LOCATION: Location of the shared RP development environment (default: `eastus`).
  - RP_MODE: Set to `development` to use a development RP running at https://localhost:8443/.
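  For illustration only, a minimal `env` based on the variables documented above might look like the following (the values are placeholders; your copy of `env.example` may set additional variables):

  ```bash
  # env - sourced before running the RP; values here are examples only
  export LOCATION=eastus
  export RP_MODE=development
  ```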
- Create your own RP database:

  ```bash
  az deployment group create \
    -g "$RESOURCEGROUP" \
    -n "databases-development-$USER" \
    --template-file pkg/deploy/assets/databases-development.json \
    --parameters \
      "databaseAccountName=$DATABASE_ACCOUNT_NAME" \
      "databaseName=$DATABASE_NAME" \
    >/dev/null
  ```
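  To confirm the deployment actually created the database, an optional check (assuming the standard `az cosmosdb` command group shipped with current Azure CLI releases) is to query the Cosmos DB account directly:

  ```bash
  # Prints the resource ID of the newly created database if it exists
  az cosmosdb sql database show \
    --account-name "$DATABASE_ACCOUNT_NAME" \
    --resource-group "$RESOURCEGROUP" \
    --name "$DATABASE_NAME" \
    --query id -o tsv
  ```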
## Run the RP and create a cluster

- Source your environment file:

  ```bash
  . ./env
  ```

- Run the RP:

  ```bash
  make runlocal-rp
  ```
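  Once the RP is running, a quick way to check that it is listening (a suggestion only, assuming the local RP serves a readiness probe at `/healthz/ready` on the same https://localhost:8443/ endpoint used below) is:

  ```bash
  # -k skips TLS verification for the local development certificate
  curl -k https://localhost:8443/healthz/ready
  ```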
- To create a cluster, EITHER follow the instructions in Create, access, and manage an Azure Red Hat OpenShift 4.3 Cluster. Note that as long as the RP_MODE environment variable is set to `development`, the `az aro` client will connect to your local RP.

  OR use the create utility:

  ```bash
  CLUSTER=<cluster-name> go run ./hack/cluster create
  ```

  Later the cluster can be deleted as follows:

  ```bash
  CLUSTER=<cluster-name> go run ./hack/cluster delete
  ```
  By default, a public cluster will be created. To create a private cluster, set the `PRIVATE_CLUSTER` environment variable to `true` prior to creation. Internet access from the cluster can also be restricted by setting the `NO_INTERNET` environment variable to `true`. A combined example is sketched below.
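  For example, to create a private cluster with internet access restricted, a sketch combining the environment variables described above with the create utility would be:

  ```bash
  # Both variables must be set prior to creation, as described above
  export PRIVATE_CLUSTER=true
  export NO_INTERNET=true
  CLUSTER=<cluster-name> go run ./hack/cluster create
  ```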
- The following additional RP endpoints are available but not exposed via `az aro`:

  - Delete a subscription, cascading deletion to all its clusters:

    ```bash
    curl -k -X PUT \
      -H 'Content-Type: application/json' \
      -d '{"state": "Deleted", "properties": {"tenantId": "'"$AZURE_TENANT_ID"'"}}' \
      "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID?api-version=2.0"
    ```
  - List operations:

    ```bash
    curl -k \
      "https://localhost:8443/providers/Microsoft.RedHatOpenShift/operations?api-version=2020-04-30"
    ```
- View RP logs in a friendly format:

  ```bash
  journalctl _COMM=aro -o json --since "15 min ago" -f | jq -r 'select (.COMPONENT != null and (.COMPONENT | contains("access"))|not) | .MESSAGE'
  ```
- Make Admin-Action API call(s) to a running local-rp:

  ```bash
  export CLUSTER=<cluster-name>
  export AZURE_SUBSCRIPTION_ID=<subscription-id>
  export RESOURCEGROUP=<resource-group-name>
  ```

  [OR]

  ```bash
  . ./env
  ```
  - Perform AdminUpdate on a dev cluster:

    ```bash
    curl -X PATCH -k "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER?api-version=admin" --header "Content-Type: application/json" -d "{}"
    ```
  - Get Cluster details of a dev cluster:

    ```bash
    curl -X GET -k "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER?api-version=admin" --header "Content-Type: application/json" -d "{}"
    ```
  - Get SerialConsole logs of a VM of a dev cluster:

    ```bash
    VMNAME="aro-cluster-qplnw-master-0"
    curl -X GET -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/serialconsole?vmName=$VMNAME" --header "Content-Type: application/json" -d "{}"
    ```
  - List Clusters of a local-rp:

    ```bash
    curl -X GET -k "https://localhost:8443/admin/providers/microsoft.redhatopenshift/openshiftclusters"
    ```
  - List cluster Azure Resources of a dev cluster:

    ```bash
    curl -X GET -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/resources"
    ```
  - Perform Cluster Upgrade on a dev cluster:

    ```bash
    curl -X POST -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/upgrade"
    ```
  - Get container logs from an OpenShift pod in a cluster:

    ```bash
    NAMESPACE=<namespace-name>
    POD=<pod-name>
    CONTAINER=<container-name>
    curl -X GET -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/kubernetespodlogs?podname=$POD&namespace=$NAMESPACE&container=$CONTAINER"
    ```
## Debugging OpenShift Cluster

- SSH to the bootstrap node:

  NOTE: If you have a password-based `sudo` command, you must first authenticate before running `sudo` in the background.

  ```bash
  sudo openvpn secrets/vpn-$LOCATION.ovpn &
  CLUSTER=cluster hack/ssh-agent.sh bootstrap
  ```
- Get an admin kubeconfig:

  ```bash
  CLUSTER=cluster make admin.kubeconfig
  export KUBECONFIG=admin.kubeconfig
  ```
- "SSH" to a cluster node:

  - Get the admin kubeconfig and `export KUBECONFIG` as detailed above.
  - Run the ssh-agent.sh script. This takes as its argument the name of the NIC attached to the VM you are trying to ssh to.
  - Given the following nodes, these commands would be used to connect to the respective node:

    ```bash
    $ oc get nodes
    NAME                                  STATUS   ROLES    AGE   VERSION
    aro-dev-abc123-master-0               Ready    master   47h   v1.19.0+2f3101c
    aro-dev-abc123-master-1               Ready    master   47h   v1.19.0+2f3101c
    aro-dev-abc123-master-2               Ready    master   47h   v1.19.0+2f3101c
    aro-dev-abc123-worker-eastus1-2s5rb   Ready    worker   47h   v1.19.0+2f3101c
    aro-dev-abc123-worker-eastus2-php82   Ready    worker   47h   v1.19.0+2f3101c
    aro-dev-abc123-worker-eastus3-cbqs2   Ready    worker   47h   v1.19.0+2f3101c

    CLUSTER=cluster hack/ssh-agent.sh master0                               # master node aro-dev-abc123-master-0
    CLUSTER=cluster hack/ssh-agent.sh aro-dev-abc123-worker-eastus1-2s5rb   # worker aro-dev-abc123-worker-eastus1-2s5rb
    CLUSTER=cluster hack/ssh-agent.sh eastus1                               # worker aro-dev-abc123-worker-eastus1-2s5rb
    CLUSTER=cluster hack/ssh-agent.sh 2s5rb                                 # worker aro-dev-abc123-worker-eastus1-2s5rb
    CLUSTER=cluster hack/ssh-agent.sh bootstrap                             # the bootstrap node used to provision the cluster
    ```
## Debugging AKS Cluster

- Connect to the VPN:

  To access the cluster with oc / kubectl, or to SSH into cluster nodes, you need to connect to the VPN first.

  NOTE: If you have a password-based `sudo` command, you must first authenticate before running `sudo` in the background.

  ```bash
  sudo openvpn secrets/vpn-aks-$LOCATION.ovpn &
  ```
- Access the cluster via API (oc / kubectl):

  ```bash
  make aks.kubeconfig
  export KUBECONFIG=aks.kubeconfig

  $ oc get nodes
  NAME                                 STATUS   ROLES   AGE   VERSION
  aks-systempool-99744725-vmss000000   Ready    agent   9h    v1.23.5
  aks-systempool-99744725-vmss000001   Ready    agent   9h    v1.23.5
  aks-systempool-99744725-vmss000002   Ready    agent   9h    v1.23.5
  ```
- "SSH" into a cluster node:

  - Run the ssh-aks.sh script, specifying the cluster name and the node number of the VM you are trying to ssh to.

    ```bash
    hack/ssh-aks.sh aro-aks-cluster 0    # The first VM node in 'aro-aks-cluster'
    hack/ssh-aks.sh aro-aks-cluster 1    # The second VM node in 'aro-aks-cluster'
    hack/ssh-aks.sh aro-aks-cluster 2    # The third VM node in 'aro-aks-cluster'
    ```
- Access via Azure Portal:

  Because the AKS cluster is private, you need to be connected to the VPN in order to view certain AKS cluster properties, as the portal UI interrogates k8s via the VPN.
## Metrics

To run fake metrics socket:

```bash
go run ./hack/monitor
```