Deploy an Entire RP Development Service

Prerequisites

  1. Your development environment is prepared according to the steps outlined in Prepare Your Dev Environment

Deploying an int-like Development RP

  1. Fetch the most up-to-date secrets with make secrets
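
    A minimal invocation (assuming the shared secrets storage account is named rharosecretsdev; adjust to the account your team uses):

    SECRET_SA_ACCOUNT_NAME=rharosecretsdev make secrets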

  2. Copy and source your environment file.

    cp env.example env
    vi env
    . ./env
    
  3. Generate the development RP configuration

    make dev-config.yaml
    
  4. Create a full environment file, which overrides some default ./env options when sourced

    cp env-int.example env-int
    vi env-int
    . ./env-int
    
  5. Run make deploy

    NOTE: The first run will fail because the required certificates and mirrored container images are not yet in place; the steps below add both before re-running make deploy.

    NOTE: If the deployment fails with InvalidResourceReference because the RP Network Security Groups are not found, delete the gateway predeploy deployment and re-run, as sketched below.
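
    A sketch of that recovery (the deployment name shown is illustrative; list the deployments first to find the real one):

    az deployment group list -g $USER-gwy-$LOCATION --query "[].name" -o tsv
    az deployment group delete -g $USER-gwy-$LOCATION -n gateway-production-predeploy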

  6. Mirror the OpenShift images to your new ACR

    NOTE: Running the mirroring through a VM in Azure rather than a local workstation is recommended for better performance.

    1. Set up the mirroring environment variables

      export DST_ACR_NAME=${USER}aro
      export SRC_AUTH_QUAY=$(echo $USER_PULL_SECRET | jq -r '.auths."quay.io".auth')
      export SRC_AUTH_REDHAT=$(echo $USER_PULL_SECRET | jq -r '.auths."registry.redhat.io".auth')
      export DST_AUTH=$(echo -n '00000000-0000-0000-0000-000000000000:'$(az acr login -n ${DST_ACR_NAME} --expose-token | jq -r .accessToken) | base64 -w0)
      
    2. Log in to the Azure Container Registry

      docker login -u 00000000-0000-0000-0000-000000000000 -p "$(echo $DST_AUTH | base64 -d | cut -d':' -f2)" "${DST_ACR_NAME}.azurecr.io"
      
    3. Run the mirroring

      The latest argument mirrors the version defined by the InstallStream in pkg/util/version/const.go.

      go run -tags aro ./cmd/aro mirror latest
      
    4. Push the ARO and Fluentbit images to your ACR

      If you run this step from a VM other than your workstation, make sure the image is built from the same commit as the one from which make deploy is run.

      Due to security compliance requirements, the make publish-image-* targets pull from arointsvc.azurecr.io. You can either authenticate to this registry using az acr login --name arointsvc to pull the image, or modify the $RP_IMAGE_ACR environment variable locally to point to registry.access.redhat.com instead; both options are sketched after the commands below.

      make publish-image-aro-multistage
      make publish-image-fluentbit
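
      A sketch of the two options from the note above (assumption: the build reads RP_IMAGE_ACR from your environment):

      # Option 1: authenticate to the int ACR so the base image can be pulled
      az acr login --name arointsvc

      # Option 2: point the build at the public Red Hat registry instead
      export RP_IMAGE_ACR=registry.access.redhat.com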
      
  7. Update the DNS Child Domains

    export PARENT_DOMAIN_NAME=osadev.cloud
    export PARENT_DOMAIN_RESOURCEGROUP=dns
    export GLOBAL_RESOURCEGROUP=$USER-global
    
    for DOMAIN_NAME in $USER-clusters.$PARENT_DOMAIN_NAME $USER-rp.$PARENT_DOMAIN_NAME; do
        CHILD_DOMAIN_PREFIX="$(cut -d. -f1 <<<$DOMAIN_NAME)"
        echo "########## Creating NS record to DNS Zone $CHILD_DOMAIN_PREFIX ##########"
        az network dns record-set ns create \
            --resource-group "$PARENT_DOMAIN_RESOURCEGROUP" \
            --zone "$PARENT_DOMAIN_NAME" \
            --name "$CHILD_DOMAIN_PREFIX" >/dev/null
        for ns in $(az network dns zone show \
            --resource-group "$GLOBAL_RESOURCEGROUP" \
            --name "$DOMAIN_NAME" \
            --query nameServers -o tsv); do
            az network dns record-set ns add-record \
            --resource-group "$PARENT_DOMAIN_RESOURCEGROUP" \
            --zone "$PARENT_DOMAIN_NAME" \
            --record-set-name "$CHILD_DOMAIN_PREFIX" \
            --nsdname "$ns" >/dev/null
        done
    done
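
    To sanity-check the delegation, compare the NS records added to the parent zone with the child zone's name servers (a quick check, reusing the variables from the last loop iteration):

    az network dns record-set ns show \
        --resource-group "$PARENT_DOMAIN_RESOURCEGROUP" \
        --zone "$PARENT_DOMAIN_NAME" \
        --name "$CHILD_DOMAIN_PREFIX"
    dig +short NS "$DOMAIN_NAME"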
    
  8. Update the certificates in keyvault

    NOTE: If you reuse an old name, you might run into the key vaults' soft-delete protection. Run az keyvault recover --name <vault-name> to fix this (see the sketch below the notes).

    NOTE: Make sure the $KEYVAULT_PREFIX environment variable set on your workstation matches the prefix deployed into the resource group.
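
    If you do hit soft-delete, the deleted vaults can be listed and recovered first (a sketch; the -svc vault is just an example):

    az keyvault list-deleted --query "[].name" -o tsv
    az keyvault recover --name "$KEYVAULT_PREFIX-svc"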

    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-svc" \
        --name rp-mdm \
        --file secrets/rp-metrics-int.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-gwy" \
        --name gwy-mdm \
        --file secrets/rp-metrics-int.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-svc" \
        --name rp-mdsd \
        --file secrets/rp-logging-int.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-gwy" \
        --name gwy-mdsd \
        --file secrets/rp-logging-int.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-svc" \
        --name cluster-mdsd \
        --file secrets/cluster-logging-int.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-svc" \
        --name dev-arm \
        --file secrets/arm.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-svc" \
        --name rp-firstparty \
        --file secrets/firstparty.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-svc" \
        --name rp-server \
        --file secrets/localhost.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-por" \
        --name portal-server \
        --file secrets/localhost.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-por" \
        --name portal-client \
        --file secrets/portal-client.pem >/dev/null
    az keyvault certificate import \
        --vault-name "$KEYVAULT_PREFIX-dbt" \
        --name dbtoken-server \
        --file secrets/localhost.pem >/dev/null
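
    As a quick sanity check after the imports, the certificates can be listed per vault (suffixes as used above):

    for SUFFIX in svc gwy por dbt; do
        az keyvault certificate list --vault-name "$KEYVAULT_PREFIX-$SUFFIX" --query "[].name" -o tsv
    done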
    
  9. Delete the existing VMSSes

    NOTE: These need to be deleted because deploying won't recreate a VMSS if the commit hash is unchanged.

    az vmss delete -g ${RESOURCEGROUP} --name rp-vmss-$(git rev-parse --short=7 HEAD)$([[ $(git status --porcelain) = "" ]] || echo -dirty) && \
        az vmss delete -g $USER-gwy-$LOCATION --name gateway-vmss-$(git rev-parse --short=7 HEAD)$([[ $(git status --porcelain) = "" ]] || echo -dirty)
    
  10. Run make deploy again

SSH to RP VMSS Instance

  1. Update the RP NSG to allow SSH

    az network nsg rule create \
        --name ssh-to-rp \
        --resource-group $RESOURCEGROUP \
        --nsg-name rp-nsg \
        --access Allow \
        --priority 500 \
        --source-address-prefixes "$(curl --silent ipecho.net/plain)/32" \
        --protocol Tcp \
        --destination-port-ranges 22
    
  2. SSH into the VM

    VMSS_PIP=$(az vmss list-instance-public-ips -g $RESOURCEGROUP --name rp-vmss-$(git rev-parse --short=7 HEAD)$([[ $(git status --porcelain) = "" ]] || echo -dirty) | jq -r '.[0].ipAddress')
    
    ssh cloud-user@${VMSS_PIP}
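
    When you are done, the temporary SSH rule can be removed again (a sketch):

    az network nsg rule delete \
        --name ssh-to-rp \
        --resource-group $RESOURCEGROUP \
        --nsg-name rp-nsg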
    

SSH to Gateway VMSS Instance

  1. Update the Gateway NSG to allow SSH

    az network nsg rule create \
        --name ssh-to-gwy \
        --resource-group $USER-gwy-$LOCATION \
        --nsg-name gateway-nsg \
        --access Allow \
        --priority 500 \
        --source-address-prefixes "$(curl --silent ipecho.net/plain)/32" \
        --protocol Tcp \
        --destination-port-ranges 22
    
  2. SSH into the VM

    VMSS_PIP=$(az vmss list-instance-public-ips -g $USER-gwy-$LOCATION --name gateway-vmss-$(git rev-parse --short=7 HEAD)$([[ $(git status --porcelain) = "" ]] || echo -dirty) | jq -r '.[0].ipAddress')
    
    ssh cloud-user@${VMSS_PIP}
    

Deploy a Cluster

  1. Run the tunnel program to tunnel to the RP

    make tunnel
    

    NOTE: make tunnel will print the public IP of your new RP VM NIC. Ensure that it's correct.

  2. Update the RESOURCEGROUP environment variable so the cluster is deployed into its own resource group (separate from the RP's)

    export RESOURCEGROUP=myResourceGroup
    
  3. Create the resource group if it doesn't exist

    az group create --resource-group $RESOURCEGROUP --location $LOCATION
    
  4. Create VNets / Subnets

    az network vnet create \
        --resource-group $RESOURCEGROUP \
        --name aro-vnet \
        --address-prefixes 10.0.0.0/22
    
    az network vnet subnet create \
        --resource-group $RESOURCEGROUP \
        --vnet-name aro-vnet \
        --name master-subnet \
        --address-prefixes 10.0.0.0/23 \
        --service-endpoints Microsoft.ContainerRegistry
    
    az network vnet subnet create \
        --resource-group $RESOURCEGROUP \
        --vnet-name aro-vnet \
        --name worker-subnet \
        --address-prefixes 10.0.2.0/23 \
        --service-endpoints Microsoft.ContainerRegistry
    
  5. Create the cluster

    export NO_INTERNET=true
    export CLUSTER=$USER
    
    az aro create \
        --resource-group $RESOURCEGROUP \
        --name $CLUSTER \
        --vnet aro-vnet \
        --master-subnet master-subnet \
        --worker-subnet worker-subnet
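
    Provisioning takes a while; one way to follow it once the create command has been accepted (a sketch, assuming the az aro extension is pointed at your dev RP as described in the note below):

    az aro show \
        --resource-group $RESOURCEGROUP \
        --name $CLUSTER \
        --query provisioningState -o tsv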
    

    NOTE: The az aro CLI extension must be registered in order to run az aro commands against a local or tunneled RP (a registration sketch follows below). The usual hack script used to create clusters does not work due to keyvault mirroring requirements. The name of the cluster depends on the DNS zone that was created in an earlier step.
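
    One way to register the extension is to build it from your ARO-RP checkout and point the Azure CLI at the extension source (a sketch; the make az target and the python/ path are assumptions, see Prepare Your Dev Environment for the authoritative steps):

    make az
    # then add the dev extension source to ~/.azure/config:
    #   [extension]
    #   dev_sources = <path to your ARO-RP checkout>/python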