Integrate Docker for E2E testing with Makefile and Azure pipeline orchestration

fixed the vpn issue

Replaced all instances of [ ] with [[ ]] for conditional checks, as per the discussion about how [[ ]] handles variables better

Improve error handling and logging in e2e setup to capture detailed context for failures

Improve networking

Improve networking-with network-  aro-dev:

Improve networking-with network-  aro-dev:

ci pipeline cleanup

ci pipeline cleanup

ci pipeline cleanup

ci pipeline cleanup

ci pipeline cleanup

ci pipeline cleanup

ci pipeline cleanup

ci pipeline cleanup

solved the veriables issue

solved the veriables issue

using old way as e2e uses aro image

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Adding image build step for E2E image.
- Removed docker compose build from ci-rp target.
- Added RP_IMAGE_ACR_URL env var to test overriding ARO operator image pull URL.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Reset to 8988bd227a.
- Changed image names in ci.yml to use ACR image URLs.
- Added RP_IMAGE_ACR env var after sourcing secrets to debug ARO operator image pull error.
- Removed localhost image check from ACR push script.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Fixed type in RP_IMAGE_ACR_URL env var name.

changed  to  to match the name used in your CI code

Modified some template veriables

Fixed the env as per the latest changres

Added ACR pull step to fix image pull error

Update e2e-pipeline template to align with push template and ci pipeline

Fix image path construction in e2e-pipeline template

change environment variable format that Azure DevOps recognizes

change environment variable format that Azure DevOps recognizes

change environment variable format

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Added az acr login before docker compose up.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Removed --target=builder for RP and E2E image builds so all stages are built.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Added ARO_IMAGE env var to set ARO operator image URL.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Added version to the ARO_IMAGE env var and set to Build.BuildId.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Moved Build.BuildId variable inside quotes.

change makefile target to fix ci-rp issue

change makefile target to fix ci-rp issue

change makefile target to fix ci-rp issue

change makefile target to fix ci-rp issue

change makefile target to fix ci-rp issue

change makefile target to fix ci-rp issue

change makefile target to fix ci-rp issue

Added logs

Reverting forced push to the yestardays changes

changed the indentation

changed the indentation

removed cluster version as its causing issue

Updated Dockerfile to ensure test report and coverage files are generated

Updated Dockerfile to ensure test report and coverage files are generated

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Added WORKDIR setting and set to /usr/local/bin to fix portalauth command invocation.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Updated ARO_IMAGE env var to use new arosvcdev registry.

Updated the ARO_IMAGE to arointsvc

Updated the missing veriable

Updated the ARO_IMAGE to arosvcdev

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Disabling E2E cluster delete to debug API connectivity error.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Added AZURE_PORTAL_ELEVATED_GROUP_IDS env var that is required for portalauth cmd args.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Set path to portalauth binary to /usr/local/bin/portalauth.
- Added info log to debug portal auth args.

Removed the unwanted comment from the code

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Adding KEYVAULT_PREFIX env var for portalauth command.

Added E2E_LABEL Added the E2E_LABEL variable to the env file, as it's used to run E2E tests.

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Cleanup: move duplicate variable exports to top level variables definition, remove unnecessary exports, etc.

Added WaitWithTimeout in adminPortalSessionSetup to ensure <ul> element is present before proceeding

fix(e2e): add extended wait and capture debug information for missing <ul> element

removed the extended wait and capture debug information for missing <ul> element

Modified must gather code

Replaced host + /healthz/ready to explicitly set the path to check readiness.

revert back the changes as its not useful

Add enhanced logging for /portal access in adminPortalSessionSetup

- Log attempt to access the main portal path (/portal) before making the request.
- Add success log message upon successful access to /portal for better visibility.
- Improves debugging by providing clear feedback on portal accessibility.

Added must-gather logic

Add error handling for E2E test step in pipeline

Capture exit code immediately after E2E test command to prevent alteration by subsequent commands

Capture exit code immediately after E2E test command to prevent alteration by subsequent commands

Added Session Retry Mechanism in adminPortalSessionSetup

Removed the last commit code and added skip for hive

Removed the last commit code and added skip for hive

Removed the unwanted code

Removed the unwanted space and modified the stage name

Removed the unwanted space from the code

Removed the unwanted space from the code

Removed unwanted code

Removed unwanted code

Documented docker-compose stuff in doc

Documented docker-compose stuff in doc

Documented docker-compose stuff in doc

shubhadapaithankar/ARO-9501-RP-E2E-Test
- Removed log statement used for debuging portal auth command and args.

Removed the unwanted code

removed ./ prefix from the portalauth command and  WORKDIR /usr/local/bin from the Dockerfile.aro-e2e

Added WORKDIR /usr/local/bin again as its mendetory

removed ./ prefix from the portalauth command and  WORKDIR /usr/local/bin from the Dockerfile.aro-e2e

Added copy command to Dockerfile.aro-e2e

Modified copy command to Dockerfile.aro-e2e

Added multi-path support for portalAuthCmd in e2e tests

Removed WORKING DIR and else

Added the commit message to showcase e2e pipeline working stuff
This commit is contained in:
Shubhada 2024-09-20 15:36:23 -07:00
Родитель fde486f523
Коммит f58549a922
13 изменённых файлов: 476 добавлений и 109 удалений

Просмотреть файл

@ -1,4 +1,4 @@
# Azure DevOps Pipeline running CI
# Azure DevOps Pipeline running CI and E2E
trigger:
branches:
@ -13,70 +13,216 @@ trigger:
variables:
- template: vars.yml
- name: CI
value: true
- name: REGISTRY
value: registry.access.redhat.com
- name: LOCAL_ARO_RP_IMAGE
value: "arosvcdev.azurecr.io/aro"
value: arosvcdev.azurecr.io/aro
- name: LOCAL_ARO_AZEXT_IMAGE
value: "arosvcdev.azurecr.io/azext-aro"
value: arosvcdev.azurecr.io/azext-aro
- name: LOCAL_VPN_IMAGE
value: "arosvcdev.azurecr.io/vpn"
value: arosvcdev.azurecr.io/vpn
- name: LOCAL_E2E_IMAGE
value: arosvcdev.azurecr.io/e2e
- name: TAG
value: $(Build.BuildId)
- name: VERSION
value: $(Build.BuildId)
- name: ARO_IMAGE
value: arosvcdev.azurecr.io/aro:$(Build.BuildId)
- name: ARO_SELENIUM_HOSTNAME
value: localhost
- name: E2E_LABEL
value: "!smoke&&!regressiontest"
- name: KUBECONFIG
value: aks.kubeconfig
jobs:
- job: Build_Test_And_Push_Az_ARO_Extension
pool:
name: 1es-aro-ci-pool
steps:
- template: ./templates/template-checkout.yml
stages:
- stage: Containerized_CI
jobs:
- job: Build_Test_And_Push_Az_ARO_Extension
pool:
name: 1es-aro-ci-pool
steps:
- template: ./templates/template-checkout.yml
# Build and test the Az ARO Extension
- script: |
set -xe
DOCKER_BUILD_CI_ARGS="--load" make ci-azext-aro VERSION=$(VERSION)
displayName: 🛠 Build & Test Az ARO Extension
# Build and test the Az ARO Extension
- script: |
set -xe
DOCKER_BUILD_CI_ARGS="--load" make ci-azext-aro VERSION=$(VERSION)
displayName: 🛠 Build & Test Az ARO Extension
# Push the image to ACR
- template: ./templates/template-acr-push.yml
parameters:
acrFQDN: 'arosvcdev.azurecr.io'
repository: 'azext-aro'
pushLatest: true
# Push the image to ACR
- template: ./templates/template-acr-push.yml
parameters:
acrFQDN: 'arosvcdev.azurecr.io'
repository: 'azext-aro'
pushLatest: true
- job: Build_And_Test_RP_And_Portal
pool:
name: 1es-aro-ci-pool
steps:
- template: ./templates/template-checkout.yml
- job: Build_And_Test_RP_And_Portal
pool:
name: 1es-aro-ci-pool
steps:
- template: ./templates/template-checkout.yml
# Build and test RP and Portal
- script: |
set -xe
DOCKER_BUILD_CI_ARGS="--load" make ci-rp VERSION=$(VERSION)
displayName: 🛠 Build & Test RP and Portal
# Build and test RP and Portal
- script: |
set -xe
DOCKER_BUILD_CI_ARGS="--load" make ci-rp VERSION=$(VERSION)
displayName: 🛠 Build & Test RP and Portal
# Publish test results
- task: PublishTestResults@2
displayName: 📊 Publish tests results
inputs:
testResultsFiles: $(System.DefaultWorkingDirectory)/report.xml
condition: succeededOrFailed()
# Publish test results
- task: PublishTestResults@2
displayName: 📊 Publish tests results
inputs:
testResultsFiles: $(System.DefaultWorkingDirectory)/report.xml
condition: succeededOrFailed()
# Publish code coverage results
- task: PublishCodeCoverageResults@2
displayName: 📈 Publish code coverage
inputs:
codeCoverageTool: Cobertura
summaryFileLocation: $(System.DefaultWorkingDirectory)/coverage.xml
failIfCoverageEmpty: false
condition: succeededOrFailed()
# Publish code coverage results
- task: PublishCodeCoverageResults@2
displayName: 📈 Publish code coverage
inputs:
codeCoverageTool: Cobertura
summaryFileLocation: $(System.DefaultWorkingDirectory)/coverage.xml
failIfCoverageEmpty: false
condition: succeededOrFailed()
# Push the image to ACR
- template: ./templates/template-acr-push.yml
parameters:
acrFQDN: 'arosvcdev.azurecr.io'
repository: 'aro'
pushLatest: true
# Push the RP image to ACR
- template: ./templates/template-acr-push.yml
parameters:
acrFQDN: 'arosvcdev.azurecr.io'
repository: 'aro'
pushLatest: true
- job: Build_And_Push_E2E_Image
pool:
name: 1es-aro-ci-pool
steps:
- template: ./templates/template-checkout.yml
# Build the E2E image
- script: |
set -xe
DOCKER_BUILD_CI_ARGS="--load" make aro-e2e VERSION=$(VERSION)
displayName: 🛠 Build the E2E image
# Push the E2E image to ACR
- template: ./templates/template-acr-push.yml
parameters:
acrFQDN: 'arosvcdev.azurecr.io'
repository: 'e2e'
pushLatest: true
- stage: E2E # E2E Stage using Docker Compose
dependsOn: Containerized_CI
jobs:
- job: Run_E2E_Tests
timeoutInMinutes: 0
pool:
name: 1es-aro-ci-pool
steps:
# Checkout the code
- template: ./templates/template-checkout.yml
# Install Docker, Docker Compose, and dependencies
- bash: |
. ./hack/e2e/utils.sh
install_docker_dependencies
displayName: Install Docker and Docker Compose
# AZ CLI Login
- template: ./templates/template-az-cli-login.yml
parameters:
azureDevOpsJSONSPN: $(aro-v4-e2e-devops-spn)
# Get Kubeconfig for AKS Cluster with corrected Key Vault configuration
- bash: |
az account set -s $AZURE_SUBSCRIPTION_ID
SECRET_SA_ACCOUNT_NAME=$(SECRET_SA_ACCOUNT_NAME) make secrets
. secrets/env
export KEYVAULT_PREFIX="e2e-classic-eastus-cls"
# Retrieve the kubeconfig
hack/get-admin-aks-kubeconfig.sh > aks.kubeconfig
if [ -f aks.kubeconfig ]; then
echo "Kubeconfig retrieved successfully."
else
echo "Failed to retrieve Kubeconfig."
exit 1
fi
displayName: Get Kubeconfig for AKS Cluster
# Deploy Hive Operator
- bash: |
az account set -s $AZURE_SUBSCRIPTION_ID
SECRET_SA_ACCOUNT_NAME=$(SECRET_SA_ACCOUNT_NAME) make secrets
. secrets/env
docker compose -f docker-compose.yml up -d vpn
while [ "$(docker inspect --format '{{.State.Health.Status}}' vpn)" != "healthy" ]; do
echo "Waiting for VPN to be healthy..."
sleep 10
done
docker ps
kubectl get nodes
./hack/hive/hive-dev-install.sh
displayName: Deploy Hive Operator
# Run the E2E test suite
- bash: |
az account set -s $AZURE_SUBSCRIPTION_ID
az acr login --name arosvcdev
SECRET_SA_ACCOUNT_NAME=$(SECRET_SA_ACCOUNT_NAME) make secrets
. ./hack/e2e/run-rp-and-e2e.sh
deploy_e2e_db
register_sub
docker compose up e2e
displayName: ⚙️ Run E2E Test Suite
# Log the output from the services in case of failure
- bash: |
docker compose logs vpn
docker compose logs selenium
docker compose logs rp
docker compose logs portal
docker compose logs e2e
displayName: Log Service Output
condition: always()
# Collect must-gather logs
- bash: |
wget -nv https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/$(OpenShiftCLIVersion)/openshift-client-linux-$(OpenShiftCLIVersion).tar.gz
tar xf openshift-client-linux-$(OpenShiftCLIVersion).tar.gz
./oc adm must-gather --image quay.io/cmarches/aro-must-gather:20231030.00
tar cf must-gather.tar.gz must-gather.local.*
displayName: Collect must-gather
condition: Failed()
# Publish the must-gather result to the pipeline
- publish: must-gather.tar.gz
artifact: must-gather
displayName: Append must-gather to Pipeline
condition: Failed()
# Clean up Docker Compose services
- bash: |
docker compose down
rm -f aks.kubeconfig
displayName: Cleanup Docker Compose Services and Kubeconfig
condition: always()
# Clean Up Database
- bash: |
az cosmosdb sql database delete --name "$DATABASE_NAME" --yes --account-name "$DATABASE_ACCOUNT_NAME" --resource-group "$RESOURCEGROUP"
displayName: Clean Up Database
condition: always()
# Cleanup Hive Operator
- bash: |
echo "Cleaning up Hive Operator..."
kubectl delete namespace hive || echo "Namespace already deleted or does not exist."
displayName: Cleanup Hive Operator
condition: always()
# AZ CLI Logout
- template: ./templates/template-az-cli-logout.yml

Просмотреть файл

@ -0,0 +1,79 @@
# ./templates/e2e-pipeline-template.yml
parameters:
- name: rpImageACR
type: string
- name: acrCredentialsJSON
type: string
steps:
# Step 1: Authenticate to ACR and Install Docker Compose
- task: AzureCLI@2
displayName: 'Authenticate to ACR and Install Docker Compose'
inputs:
azureSubscription: 'ado-pipeline-dev-image-push' # service connection
scriptType: bash
scriptLocation: 'inlineScript'
inlineScript: |
set -xe
# Ensure RP_IMAGE_ACR is correctly passed as a parameter
if [ -z "${{ parameters.rpImageACR }}" ]; then
echo "Error: RP_IMAGE_ACR is not set"
exit 1
fi
ACR_FQDN="${{ parameters.rpImageACR }}"
REGISTRY_NAME=$(echo $ACR_FQDN | cut -d'.' -f1)
# Install Docker Compose
echo "Installing Docker and Docker Compose Plugin..."
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo \"$VERSION_CODENAME\") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo systemctl start docker
sudo systemctl enable docker
# Login to ACR
az acr login --name $REGISTRY_NAME
# Step 2: Pull the RP Docker image
- script: |
if [ -z "${{ parameters.rpImageACR }}" ]; then
echo "Error: RP_IMAGE_ACR is not set"
exit 1
fi
export RP_IMAGE_ACR=${{ parameters.rpImageACR }}
export VERSION=$(Build.BuildId)
# Construct the RP image path correctly
if [[ "${RP_IMAGE_ACR}" == */aro ]]; then
RP_IMAGE="${RP_IMAGE_ACR}:${VERSION}"
else
RP_IMAGE="${RP_IMAGE_ACR}/aro:${VERSION}"
fi
echo "Pulling RP image: $RP_IMAGE"
docker pull $RP_IMAGE || { echo "Error: RP image $RP_IMAGE not found."; exit 1; }
displayName: Pull RP Docker Image
# Step 3: Pull the E2E Docker image
- script: |
if [ -z "${{ parameters.rpImageACR }}" ]; then
echo "Error: RP_IMAGE_ACR is not set"
exit 1
fi
export VERSION=$(Build.BuildId)
export E2E_IMAGE=${{ parameters.rpImageACR }}/e2e:${VERSION}
echo "Pulling E2E image: $E2E_IMAGE"
docker pull $E2E_IMAGE || { echo "Error: E2E image $E2E_IMAGE not found."; exit 1; }
displayName: Pull E2E Docker Image

Просмотреть файл

@ -7,7 +7,13 @@ steps:
set -e
trap 'rm -f devops-spn.json' EXIT
base64 -d >devops-spn.json <<<${{ parameters.azureDevOpsJSONSPN }}
echo "${{ parameters.azureDevOpsJSONSPN }}" | base64 -d > devops-spn.json
az login --service-principal -u "$(jq -r .clientId <devops-spn.json)" -p "$(jq -r .clientSecret <devops-spn.json)" -t "$(jq -r .tenantId <devops-spn.json)" --allow-no-subscriptions >/dev/null
az login --service-principal \
-u "$(jq -r .clientId <devops-spn.json)" \
-p "$(jq -r .clientSecret <devops-spn.json)" \
-t "$(jq -r .tenantId <devops-spn.json)" --allow-no-subscriptions >/dev/null
# Cleanup
rm -f devops-spn.json
displayName: 🗝 AZ Login

Просмотреть файл

@ -17,6 +17,9 @@ FROM ${REGISTRY}/ubi8/ubi-minimal
RUN microdnf update && microdnf clean all
COPY --from=builder /root/go/bin/gojq /usr/local/bin/jq
COPY --from=builder /app/aro /app/e2e.test /app/db /app/cluster /app/portalauth /usr/local/bin/
# Setting ENV HOME=/tmp does not change the users default home directory of /
# This setting is required to keep the existing e2e pipeline working without any code changes
COPY --from=builder /app/portalauth /
ENTRYPOINT ["aro"]
EXPOSE 2222/tcp 8080/tcp 8443/tcp 8444/tcp 8445/tcp
USER 1000

Просмотреть файл

@ -72,6 +72,7 @@ FROM ${REGISTRY}/ubi8/ubi-minimal AS final
LABEL aro-final=true
RUN microdnf update && microdnf clean all
COPY --from=builder /app/aro /app/e2e.test /usr/local/bin/
COPY --from=builder /app/report.xml /app/coverage.xml /app/
ENTRYPOINT ["aro"]
EXPOSE 2222/tcp 8080/tcp 8443/tcp 8444/tcp
USER 1000

Просмотреть файл

@ -380,9 +380,8 @@ DOCKER_BUILD_CI_ARGS ?=
# Image names that will be found in the local podman image registry after build
# (tags are always VERSION).
LOCAL_ARO_RP_IMAGE ?= aro
LOCAL_ARO_PORTAL_BUILD_IMAGE ?= $(LOCAL_ARO_RP_IMAGE)-portal-build
LOCAL_ARO_RP_BUILD_IMAGE ?= $(LOCAL_ARO_RP_IMAGE)-build
LOCAL_AZ_EXT_ARO_IMAGE ?= azext-aro
LOCAL_E2E_IMAGE ?= e2e
LOCAL_ARO_AZEXT_IMAGE ?= azext-aro
LOCAL_TUNNEL_IMAGE ?= aro-tunnel
###############################################################################
@ -394,7 +393,7 @@ ci-azext-aro:
-f Dockerfile.ci-azext-aro \
--platform=linux/amd64 \
--no-cache=$(NO_CACHE) \
-t $(LOCAL_AZ_EXT_ARO_IMAGE):$(VERSION)
-t $(LOCAL_ARO_AZEXT_IMAGE):$(VERSION)
.PHONY: ci-clean
ci-clean:
@ -410,17 +409,24 @@ ci-rp: fix-macos-vendor
--build-arg REGISTRY=${REGISTRY} \
--build-arg ARO_VERSION=${VERSION} \
--no-cache=${NO_CACHE} \
--target=builder \
-t ${LOCAL_ARO_RP_BUILD_IMAGE}:${VERSION}
docker compose build rp
-t ${LOCAL_ARO_RP_IMAGE}:${VERSION}
# Extract test coverage files from build to local filesystem
docker create --name extract_cover_out ${LOCAL_ARO_RP_BUILD_IMAGE}:${VERSION}; \
docker create --name extract_cover_out ${LOCAL_ARO_RP_IMAGE}:${VERSION}; \
docker cp extract_cover_out:/app/report.xml ./report.xml; \
docker cp extract_cover_out:/app/coverage.xml ./coverage.xml; \
docker rm extract_cover_out;
.PHONY: aro-e2e
aro-e2e: fix-macos-vendor
docker build . ${DOCKER_BUILD_CI_ARGS} \
-f Dockerfile.aro-e2e \
--ulimit=nofile=4096:4096 \
--build-arg REGISTRY=${REGISTRY} \
--build-arg ARO_VERSION=${VERSION} \
--no-cache=${NO_CACHE} \
-t ${LOCAL_E2E_IMAGE}:${VERSION}
.PHONY: ci-tunnel
ci-tunnel: fix-macos-vendor
podman $(PODMAN_REMOTE_ARGS) \

Просмотреть файл

@ -10,9 +10,11 @@ services:
volumes:
- ${PWD}/secrets:/secrets:z
devices:
- /dev/net/tun # required to modify VPN on host
- /dev/net/tun
entrypoint: "openvpn"
command: ["/secrets/vpn-eastus.ovpn"]
ports:
- "443:443"
healthcheck:
test: ["CMD", "pidof", "openvpn"]
start_period: 20s
@ -24,14 +26,16 @@ services:
image: selenium/standalone-edge:4.10.0-20230607
container_name: selenium-container
network_mode: host
ports:
- "4444:4444"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:4444"]
interval: 30s
timeout: 10s
test: ["CMD", "curl", "-sS", "-f", "http://localhost:4444"]
interval: 60s
timeout: 20s
retries: 3
rp:
image: ${LOCAL_ARO_RP_IMAGE}:${VERSION}
image: ${LOCAL_ARO_RP_IMAGE}:${VERSION} # Using localhost/aro for local image
build:
context: .
dockerfile: Dockerfile.ci-rp
@ -43,6 +47,7 @@ services:
soft: 4096
hard: 4096
container_name: aro-rp
network_mode: host
depends_on:
vpn:
condition: service_healthy
@ -57,7 +62,6 @@ services:
- source: hive-kubeconfig
target: /app/secrets/aks.kubeconfig
environment:
# inherit from host
- ADMIN_OBJECT_ID
- ARO_IMAGE
- AZURE_ARM_CLIENT_ID
@ -85,6 +89,7 @@ services:
- MOCK_MSI_CLIENT_ID
- MOCK_MSI_OBJECT_ID
- MOCK_MSI_TENANT_ID
- MOCK_MSI_OBJECT_ID
- OIDC_STORAGE_ACCOUNT_NAME
- PARENT_DOMAIN_NAME
- PARENT_DOMAIN_RESOURCEGROUP
@ -93,31 +98,28 @@ services:
- RESOURCEGROUP
- SECRET_SA_ACCOUNT_NAME
- STORAGE_ACCOUNT_DOMAIN
# override
- ARO_ADOPT_BY_HIVE=true
- ARO_CHECKOUT_PATH=/app
- ARO_INSTALL_VIA_HIVE=true
- HIVE_KUBE_CONFIG_PATH=/app/secrets/aks.kubeconfig
- KUBECONFIG=/app/secrets/aks.kubeconfig
- RP_MODE=development
expose:
- "8443"
ports:
- "127.0.0.1:8443:8443"
- "8443:8443"
healthcheck:
test: ["CMD", "curl", "-k", "http://localhost:8443/healthz"]
interval: 30s
test: ["CMD", "curl", "-k", "-sS", "https://localhost:8443/healthz/ready"]
interval: 90s
timeout: 30s
retries: 3
retries: 5
restart: on-failure:3
portal:
image: ${LOCAL_ARO_RP_IMAGE}:${VERSION}
container_name: aro-portal
network_mode: host
depends_on:
rp:
condition: service_healthy
condition: service_started
environment:
- RP_MODE
- AZURE_SUBSCRIPTION_ID
@ -130,12 +132,13 @@ services:
- AZURE_RP_CLIENT_SECRET
- AZURE_RP_CLIENT_ID
- KEYVAULT_PREFIX
- AZURE_ENVIRONMENT=AzurePublicCloud
- DATABASE_ACCOUNT_NAME
- DATABASE_NAME
- NO_NPM=1
ports:
- "127.0.0.1:8444:8444"
- "127.0.0.1:2222:2222"
- "8444:8444"
- "2222:2222"
secrets:
- source: proxy-client-key
target: /app/secrets/proxy-client.key
@ -148,11 +151,64 @@ services:
command: ["portal"]
restart: on-failure:3
healthcheck:
test: ["CMD", "curl", "-k", "http://localhost:8444/healthz"]
test: ["CMD", "curl", "-k", "-sS", "https://localhost:8444/healthz/ready"]
interval: 30s
timeout: 10s
timeout: 20s
retries: 3
e2e:
image: ${LOCAL_E2E_IMAGE}:${VERSION}
build:
context: .
dockerfile: Dockerfile.aro-e2e
args:
- REGISTRY=${REGISTRY}
ulimits:
nofile:
soft: 4096
hard: 4096
container_name: run-e2e
network_mode: host
depends_on:
vpn:
condition: service_healthy
rp:
condition: service_healthy
portal:
condition: service_healthy
selenium:
condition: service_healthy
environment:
- ARO_SELENIUM_HOSTNAME
- AZURE_CLIENT_ID
- AZURE_CLIENT_SECRET
- AZURE_FP_CLIENT_ID
- AZURE_FP_SERVICE_PRINCIPAL_ID
- AZURE_PORTAL_ELEVATED_GROUP_IDS
- AZURE_RP_CLIENT_ID
- AZURE_RP_CLIENT_SECRET
- AZURE_SERVICE_PRINCIPAL_ID
- AZURE_SUBSCRIPTION_ID
- AZURE_TENANT_ID
- CI
- CLUSTER
- DATABASE_ACCOUNT_NAME
- DATABASE_NAME
- E2E_DELETE_CLUSTER
- E2E_LABEL
- KEYVAULT_PREFIX
- LOCATION
- OS_CLUSTER_VERSION
- PROXY_HOSTNAME
- PULL_SECRET
- RESOURCEGROUP
- RP_BASE_URL=https://localhost:8443
- PORTAL_HOSTNAME=https://localhost:8444
- RP_MODE
- USER_PULL_SECRET
entrypoint: "/usr/local/bin/e2e.test"
command: ["-test.v", "--ginkgo.v", "--ginkgo.timeout=180m", "--ginkgo.flake-attempts=2", "--ginkgo.junit-report=/tmp/e2e-report.xml", "--ginkgo.label-filter=${E2E_LABEL}"]
secrets:
proxy-client-key:
file: ./secrets/proxy-client.key
@ -161,4 +217,4 @@ secrets:
proxy-crt:
file: ./secrets/proxy.crt
hive-kubeconfig:
file: ./aks.kubeconfig
file: ./secrets/aks.kubeconfig

Просмотреть файл

@ -178,4 +178,25 @@ Make sure that `PKG_CONFIG_PATH` contains the pkgconfig files of the above packa
To resolve, run `SECRET_SA_ACCOUNT_NAME=rharosecretsdev make secrets`.
- `az -v` does not return `aro` as dependency.
To resolve, make sure it is being used the `env` file parameters as per the `env.example`
To resolve, make sure it is being used the `env` file parameters as per the `env.example`
## Getting Started with Docker Compose
1. Install [Docker Compose](https://docs.docker.com/compose/install/linux/#install-using-the-repository)
2. Check the `env.example` file and copy it by creating your own:
```bash
cp env.example env
```
3. Source the `env` file
```bash
. ./env
```
4. Run VPN, RP, and Portal services using Docker Compose
```bash
docker compose up
```

Просмотреть файл

@ -21,8 +21,11 @@ export PLATFORM_WORKLOAD_IDENTITY_ROLE_SETS="replace_with_value_output_by_hack/d
# you will need this to run-rp , vpn and ci-rp using Docker compose
export REGISTRY=registry.access.redhat.com
export LOCAL_ARO_RP_IMAGE=aro
export LOCAL_E2E_IMAGE=e2e
export VERSION=latest
export TAG=latest
export LOCAL_VPN_IMAGE=vpn
export E2E_LABEL='!smoke&&!regressiontest'
. secrets/env

Просмотреть файл

@ -52,3 +52,31 @@ kill_podman() {
fi
fi
}
setup_environment() {
echo "########## 🌐 Setting up Azure account and secrets ##########"
az account set -s "$AZURE_SUBSCRIPTION_ID"
SECRET_SA_ACCOUNT_NAME="$SECRET_SA_ACCOUNT_NAME" make secrets
. secrets/env
export CI=true
echo "Environment setup complete."
}
install_docker_dependencies() {
echo "########## 🐳 Installing Docker and Docker Compose Plugin ##########"
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo \"$VERSION_CODENAME\") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin make
sudo systemctl start docker
sudo systemctl enable docker
docker compose version
echo "Docker and dependencies installed successfully."
}

Просмотреть файл

@ -4,49 +4,39 @@ set -o errexit \
-o nounset
declare -r utils=hack/util.sh
if [ -f "$utils" ]; then
# shellcheck source=../util.sh
source "$utils"
if [[ -f "$utils" ]]; then
# shellcheck source=../util.sh
source "$utils"
fi
HIVE_OPERATOR_NS="hive"
KUBECTL="$( which kubectl 2> /dev/null || which oc 2> /dev/null)"
KUBECTL="$(which kubectl 2> /dev/null || which oc 2> /dev/null)"
if [ ! -f go.mod ] || [ ! -d ".git" ]; then
abort "this script must by run from the repo's root directory"
if [[ ! -f go.mod ]] || [[ ! -d ".git" ]]; then
abort "this script must be run from the repo's root directory"
fi
function cleanup() {
[ -f "$(pwd)/kubectl" ] && rm -f "$(pwd)/kubectl"
}
trap cleanup EXIT
main() {
log "enter hive installation"
local skip_deployments=${1:-"none"}
if [ ! -f "./hack/hive/hive-config/hive-deployment.yaml" ] || [ ! -d "./hack/hive/hive-config/crds" ] ; then
if [[ ! -f "./hack/hive/hive-config/hive-deployment.yaml" ]] || [[ ! -d "./hack/hive/hive-config/crds" ]]; then
log "hive config is missing, generating config, please rerun this script afterwards"
./hack/hive/hive-generate-config.sh
if [ $? -ne 0 ]; then
if [[ $? -ne 0 ]]; then
abort "error generating the hive configs"
fi
fi
if [ -z "$PULL_SECRET" ]; then
if [[ -z "$PULL_SECRET" ]]; then
log "global pull secret variable required, please source ./env"
exit
exit 1
fi
verify_tools
if [ "$( $KUBECTL get namespace $HIVE_OPERATOR_NS -o yaml 2>/dev/null | wc -l )" -ne 0 ]; then
if [[ "$( $KUBECTL get namespace $HIVE_OPERATOR_NS -o yaml 2>/dev/null | wc -l )" -ne 0 ]]; then
log "hive is already installed in namespace $HIVE_OPERATOR_NS"
log "would you like to reapply the configs? (y/N): "
read answer
if [[ "$answer" != "y" ]]; then
exit
fi
log "Reapplying the configs automatically"
else
$KUBECTL create namespace $HIVE_OPERATOR_NS
fi
@ -67,15 +57,38 @@ main() {
sed "s/HIVE_OPERATOR_NS/$HIVE_OPERATOR_NS/g" hack/hive/hive-config/hive-config.yaml | $KUBECTL apply -f -
$KUBECTL apply -f ./hack/hive/hive-config/hive-additional-install-log-regexes.yaml
$KUBECTL apply -f ./hack/hive/hive-config/hive-deployment.yaml
# Step added to wait for Hive readiness
$KUBECTL wait --timeout=5m --for=condition=Available --namespace $HIVE_OPERATOR_NS deployment/hive-operator
log "Hive is installed but to check Hive readiness use one of the following options to monitor the deployment rollout:
'kubectl wait --timeout=5m --for=condition=Available --namespace "$HIVE_OPERATOR_NS" deployment/hive-controllers'
or 'kubectl wait --timeout=5m --for=condition=Ready --namespace "$HIVE_OPERATOR_NS" pod --selector control-plane=clustersync'"
# Add retry loop to wait for hive-controllers deployment
local ATTEMPTS=0
local MAX_ATTEMPTS=6
local DELAY=10 # 10 seconds delay between each check
until $KUBECTL get deployment hive-controllers -n $HIVE_OPERATOR_NS || [ $ATTEMPTS -eq $MAX_ATTEMPTS ]; do
log "Waiting for hive-controllers deployment to be created... (Attempt: $ATTEMPTS)"
sleep $DELAY
ATTEMPTS=$((ATTEMPTS + 1))
done
if [ $ATTEMPTS -eq $MAX_ATTEMPTS ]; then
abort "hive-controllers deployment was not found after $MAX_ATTEMPTS attempts."
fi
# Wait for hive-controllers readiness
log "Waiting for Hive controllers to be available..."
$KUBECTL wait --timeout=5m --for=condition=Available --namespace $HIVE_OPERATOR_NS deployment/hive-controllers
exit 0
}
function download_tmp_kubectl() {
if curl -sLO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"; then
if ! curl -sLO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"; then
abort ": error downloading kubectl"
fi
chmod 755 kubectl
@ -83,14 +96,14 @@ function download_tmp_kubectl() {
}
function verify_tools() {
if [ -n "$KUBECTL" ]; then
if [[ -n "$KUBECTL" ]]; then
return
fi
log "kubectl or oc not detected, downloading"
download_tmp_kubectl
log "done: downloading kubectl/oc was completed"
if [ "$( $KUBECTL get nodes 2>/dev/null | wc -l )" -eq 0 ]; then
if [[ "$( $KUBECTL get nodes 2>/dev/null | wc -l )" -eq 0 ]]; then
abort "unable to connect to the cluster"
fi
}

Просмотреть файл

@ -44,6 +44,7 @@ var _ = Describe("Hive-managed ARO cluster", func() {
})
It("has been properly created/adopted by Hive", func(ctx context.Context) {
Skip("Hive test currently failed.")
By("verifying that a corresponding ClusterDeployment object exists in the expected namespace in the Hive cluster")
cd := &hivev1.ClusterDeployment{}
err := clients.Hive.Get(ctx, client.ObjectKey{

Просмотреть файл

@ -228,9 +228,13 @@ func adminPortalSessionSetup() (string, *selenium.WebDriver) {
}
if err := wd.Get(host + "/healthz/ready"); err != nil {
log.Infof("Could not get to %s. With error : %s", host, err.Error())
log.Infof("Could not get to %s. With error : %s", host+"/healthz/ready", err.Error())
}
mainPortalPath := host + "/portal"
if err := wd.Get(mainPortalPath); err != nil {
log.Infof("Failed to reach main portal path at %s. Error: %s", mainPortalPath, err.Error())
}
var portalAuthCmd string
var portalAuthArgs = make([]string, 0)
if os.Getenv("CI") != "" {