enable reconciling azuresubnets/NSGs by default

Refactor e2e to remove dependency.

Update 2

removed old code.

Make test fail on error.

Expect(err).NotTo(HaveOccurred())
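
For context, a minimal Ginkgo/Gomega sketch of this assertion pattern; the suite and helper names are illustrative, not from the repo:

```go
package e2e

import (
	"errors"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "E2E Suite")
}

// doSomething is a hypothetical helper under test.
func doSomething() error { return errors.New("boom") }

var _ = Describe("cluster operation", func() {
	It("fails the spec on any unexpected error", func() {
		err := doSomething()
		// Fail immediately instead of silently ignoring the error.
		Expect(err).NotTo(HaveOccurred())
	})
})
```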

Formatting done

Whitespace removed.

handle the use of the AddressPrefixes field alongside AddressPrefix
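
A hedged sketch of reading both subnet fields with the Azure network SDK types; the helper name is illustrative and the API version in the import path may differ from what the repo vendors:

```go
package subnet

import (
	mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"
)

// cidrs returns a subnet's CIDR ranges whether the API populated the
// plural AddressPrefixes field or only the singular AddressPrefix.
func cidrs(s mgmtnetwork.Subnet) []string {
	if s.SubnetPropertiesFormat == nil {
		return nil
	}
	if s.AddressPrefixes != nil && len(*s.AddressPrefixes) > 0 {
		return *s.AddressPrefixes
	}
	if s.AddressPrefix != nil {
		return []string{*s.AddressPrefix}
	}
	return nil
}
```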

improved ValidateCIDRRanges test

add vnet names to help with debugging if needed in the future

comment improvement

Bump follow-redirects from 1.14.0 to 1.14.7 in /portal

Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.14.0 to 1.14.7.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.14.0...v1.14.7)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Store downloaded cert only when it differs

When the systemd downloader downloads a fresh certificate,
check whether it differs from the stored one.

Replace the old one with the fresh one when there is a difference.

Signed-off-by: Petr Kotas <pkotas@redhat.com>
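
A minimal sketch of the compare-before-write logic described above, assuming the certificate lives at a single path on disk; this is not the downloader's actual code:

```go
package main

import (
	"bytes"
	"os"
)

// storeIfChanged writes the freshly downloaded certificate only when it
// differs from the copy already on disk, and reports whether it wrote.
func storeIfChanged(path string, fresh []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, fresh) {
		return false, nil // same certificate, leave the file untouched
	}
	if err := os.WriteFile(path, fresh, 0600); err != nil {
		return false, err
	}
	return true, nil
}
```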

Restart mdm service on cert change

Forces the MDM container to pick up the changed certificate.

Signed-off-by: Petr Kotas <pkotas@redhat.com>
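
A hedged sketch of the restart hook: when the compare-before-write sketch above reports a change, bounce the systemd unit so the MDM container reloads the certificate. The unit name "mdm" is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
)

// restartMDM restarts the systemd unit that runs the MDM container so it
// picks up the rotated certificate.
func restartMDM() error {
	out, err := exec.Command("systemctl", "restart", "mdm").CombinedOutput()
	if err != nil {
		return fmt.Errorf("restart mdm: %v: %s", err, out)
	}
	return nil
}
```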

doc: Document fp cert rotation

Add a doc file with information on how the first-party certificate is
rotated in the RP and on the host VM.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Replace artifacts with direct code checkout

Replaces configuration fetching via build pipeline with
direct code checkout.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Update .pipelines/int-release.yml

Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>

provide the ability to specify an overridden fluentbit image in operator feature flags

Download aro deployer from tagged image

Pull aro deployer from tagged container instead of pipeline artifact.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Add deploy pipelines using tag

Add new pipelines using tagged deployment

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Set XDG_RUNTIME_DIR explicitly on CI VMs

Add tagged aro image

Add annotated-tag build and push to the Makefile.
Without the annotation, TAG is empty and the
action is not performed.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Build and push tagged aro image into ACR

When the annotated TAG is not set, the new step fails.
Otherwise it builds the tagged image and pushes it
to the ACR.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Build release on tag

When CI is started from a tag, build the image and push it to the registry.
Extract the annotation from the tag and use it as the summary
for the changelog. An automated summary is extracted from commit
titles.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

mdm/mdsd++

make generate

Revert "[PIPELINES 4] Create release based on annotated git tag"

Fix: Broken pull path

The original path is not working as it is blocked for writing;
use the pipeline default instead.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Fix: Broken checkout code path

The checkout behaves differently when checking out a single repository:
it checks out to /s.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Update prod pipeline params to be consistent

Enable SBOM on all OneBranch pipelines

Fixing typo in paths

Add Documentation and Scripts for ARO Monitor Metric testing

Fix typo

Co-authored-by: Caden Marchese <56140267+cadenmarchese@users.noreply.github.com>

Handle cleanup of spawned processes.

Clarify a few things in the procedure.

Add example script to directly inject test data

Revert "Revert "[PIPELINES 4] Create release based on annotated git tag""

Fix: Remove build to run after e2e

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Bump nanoid from 3.1.22 to 3.2.0 in /portal

Bumps [nanoid](https://github.com/ai/nanoid) from 3.1.22 to 3.2.0.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.1.22...3.2.0)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Add uaenorth to non-zonal regions

imageconfig controller

Fixing bug where incorrect ACR domain name was being generated

added doc for cert rotation

Signed-off-by: Karan.Magdani <kmagdani@redhat.com>

Vendor installer release 4.9

This also forces the RP from Go 1.14 to Go 1.16.

Aside from requiring OCP 4.9 / Kubernetes 1.22 modules, the
other go.mod changes are all manual workarounds from failed
"make vendor" runs.

Automated updates from "make vendor"

Alter client-gen command to stay within repo

The way this is written seems to assume the ARO-RP repo is cloned
under the user's $GOPATH tree.  That's not where I typically clone
git repos for development.

Use relative paths in the client-gen command and arguments to stay
within the ARO-RP git repo.

Automated updates from "make generate"

Set InstallStream to OCP 4.9.8

Automated updates from "make discoverycache"

pipelines: Demand agents with go-1.16 capability for CI/E2E

Update documentation for Go 1.16 and installer 4.9

Fix: Remove the wrong git pull path

Removes the wrong git pull path for ADO RP-config
Removes unused parameter

Signed-off-by: Petr Kotas <pkotas@redhat.com>

fix: Add go1.16 requirement to run pipelines

With the addition of the 4.9 release, the Go build
has to run with go1.16.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Add geneva action to reconcile a failed NIC

Suppress stderr within Makefile command

Do not overwrite FIPs environment variable in CI VMs

fix: fix service connection to the github

The existing service connection does not meet the requirements
for the GitHub release.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

ADO Pipelines make no sense

Ensure TAG environment var is consistent case

Incorrect quoting on variables in pipeline

Clean up debug print statement in pipelines

Add INT/Prod variable group requirements

Update correct directory path for pipeline template files

Update release tag pipeline parameters

Vendor updated autorest adal to fix nil pointer exception in MSI

add fl to owners :-)

Fix: use the correct variable syntax for updated variables in pipelines

Bump 4.9.8 to 4.9.9 as it contains a bugfix that prevents cluster creation success

Vendor openshift installer carry patch

Bump golang version to 1.16 in CI VMs

Fix wrongly updated parameters and variables in prod release

Feedback follow up on image config controller

Use INT E2E Creds in Prod pipeline as we pull from the INT image registry and spin up our resources in our INT sub

clean temporary gomock folders (#1912)

Signed-off-by: Karan.Magdani <kmagdani@redhat.com>

fix 2 cred scan findings by adding suppression settings (#1960)

add tsaoptions json file, enable tsa in build rp official pipeline (#1959)

chore: removed logging onebranch pipelines files from aro-rp repo (#1942)

quick fixes in docs (#1956)

Removes unneeded field (#1962)

Updated linux container image for build (#1964)

Updating go-toolset tag to 1.16.12 (#1965)

Bump follow-redirects from 1.14.7 to 1.14.8 in /portal

Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.14.7 to 1.14.8.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.14.7...v1.14.8)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

add fips validation scripts and ci step

drop net_raw and make generate

Adding norwaywest to deploy from tag ALL regions Pipeline. (#1968)

Include variable groups for prod single region release (#1957)

Add Central US EUAP to nonZonalRegions (#1927)

remove network acceleration due to issues discovered

reapply the primary tag

make generate

Add metric gauge for nohost present on request to gateway
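
A sketch of such a gauge, assuming an emitter interface shaped like the RP's metrics package; the metric name and dimensions are illustrative:

```go
package gateway

import "net/http"

// emitter matches the shape of the RP's metrics interface (an assumption
// for this sketch).
type emitter interface {
	EmitGauge(topic string, value int64, dims map[string]string)
}

// countNoHost records a gauge when a request reaches the gateway without
// a Host header, so the condition is visible on a dashboard.
func countNoHost(m emitter, r *http.Request) {
	if r.Host == "" {
		m.EmitGauge("gateway.nohost", 1, map[string]string{
			"remoteaddr": r.RemoteAddr,
		})
	}
}
```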

Fix net_raw caps, make generate (#1971)

Refactors operator requeues

* Adds the clarifying comment on requeues into the checker controller
* Removes `Requeue: true` in places where we use `RequeueAfter`
  as it has no effect.
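
The rule the second bullet applies, as a controller-runtime sketch:

```go
package controllers

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// requeueLater shows the cleanup: a non-zero RequeueAfter already
// schedules a requeue, so pairing it with Requeue: true adds nothing.
func requeueLater() ctrl.Result {
	// Before: ctrl.Result{Requeue: true, RequeueAfter: time.Hour}
	return ctrl.Result{RequeueAfter: time.Hour}
}
```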

add a field to indicate spotInstances in node.conditions metric (#1928)

Bump url-parse from 1.5.3 to 1.5.7 in /portal

Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.3 to 1.5.7.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.3...1.5.7)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

docs: add cleaner info to shared env docs

add westus3 to pipeline manifests

add additional logging to redeploy to help understand state when this job fails in e2e

Re-enable Egress Lockdown

Enable egress lockdown feature by default on new clusters while also
allowing current clusters to be admin-upgraded with the new feature

Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>

fix: use the tag/commit as the aro version

ARO uses both tags and commits as its version.
Commits are used for the development scenario;
tags are used when building and deploying to
production.

add: copy ARO image to integration

Signed-off-by: Petr Kotas <petr@kotas.tech>

add: release pipeline documentation

Signed-off-by: Petr Kotas <petr@kotas.tech>

fix: HTTP 500 from "List cluster Azure resource" Geneva Action for unknown resource types (#1978)

* If we don't have an apiVersion defined for a resource, skip over it instead of returning an error.

* Reword the comment.

* Double quote the resource type in the log warning message.

Co-authored-by: Mikalai Radchuk <509198+m1kola@users.noreply.github.com>
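
A minimal sketch of the fix, with an illustrative apiVersions map; the real Geneva Action code differs in detail:

```go
package frontend

import (
	"strings"

	"github.com/sirupsen/logrus"
)

// filterKnownTypes drops resources whose type has no apiVersion defined,
// logging a warning instead of failing the whole request with an HTTP 500.
func filterKnownTypes(log *logrus.Entry, types []string, apiVersions map[string]string) []string {
	known := make([]string, 0, len(types))
	for _, t := range types {
		if _, ok := apiVersions[strings.ToLower(t)]; !ok {
			log.Warnf(`skipping resource type "%s": no apiVersion defined`, t)
			continue
		}
		known = append(known, t)
	}
	return known
}
```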

add operator storage acc and endpoints reconcilers

operator tests

storageacc handling for install/update

generate

vendor

review feedback

Add dev env rules exception

Comply with the Authorizer changes

Fix tests

Fix merge conflicts

Add operator flags

Fix tests

Change operator flags

Addressing feedback

generate

Operator flag tests

Addressing feedback

Fix

update cluster spec

Add an Operator controller for Managed Upgrade Operator

add MUO deployment manifests

run go generate

add a mocks directory in the operator

make dynamichelper produce fewer spurious changes for MUO

fix: move int mirroring to separate pipelines

Integration requires its own set of credentials,
which can only be provided in a separate pipeline.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

fix: provide the correct dependent pipeline (#1982)

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Update mirror-aro-to-int.yml for Azure Pipelines

Remove unused parameter

fix: replace parameter with variable (#1984)

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Update mirror-aro-to-int.yml for Azure Pipelines

Fix typo

Cleans up unused args in `muo.NewReconciler`

Bump url-parse from 1.5.7 to 1.5.10 in /portal

Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.7 to 1.5.10.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.7...1.5.10)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

Removes explicit `gomock.Eq()` matcher calls (#1983)

`gomock.Eq()` is a default matcher in gomock
so it doesn't have to be explicitly called in these cases
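
The equivalence in question, sketched against a hypothetical generated mock:

```go
package example

import (
	"testing"

	"github.com/golang/mock/gomock"
)

func TestEqIsTheDefaultMatcher(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	m := NewMockClient(ctrl) // hypothetical generated mock

	// These two expectations behave identically: gomock wraps a plain
	// value in gomock.Eq() automatically.
	m.EXPECT().Get(gomock.Eq("key")).Return("value", nil)
	m.EXPECT().Get("key").Return("value", nil)
}
```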

Docs: Set GOPATH (#1987)

- A few developers on various OS flavors have seen `make generate` fail after the upgrade to golang 1.16 due to client-gen updates. This appears to fix it.

Adds extra fields to the PreviewFeature CRD

Adds the controller implementation

It currently implements only one feature: NSG flow logs

preview feature controller and NSG flow log feature implementation

L series support - RP changes (#1751)

* add L-series SKUs to internal, admin, validate api

* make client

Add SKU availability and restriction checks to dynamic validation (#1790)

* add sku filtering and restriction checks

* add install-time instance validation

Minor ARO operator refactoring

* Gets rid of exported constants like `ENABLED` where exported constants are not required
* Gets rid of constant concatenations like `CONFIG_NAMESPACE + ".enabled"` to make searching easier
* Removes the unnecessary `Copy` method of the `OperatorFlags` struct as well as the package-level `DefaultOperatorFlags` variable.
  Introduces `DefaultOperatorFlags()` instead.
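
A hedged sketch of the resulting shape (the flag key and default value are illustrative):

```go
package arov1alpha1

// OperatorFlags is the free-form flag map on the ARO cluster resource.
type OperatorFlags map[string]string

// Full flag keys as plain constants, so a search for the flag string
// finds it; no CONFIG_NAMESPACE + ".enabled" style concatenation.
const flagNSGManaged = "aro.azuresubnets.nsg.managed"

// DefaultOperatorFlags returns a fresh map on every call, so callers can
// mutate their copy without a separate Copy method.
func DefaultOperatorFlags() OperatorFlags {
	return OperatorFlags{
		flagNSGManaged: "true",
	}
}
```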

Removing call to listByResourceGroup due to flakiness in the Azure API

add validate-fips step into onebranch build rp template

exclude vuln protobuf

exclude vulnerable containerd versions

Changed CloudErrorCodes from vars to consts. (#1997)

Co-authored-by: Jeremy Facchetti <jfacchet@jfacchet.remote.csb>

Add sourcebranchname to build_tag (#1996)

adding a way to pass additional flags to E2E tests (#1998)

Fix typo in deploy-development-rp doc (#2005)

Better documentation support for multiple envs (#1932)

- Now there are two env files: a standard one and an int-like one
  - Instructions modified for int envs to create the new file and source it
  - Fixed a small typo in the instructions that was being masked by indentation

vendor: fake operator client

Signed-off-by: Petr Kotas <pkotas@redhat.com>

feature: add autosizednodes reconciler

Introduce the autosizednodes reconciler, which watches the ARO cluster
object's feature flags for ReconcileAutoSizedNodes.

When the feature flag is present, a new KubeletConfig is created enabling the
AutoSizingReserved feature, which auto-computes the system reserved
for nodes.
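
A sketch of the object such a reconciler creates, using the machine-config-operator API; the object name and pool selector here are assumptions:

```go
package autosizednodes

import (
	mcv1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// autoSizedNodesConfig builds a KubeletConfig that turns on automatic
// system-reserved sizing for the selected machine config pools.
func autoSizedNodesConfig() *mcv1.KubeletConfig {
	autoSize := true
	return &mcv1.KubeletConfig{
		ObjectMeta: metav1.ObjectMeta{Name: "dynamic-node"},
		Spec: mcv1.KubeletConfigSpec{
			AutoSizingReserved: &autoSize,
			MachineConfigPoolSelector: &metav1.LabelSelector{
				// Assumed selector; the real controller targets the
				// appropriate pools.
				MatchLabels: map[string]string{
					"pools.operator.machineconfiguration.openshift.io/worker": "",
				},
			},
		},
	}
}
```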

feature: add aro cluster to workaround

Adds aro cluster instance to IsRequires check
to allow for feature flags checking.

Signed-off-by: Petr Kotas <pkotas@redhat.com>

feature: disable systemreserved when autosizednodes enabled

Signed-off-by: Petr Kotas <pkotas@redhat.com>

Avoid AdminUpdate panic when Nodes are down (#1972)

* Skip ensureAROOperator and aroDeploymentReady when the IngressProfiles data is missing, especially after cluster VM restarts as part of the update call
* Refactor Cluster Manager code to make ensureAROOperator code testable
* Add unit test for ensureAROOperator code

Co-authored-by: Ulrich Schlueter <uschlueter@redhat.com>
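
A hedged sketch of the guard described in the first bullet above; the types here are minimal stand-ins for the real cluster-manager code:

```go
package cluster

import "github.com/sirupsen/logrus"

// Minimal stand-ins for the real types (illustrative only).
type clusterProperties struct{ IngressProfiles []struct{ Name string } }

type manager struct {
	log        *logrus.Entry
	properties clusterProperties
}

// aroDeploymentNeeded skips ensureAROOperator/aroDeploymentReady when the
// IngressProfiles data is missing (e.g. right after cluster VM restarts
// during an admin update) instead of panicking on absent fields.
func (m *manager) aroDeploymentNeeded() bool {
	if len(m.properties.IngressProfiles) == 0 {
		m.log.Warn("skipping ensureAROOperator: IngressProfiles not populated")
		return false
	}
	return true
}
```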

update go-cosmosdb version to incorporate the latest change (#2006)

Filter out unwanted data from azure list geneva action (#1969)

* filter out Microsoft.Compute/snapshots from azure list geneva action

* change filter input for test

Doc to create & push ARO Operator image to ACR/Quay (#1888)

* Doc to create/push ARO Operator image to ACR/Quay

A document on how to create and publish the ARO Operator image to ACR/Quay.

Added alternative to go get command (#2015)

Update Makefile (#2020)

The ARO-RP returns color-encoded special characters, which are not decoded as of now. This change removes the color-encoding characters by default in e2e tests.
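
One common way to strip such ANSI color sequences, as a sketch; the regexp is a generic pattern, not the repo's exact change (which lives in the Makefile):

```go
package main

import (
	"fmt"
	"regexp"
)

// ansiColors matches ANSI SGR escape sequences such as "\x1b[31m".
var ansiColors = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func stripColors(s string) string {
	return ansiColors.ReplaceAllString(s, "")
}

func main() {
	fmt.Println(stripColors("\x1b[31merror:\x1b[0m something failed"))
}
```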

Update node-selector on muo namespace

Dockerfile for MUO image (#1993)

Update OB Build Pipeline to Pass Build Tag as Var (#2011)

* adding release_tag functionality to support releasing by tag or commit

add managed upgrade operator configuration settings and connect MUO if allowed and a pull secret exists

add muo config yaml

add openshift-azure-logging to the ignored namespaces

run go generate

Fix VM Redeploy Test Flake

- Removing test to check k8s Events for Node readiness
- Adding test for Azure VM readiness (power state)
- Adding test for Linux Kernel uptime to guarantee reboot

disable ipv6 router advertisements on rp/gateway vmss

Install python3 on RP and gateway VMs

make pullspec an optional flag

add enabled and managed by default
This commit is contained in:
Amber Brown 2021-08-17 14:41:51 +10:00, committed by Ellis Johnson
Parent 485c17117f
Commit 08e48b51b6
469 changed files: 24799 additions and 39078 deletions

.github/CODEOWNERS

@ -1 +1 @@
* @jewzaam @m1kola @bennerv @hawkowl @rogbas @petrkotas @ross-bryan @darthhexx @jharrington22 @cblecker
* @jewzaam @m1kola @bennerv @hawkowl @mwoodson @rogbas @petrkotas @bryanro92


@ -9,7 +9,7 @@ jobs:
- job: Build_and_push_images
pool:
name: ARO-CI
demands: go-1.17
demands: go-1.16
steps:
- template: ./templates/template-checkout.yml


@ -22,6 +22,7 @@ variables:
- template: vars.yml
jobs:
- job: Python_Unit_Tests
pool:
name: ARO-CI
@ -36,7 +37,7 @@ jobs:
- job: Golang_Unit_Tests
pool:
name: ARO-CI
demands: go-1.17
demands: go-1.16
steps:
- template: ./templates/template-checkout.yml
@ -46,6 +47,18 @@ jobs:
[[ -z "$(git status -s)" ]]
displayName: ⚙️ Run Golang code generate
- script: |
set -xe
make validate-go
[[ -z "$(git status -s)" ]]
displayName: 🕵️ Validate Golang code
- script: |
set -xe
make lint-go
[[ -z "$(git status -s)" ]]
displayName: 🕵️ Lint Golang code
- script: |
set -xe
make build-all
@ -82,11 +95,3 @@ jobs:
failIfCoverageEmpty: false
condition: succeededOrFailed()
- job: Lint_Admin_Portal
pool:
name: ARO-CI
steps:
- script: |
set -xe
make lint-admin-portal
displayName: 🧹 Lint Admin Portal


@ -15,7 +15,7 @@ jobs:
timeoutInMinutes: 180
pool:
name: ARO-CI
demands: go-1.17
demands: go-1.16
steps:
- template: ./templates/template-checkout.yml
- template: ./templates/template-az-cli-login.yml


@ -1,19 +1,23 @@
# Azure DevOps Pipeline for generating release notes
# Azure DevOps Pipeline building rp images and pushing to int acr
trigger: none
pr: none
variables:
- template: vars.yml
- name: TAG
- group: PROD CI Credentials
jobs:
- job: Generate_release_notes
- job: Build_and_push_images
condition: startsWith(variables['build.sourceBranch'], 'refs/tags/v2')
displayName: Generate release notes
displayName: Build release
pool:
name: ARO-CI
demands: go-1.16
steps:
- template: ./templates/template-checkout.yml
- template: ./templates/template-az-cli-login.yml
parameters:
azureDevOpsJSONSPN: $(aro-v4-ci-devops-spn)
@ -27,6 +31,19 @@ jobs:
## set the variable
echo "##vso[task.setvariable variable=TAG]${TAG}"
- template: ./templates/template-push-images-to-acr-tagged.yml
parameters:
rpImageACR: $(RP_IMAGE_ACR)
imageTag: $(TAG)
- template: ./templates/template-az-cli-logout.yml
- script: |
cp -a --parents aro "$(Build.ArtifactStagingDirectory)"
displayName: Copy artifacts
- task: PublishBuildArtifacts@1
displayName: Publish Artifacts
name: aro_deployer
- script: |
set -xe
MESSAGE="$(git for-each-ref refs/tags/${TAG} --format='%(contents)')"


@ -20,7 +20,7 @@ jobs:
condition: startsWith(variables['build.sourceBranch'], 'refs/tags/v2')
pool:
name: ARO-CI
demands: go-1.17
demands: go-1.16
steps:
- template: ./templates/template-checkout.yml
@ -37,14 +37,8 @@ jobs:
## set the variable
echo "##vso[task.setvariable variable=TAG]${TAG}"
- script: |
USERNAME=`echo "$(aro-v4-ci-pd-pull)" | base64 -d | cut -d':' -f1`
PASSWORD=`echo "$(aro-v4-ci-pd-pull)" | base64 -d | cut -d':' -f2-`
az acr login --name "${{variables.rpIntImageAcr}}"
az acr import \
--force \
--name "${{variables.rpIntImageAcr}}" \
--source "${{variables.rpProdImageAcr}}.azurecr.io/aro:${TAG}" \
--username "$USERNAME" \
--password "$PASSWORD"
--source "${{variables.rpProdImageAcr}}.azurecr.io/aro:${TAG}"
- template: ./templates/template-az-cli-logout.yml


@ -13,8 +13,7 @@ pr: none
variables:
Cdp_Definition_Build_Count: $[counter('', 0)] # needed for onebranch.pipeline.version task https://aka.ms/obpipelines/versioning
ONEBRANCH_AME_ACR_LOGIN: cdpxb8e9ef87cd634085ab141c637806568c00.azurecr.io
LinuxContainerImage: $(ONEBRANCH_AME_ACR_LOGIN)/b8e9ef87-cd63-4085-ab14-1c637806568c/official/ubi8/go-toolset:1.17.7-13 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
LinuxContainerImage: cdpxlinux.azurecr.io/user/aro/ubi8-gotoolset-1.16.12-4:20220202 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
Debian_Frontend: noninteractive
resources:


@ -12,9 +12,8 @@ trigger: none
pr: none
variables:
Cdp_Definition_Build_Count: $[counter('', 0)] # needed for onebranch.pipeline.version task https://aka.ms/obpipelines/versioning
ONEBRANCH_AME_ACR_LOGIN: cdpxb8e9ef87cd634085ab141c637806568c00.azurecr.io
LinuxContainerImage: $(ONEBRANCH_AME_ACR_LOGIN)/b8e9ef87-cd63-4085-ab14-1c637806568c/official/ubi8/go-toolset:1.17.7-13 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
Cdp_Definition_Build_Count: $[counter('', 0)] # needed for onebranch.pipeline.version task https://aka.ms/obpipelines/versioning
LinuxContainerImage: cdpxlinux.azurecr.io/user/aro/ubi8-gotoolset-1.16.12-4:20220202 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
Debian_Frontend: noninteractive
resources:


@ -45,8 +45,6 @@ stages:
rpMode: ''
aroVersionStorageAccount: $(aro-version-storage-account)
locations:
- australiacentral
- australiacentral2
- australiaeast
- australiasoutheast
- centralindia
@ -109,7 +107,6 @@ stages:
- northeurope
- norwayeast
- norwaywest
- swedencentral
- switzerlandnorth
- switzerlandwest
- westeurope


@ -21,7 +21,7 @@ jobs:
- template: ../vars.yml
pool:
name: ARO-CI
demands: go-1.17
demands: go-1.16
environment: ${{ parameters.environment }}
strategy:
runOnce:
@ -36,7 +36,7 @@ jobs:
azureDevOpsJSONSPN: $(aro-v4-ci-devops-spn)
- script: |
set -e
trap 'set +e; for c in $(docker ps -aq); do docker rm -f $c; done; docker image prune -af ; rm -rf ~/.docker/config.json; rm -rf /run/user/$(id -u $USERNAME)/containers/auth.json' EXIT
trap 'set +e; for c in $(docker ps -aq); do docker rm -f $c; done; docker image prune -af ; rm -rf ~/.docker/config.json' EXIT
export TAG=${{ parameters.imageTag }}
export RP_IMAGE_ACR=${{ parameters.rpImageAcr }}


@ -22,7 +22,7 @@ jobs:
- template: ../vars.yml
pool:
name: ARO-CI
demands: go-1.17
demands: go-1.16
environment: ${{ parameters.environment }}
strategy:
runOnce:


@ -4,7 +4,7 @@ parameters:
steps:
- script: |
set -e
trap 'set +e; for c in $(docker ps -aq); do docker rm -f $c; done; docker image prune -af ; rm -rf ~/.docker/config.json; rm -rf /run/user/$(id -u $USERNAME)/containers/auth.json' EXIT
trap 'set +e; for c in $(docker ps -aq); do docker rm -f $c; done; docker image prune -af ; rm -rf ~/.docker/config.json' EXIT
export RP_IMAGE_ACR=${{ parameters.rpImageACR }}
export TAG=${{ parameters.imageTag }}
az acr login --name "$RP_IMAGE_ACR"


@ -4,8 +4,7 @@
# Currently the docker version on our RHEL7 VMSS uses a version which
# does not support multi-stage builds. This is a temporary stop-gap
# until we get podman working without issue
ARG REGISTRY
FROM ${REGISTRY}/ubi8/go-toolset:1.17.7 AS builder
FROM registry.access.redhat.com/ubi7/go-toolset:1.16.12 AS builder
ENV GOOS=linux \
GOPATH=/go/
WORKDIR ${GOPATH}/src/github.com/Azure/ARO-RP
@ -14,7 +13,7 @@ RUN yum update -y
COPY . ${GOPATH}/src/github.com/Azure/ARO-RP/
RUN make aro && make e2e.test
FROM ${REGISTRY}/ubi8/ubi-minimal
FROM registry.access.redhat.com/ubi7/ubi-minimal
RUN microdnf update && microdnf clean all
COPY --from=builder /go/src/github.com/Azure/ARO-RP/aro /go/src/github.com/Azure/ARO-RP/e2e.test /usr/local/bin/
ENTRYPOINT ["aro"]


@ -1,5 +1,4 @@
ARG REGISTRY
FROM ${REGISTRY}/ubi8/go-toolset:1.16.12 AS builder
FROM registry.access.redhat.com/ubi8/go-toolset:1.16.12 AS builder
ARG MUOVERSION
ENV DOWNLOAD_URL=https://github.com/openshift/managed-upgrade-operator/archive/${MUOVERSION}.tar.gz
ENV GOOS=linux \
@ -13,7 +12,7 @@ RUN curl -Lq $DOWNLOAD_URL | tar -xz --strip-components=1
RUN go build -gcflags="all=-trimpath=/go/" -asmflags="all=-trimpath=/go/" -tags mandate_fips -o build/_output/bin/managed-upgrade-operator ./cmd/manager
#### Runtime container
FROM ${REGISTRY}/ubi8/ubi-minimal:latest
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
ENV USER_UID=1001 \
USER_NAME=managed-upgrade-operator


@ -5,10 +5,10 @@ ARO_IMAGE_BASE = ${RP_IMAGE_ACR}.azurecr.io/aro
E2E_FLAGS ?= -test.v --ginkgo.v --ginkgo.timeout 180m --ginkgo.flake-attempts=2
# fluentbit version must also be updated in RP code, see pkg/util/version/const.go
FLUENTBIT_VERSION = 1.9.4-1
FLUENTBIT_VERSION = 1.7.8-1
FLUENTBIT_IMAGE ?= ${RP_IMAGE_ACR}.azurecr.io/fluentbit:$(FLUENTBIT_VERSION)
AUTOREST_VERSION = 3.6.2
AUTOREST_IMAGE = quay.io/openshift-on-azure/autorest:${AUTOREST_VERSION}
AUTOREST_VERSION = 3.3.2
AUTOREST_IMAGE = "quay.io/openshift-on-azure/autorest:${AUTOREST_VERSION}"
ifneq ($(shell uname -s),Darwin)
export CGO_CFLAGS=-Dgpgme_off_t=off_t
@ -20,17 +20,6 @@ else
VERSION = $(TAG)
endif
# default to registry.access.redhat.com for build images on local builds and CI builds without $RP_IMAGE_ACR set.
ifeq ($(RP_IMAGE_ACR),arointsvc)
REGISTRY = arointsvc.azurecr.io
else ifeq ($(RP_IMAGE_ACR),arosvc)
REGISTRY = arosvc.azurecr.io
else ifeq ($(RP_IMAGE_ACR),)
REGISTRY = registry.access.redhat.com
else
REGISTRY = $(RP_IMAGE_ACR)
endif
ARO_IMAGE ?= $(ARO_IMAGE_BASE):$(VERSION)
build-all:
@ -40,7 +29,7 @@ aro: generate
go build -tags aro,containers_image_openpgp,codec.safe -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro
runlocal-rp:
go run -tags aro,containers_image_openpgp -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro rp
go run -tags aro -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro rp
az: pyenv
. pyenv/bin/activate && \
@ -56,12 +45,12 @@ clean:
find -type d -name 'gomock_reflect_[0-9]*' -exec rm -rf {} \+ 2>/dev/null
client: generate
hack/build-client.sh "${AUTOREST_IMAGE}" 2020-04-30 2021-09-01-preview 2022-04-01 2022-09-04
hack/build-client.sh "${AUTOREST_IMAGE}" 2020-04-30 2021-09-01-preview
# TODO: hard coding dev-config.yaml is clunky; it is also probably convenient to
# override COMMIT.
deploy:
go run -tags aro,containers_image_openpgp -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro deploy dev-config.yaml ${LOCATION}
go run -tags aro -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro deploy dev-config.yaml ${LOCATION}
dev-config.yaml:
go run ./hack/gendevconfig >dev-config.yaml
@ -75,21 +64,23 @@ generate:
go generate ./...
image-aro: aro e2e.test
docker pull $(REGISTRY)/ubi8/ubi-minimal
docker build --platform=linux/amd64 --network=host --no-cache -f Dockerfile.aro -t $(ARO_IMAGE) --build-arg REGISTRY=$(REGISTRY) .
docker pull registry.access.redhat.com/ubi8/ubi-minimal
docker build --network=host --no-cache -f Dockerfile.aro -t $(ARO_IMAGE) .
image-aro-multistage:
docker build --platform=linux/amd64 --network=host --no-cache -f Dockerfile.aro-multistage -t $(ARO_IMAGE) --build-arg REGISTRY=$(REGISTRY) .
docker build --network=host --no-cache -f Dockerfile.aro-multistage -t $(ARO_IMAGE) .
image-autorest:
docker build --platform=linux/amd64 --network=host --no-cache --build-arg AUTOREST_VERSION="${AUTOREST_VERSION}" --build-arg REGISTRY=$(REGISTRY) -f Dockerfile.autorest -t ${AUTOREST_IMAGE} .
docker build --network=host --no-cache --build-arg AUTOREST_VERSION="${AUTOREST_VERSION}" \
-f Dockerfile.autorest -t ${AUTOREST_IMAGE} .
image-fluentbit:
docker build --platform=linux/amd64 --network=host --no-cache --build-arg VERSION=$(FLUENTBIT_VERSION) --build-arg REGISTRY=$(REGISTRY) -f Dockerfile.fluentbit -t $(FLUENTBIT_IMAGE) .
docker build --network=host --no-cache --build-arg VERSION=$(FLUENTBIT_VERSION) \
-f Dockerfile.fluentbit -t $(FLUENTBIT_IMAGE) .
image-proxy: proxy
docker pull $(REGISTRY)/ubi8/ubi-minimal
docker build --platform=linux/amd64 --no-cache -f Dockerfile.proxy -t $(REGISTRY)/proxy:latest --build-arg REGISTRY=$(REGISTRY) .
docker pull registry.access.redhat.com/ubi8/ubi-minimal
docker build --no-cache -f Dockerfile.proxy -t ${RP_IMAGE_ACR}.azurecr.io/proxy:latest .
publish-image-aro: image-aro
docker push $(ARO_IMAGE)
@ -118,17 +109,16 @@ proxy:
go build -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./hack/proxy
run-portal:
go run -tags aro,containers_image_openpgp -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro portal
go run -tags aro -ldflags "-X github.com/Azure/ARO-RP/pkg/util/version.GitCommit=$(VERSION)" ./cmd/aro portal
build-portal:
cd portal/v1 && npm install && npm run build && cd ../v2 && npm install && npm run build
make generate
cd portal && npm install && npm run build
pyenv:
python3 -m venv pyenv
. pyenv/bin/activate && \
pip install -U pip && \
pip install -r requirements.txt && \
pip install autopep8 azdev azure-mgmt-loganalytics==0.2.0 colorama ruamel.yaml wheel && \
azdev setup -r . && \
sed -i -e "s|^dev_sources = $(PWD)$$|dev_sources = $(PWD)/python|" ~/.azure/config
@ -142,7 +132,7 @@ secrets:
secrets-update:
@[ "${SECRET_SA_ACCOUNT_NAME}" ] || ( echo ">> SECRET_SA_ACCOUNT_NAME is not set"; exit 1 )
tar -czf secrets.tar.gz secrets
az storage blob upload -n secrets.tar.gz -c secrets -f secrets.tar.gz --overwrite --account-name ${SECRET_SA_ACCOUNT_NAME} >/dev/null
az storage blob upload -n secrets.tar.gz -c secrets -f secrets.tar.gz --account-name ${SECRET_SA_ACCOUNT_NAME} >/dev/null
rm secrets.tar.gz
tunnel:
@ -164,50 +154,29 @@ validate-go:
@[ -z "$$(ls pkg/util/*.go 2>/dev/null)" ] || (echo error: go files are not allowed in pkg/util, use a subpackage; exit 1)
@[ -z "$$(find -name "*:*")" ] || (echo error: filenames with colons are not allowed on Windows, please rename; exit 1)
@sha256sum --quiet -c .sha256sum || (echo error: client library is stale, please run make client; exit 1)
go vet -tags containers_image_openpgp ./...
go vet ./...
go test -tags e2e -run ^$$ ./test/e2e/...
validate-go-action:
go run ./hack/licenses -validate -ignored-go vendor,pkg/client,.git -ignored-python python/client,vendor,.git
go run ./hack/validate-imports cmd hack pkg test
@[ -z "$$(ls pkg/util/*.go 2>/dev/null)" ] || (echo error: go files are not allowed in pkg/util, use a subpackage; exit 1)
@[ -z "$$(find -name "*:*")" ] || (echo error: filenames with colons are not allowed on Windows, please rename; exit 1)
@sha256sum --quiet -c .sha256sum || (echo error: client library is stale, please run make client; exit 1)
validate-fips:
hack/fips/validate-fips.sh
unit-test-go:
go run ./vendor/gotest.tools/gotestsum/main.go --format pkgname --junitfile report.xml -- -tags=aro,containers_image_openpgp -coverprofile=cover.out ./...
go run ./vendor/gotest.tools/gotestsum/main.go --format pkgname --junitfile report.xml -- -tags=aro -coverprofile=cover.out ./...
lint-go:
hack/lint-go.sh
lint-admin-portal:
docker build --platform=linux/amd64 --build-arg REGISTRY=$(REGISTRY) -f Dockerfile.portal_lint . -t linter:latest --no-cache
docker run --platform=linux/amd64 -t --rm linter:latest
go run ./vendor/github.com/golangci/golangci-lint/cmd/golangci-lint run
test-python: pyenv az
. pyenv/bin/activate && \
azdev linter && \
azdev style && \
hack/unit-test-python.sh
shared-cluster-login:
@oc login ${SHARED_CLUSTER_API} -u kubeadmin -p ${SHARED_CLUSTER_KUBEADMIN_PASSWORD}
unit-test-python:
hack/unit-test-python.sh
hack/format-yaml/format-yaml.py .pipelines
admin.kubeconfig:
hack/get-admin-kubeconfig.sh /subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${RESOURCEGROUP}/providers/Microsoft.RedHatOpenShift/openShiftClusters/${CLUSTER} >admin.kubeconfig
aks.kubeconfig:
hack/get-admin-aks-kubeconfig.sh
vendor:
# See comments in the script for background on why we need it
hack/update-go-module-dependencies.sh
.PHONY: admin.kubeconfig aks.kubeconfig aro az clean client deploy dev-config.yaml discoverycache generate image-aro image-aro-multistage image-fluentbit image-proxy lint-go runlocal-rp proxy publish-image-aro publish-image-aro-multistage publish-image-fluentbit publish-image-proxy secrets secrets-update e2e.test tunnel test-e2e test-go test-python vendor build-all validate-go unit-test-go coverage-go validate-fips
.PHONY: admin.kubeconfig aro az clean client deploy dev-config.yaml discoverycache generate image-aro image-aro-multistage image-fluentbit image-proxy lint-go runlocal-rp proxy publish-image-aro publish-image-aro-multistage publish-image-fluentbit publish-image-proxy secrets secrets-update e2e.test tunnel test-e2e test-go test-python vendor build-all validate-go unit-test-go coverage-go validate-fips


@ -11,9 +11,8 @@ import (
configclient "github.com/openshift/client-go/config/clientset/versioned"
consoleclient "github.com/openshift/client-go/console/clientset/versioned"
imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
operatorclient "github.com/openshift/client-go/operator/clientset/versioned"
securityclient "github.com/openshift/client-go/security/clientset/versioned"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
"github.com/sirupsen/logrus"
"k8s.io/client-go/kubernetes"
@ -32,7 +31,6 @@ import (
"github.com/Azure/ARO-RP/pkg/operator/controllers/genevalogging"
"github.com/Azure/ARO-RP/pkg/operator/controllers/imageconfig"
"github.com/Azure/ARO-RP/pkg/operator/controllers/machine"
"github.com/Azure/ARO-RP/pkg/operator/controllers/machinehealthcheck"
"github.com/Azure/ARO-RP/pkg/operator/controllers/machineset"
"github.com/Azure/ARO-RP/pkg/operator/controllers/monitoring"
"github.com/Azure/ARO-RP/pkg/operator/controllers/muo"
@ -93,7 +91,7 @@ func operator(ctx context.Context, log *logrus.Entry) error {
if err != nil {
return err
}
maocli, err := machineclient.NewForConfig(restConfig)
maocli, err := maoclient.NewForConfig(restConfig)
if err != nil {
return err
}
@ -109,10 +107,6 @@ func operator(ctx context.Context, log *logrus.Entry) error {
if err != nil {
return err
}
operatorcli, err := operatorclient.NewForConfig(restConfig)
if err != nil {
return err
}
// TODO (NE): dh is sometimes passed, sometimes created later. Can we standardize?
dh, err := dynamichelper.New(log, restConfig)
if err != nil {
@ -222,15 +216,11 @@ func operator(ctx context.Context, log *logrus.Entry) error {
mgr)).SetupWithManager(mgr); err != nil {
return fmt.Errorf("unable to create controller %s: %v", autosizednodes.ControllerName, err)
}
if err = (machinehealthcheck.NewReconciler(
arocli, dh)).SetupWithManager(mgr); err != nil {
return fmt.Errorf("unable to create controller %s: %v", machinehealthcheck.ControllerName, err)
}
}
if err = (checker.NewReconciler(
log.WithField("controller", checker.ControllerName),
arocli, kubernetescli, maocli, operatorcli, configcli, role)).SetupWithManager(mgr); err != nil {
arocli, kubernetescli, maocli, role)).SetupWithManager(mgr); err != nil {
return fmt.Errorf("unable to create controller %s: %v", checker.ControllerName, err)
}

Binary data: docs/img/AROMonitor.png (binary file not displayed; size before: 104 KiB, after: 51 KiB)


@ -27,7 +27,7 @@ locations.
Set SECRET_SA_ACCOUNT_NAME to the name of the storage account:
```bash
SECRET_SA_ACCOUNT_NAME=rharosecretsdev
SECRET_SA_ACCOUNT_NAME=rharosecrets
```
1. You will need an AAD object (this could be your AAD user, or an AAD group of
@ -35,7 +35,7 @@ locations.
development environment key vault(s). Set ADMIN_OBJECT_ID to the object ID.
```bash
ADMIN_OBJECT_ID="$(az ad group show -g 'aro-engineering' --query id -o tsv)"
ADMIN_OBJECT_ID="$(az ad group show -g 'ARO v4 RP Engineering' --query objectId -o tsv)"
```
1. You will need the ARO RP-specific pull secret (ask one of the
@ -45,7 +45,7 @@ locations.
PULL_SECRET=...
```
1. Install [Go 1.17](https://golang.org/dl) or later, if you haven't already.
1. Install [Go 1.16](https://golang.org/dl) or later, if you haven't already.
1. Install the [Azure
CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli), if you
@ -88,9 +88,9 @@ locations.
```
```bash
> __NOTE:__: for macos change the -w0 option for base64 to -b0
AZURE_ARM_CLIENT_ID="$(az ad app create \
--display-name aro-v4-arm-shared \
--identifier-uris "https://$(uuidgen)/" \
--query appId \
-o tsv)"
az ad app credential reset \
@ -117,9 +117,9 @@ locations.
Now create the application:
```bash
> __NOTE:__: for macos change the -w0 option for base64 to -b0
AZURE_FP_CLIENT_ID="$(az ad app create \
--display-name aro-v4-fp-shared \
--identifier-uris "https://$(uuidgen)/" \
--query appId \
-o tsv)"
az ad app credential reset \
@ -141,6 +141,7 @@ locations.
AZURE_RP_CLIENT_ID="$(az ad app create \
--display-name aro-v4-rp-shared \
--end-date '2299-12-31T11:59:59+00:00' \
--identifier-uris "https://$(uuidgen)/" \
--key-type password \
--password "$AZURE_RP_CLIENT_SECRET" \
--query appId \
@ -161,6 +162,7 @@ locations.
AZURE_GATEWAY_CLIENT_ID="$(az ad app create \
--display-name aro-v4-gateway-shared \
--end-date '2299-12-31T11:59:59+00:00' \
--identifier-uris "https://$(uuidgen)/" \
--key-type password \
--password "$AZURE_GATEWAY_CLIENT_SECRET" \
--query appId \
@ -175,6 +177,7 @@ locations.
AZURE_CLIENT_ID="$(az ad app create \
--display-name aro-v4-tooling-shared \
--end-date '2299-12-31T11:59:59+00:00' \
--identifier-uris "https://$(uuidgen)/" \
--key-type password \
--password "$AZURE_CLIENT_SECRET" \
--query appId \
@ -191,27 +194,27 @@ locations.
* Go into the Azure Portal
* Go to Azure Active Directory
* Navigate to the `aro-v4-tooling-shared` app registration page
* Navigate to the `aro-v4-tooling-shared` app page
* Click 'API permissions' in the left side pane
* Click 'Add a permission'.
* Click 'Microsoft Graph'
* Click 'Add a permission'.
* Select 'Application permissions'
* Search for 'Application' and select `Application.ReadWrite.OwnedBy`
* Click 'Add permissions'
* This request will need to be approved by a tenant administrator. If you are one, you can click the `Grant admin consent for <name>` button to the right of the `Add a permission` button on the app page
1. Set up the RP role definitions and subscription role assignments in your Azure subscription. The usage of "uuidgen" for fpRoleDefinitionId is simply there to keep from interfering with any linked resources and to create the role net new. This mimics the RBAC that ARM sets up. With at least `User Access Administrator` permissions on your subscription, do:
1. Set up the RP role definitions and subscription role assignments in your
Azure subscription. This mimics the RBAC that ARM sets up. With at least
`User Access Administrator` permissions on your subscription, do:
```bash
LOCATION=<YOUR-REGION>
az deployment sub create \
-l $LOCATION \
--template-file pkg/deploy/assets/rbac-development.json \
--template-file deploy/rbac-development.json \
--parameters \
"armServicePrincipalId=$(az ad sp list --filter "appId eq '$AZURE_ARM_CLIENT_ID'" --query '[].id' -o tsv)" \
"fpServicePrincipalId=$(az ad sp list --filter "appId eq '$AZURE_FP_CLIENT_ID'" --query '[].id' -o tsv)" \
"fpRoleDefinitionId"="$(uuidgen)" \
"devServicePrincipalId=$(az ad sp list --filter "appId eq '$AZURE_CLIENT_ID'" --query '[].id' -o tsv)" \
"armServicePrincipalId=$(az ad sp list --filter "appId eq '$AZURE_ARM_CLIENT_ID'" --query '[].objectId' -o tsv)" \
"fpServicePrincipalId=$(az ad sp list --filter "appId eq '$AZURE_FP_CLIENT_ID'" --query '[].objectId' -o tsv)" \
"devServicePrincipalId=$(az ad sp list --filter "appId eq '$AZURE_CLIENT_ID'" --query '[].objectId' -o tsv)" \
>/dev/null
```
@ -227,9 +230,9 @@ locations.
```
```bash
> __NOTE:__: for macos change the -w0 option for base64 to -b0
AZURE_PORTAL_CLIENT_ID="$(az ad app create \
--display-name aro-v4-portal-shared \
--identifier-uris "https://$(uuidgen)/" \
--reply-urls "https://localhost:8444/callback" \
--query appId \
-o tsv)"
@ -238,6 +241,8 @@ locations.
--cert "$(base64 -w0 <secrets/portal-client.crt)" >/dev/null
```
TODO: more steps are needed to configure aro-v4-portal-shared.
1. Create an AAD application which will fake up the dbtoken client.
1. Create the application and set `requestedAccessTokenVersion`
@ -248,9 +253,8 @@ locations.
--query appId \
-o tsv)"
OBJ_ID="$(az ad app show --id $AZURE_DBTOKEN_CLIENT_ID --query id)"
OBJ_ID="$(az ad app show --id $AZURE_DBTOKEN_CLIENT_ID --query objectId)"
> __NOTE:__: the graph API requires this to be done from a managed machine
az rest --method PATCH \
--uri https://graph.microsoft.com/v1.0/applications/$OBJ_ID/ \
--body '{"api":{"requestedAccessTokenVersion": 2}}'
@ -352,7 +356,7 @@ Variable | Certificate Client | Subscription Type | AAD App Nam
# Import firstparty.pem to keyvault v4-eastus-svc
az keyvault certificate import --vault-name <kv_name> --name rp-firstparty --file firstparty.pem
# Rotate certificates for SPs ARM, FP, and PORTAL (wherever applicable)
# Rotate certificates for SPs ARM, FP, and PORTAL (wherever applicable)
az ad app credential reset \
--id "$AZURE_ARM_CLIENT_ID" \
--cert "$(base64 -w0 <secrets/arm.crt)" >/dev/null
@ -368,13 +372,13 @@ az ad app credential reset \
5. The RP makes API calls to kubernetes cluster via a proxy VMSS agent. For the agent to get the updated certificates, this vm needs to be redeployed. Proxy VM is currently deployed by the `deploy_env_dev` function in `deploy-shared-env.sh`. It makes use of `env-development.json`
6. Run `[rharosecretsdev|aroe2esecrets] make secrets-update` to upload it to your
6. Run `[rharosecrets|aroe2esecrets] make secrets-update` to upload it to your
storage account so other people on your team can access it via `make secrets`
# Environment file
1. Choose the resource group prefix. The resource group location will be
The resource group location will be appended to the prefix to make the resource group name. If a v4-prefixed environment exists in the subscription already, use a unique prefix.
appended to the prefix to make the resource group name.
```bash
RESOURCEGROUP_PREFIX=v4
@ -395,18 +399,18 @@ storage account so other people on your team can access it via `make secrets`
export AZURE_SUBSCRIPTION_ID='$AZURE_SUBSCRIPTION_ID'
export AZURE_ARM_CLIENT_ID='$AZURE_ARM_CLIENT_ID'
export AZURE_FP_CLIENT_ID='$AZURE_FP_CLIENT_ID'
export AZURE_FP_SERVICE_PRINCIPAL_ID='$(az ad sp list --filter "appId eq '$AZURE_FP_CLIENT_ID'" --query '[].id' -o tsv)'
export AZURE_FP_SERVICE_PRINCIPAL_ID='$(az ad sp list --filter "appId eq '$AZURE_FP_CLIENT_ID'" --query '[].objectId' -o tsv)'
export AZURE_DBTOKEN_CLIENT_ID='$AZURE_DBTOKEN_CLIENT_ID'
export AZURE_PORTAL_CLIENT_ID='$AZURE_PORTAL_CLIENT_ID'
export AZURE_PORTAL_ACCESS_GROUP_IDS='$ADMIN_OBJECT_ID'
export AZURE_PORTAL_ELEVATED_GROUP_IDS='$ADMIN_OBJECT_ID'
export AZURE_CLIENT_ID='$AZURE_CLIENT_ID'
export AZURE_SERVICE_PRINCIPAL_ID='$(az ad sp list --filter "appId eq '$AZURE_CLIENT_ID'" --query '[].id' -o tsv)'
export AZURE_SERVICE_PRINCIPAL_ID='$(az ad sp list --filter "appId eq '$AZURE_CLIENT_ID'" --query '[].objectId' -o tsv)'
export AZURE_CLIENT_SECRET='$AZURE_CLIENT_SECRET'
export AZURE_RP_CLIENT_ID='$AZURE_RP_CLIENT_ID'
export AZURE_RP_CLIENT_SECRET='$AZURE_RP_CLIENT_SECRET'
export AZURE_GATEWAY_CLIENT_ID='$AZURE_GATEWAY_CLIENT_ID'
export AZURE_GATEWAY_SERVICE_PRINCIPAL_ID='$(az ad sp list --filter "appId eq '$AZURE_GATEWAY_CLIENT_ID'" --query '[].id' -o tsv)'
export AZURE_GATEWAY_SERVICE_PRINCIPAL_ID='$(az ad sp list --filter "appId eq '$AZURE_GATEWAY_CLIENT_ID'" --query '[].objectId' -o tsv)'
export AZURE_GATEWAY_CLIENT_SECRET='$AZURE_GATEWAY_CLIENT_SECRET'
export RESOURCEGROUP="$RESOURCEGROUP_PREFIX-\$LOCATION"
export PROXY_HOSTNAME="vm0.$PROXY_DOMAIN_NAME_LABEL.\$LOCATION.cloudapp.azure.com"
@ -478,7 +482,7 @@ each of the bash functions below.
import_certs_secrets
```
> __NOTE:__: in production, three additional keys/certificates (rp-mdm, rp-mdsd, and
Note: in production, three additional keys/certificates (rp-mdm, rp-mdsd, and
cluster-mdsd) are also required in the $KEYVAULT_PREFIX-svc key vault. These
are client certificates for RP metric and log forwarding (respectively) to
Geneva.
@ -510,12 +514,10 @@ each of the bash functions below.
--file secrets/cluster-logging-int.pem
```
> __NOTE:__: in development, if you don't have valid certs for these, you can just
Note: in development, if you don't have valid certs for these, you can just
upload `localhost.pem` as a placeholder for each of these. This will avoid an
error stemming from them not existing, but it will result in logging pods
crash looping in any clusters you make. Additionally, no gateway resources are
created in development so you should not need to execute the cert import statement
for the "-gwy" keyvault.
crash looping in any clusters you make.
1. In pre-production (int, e2e) certain certificates are provisioned via keyvault
integration. These should be rotated and generated in the keyvault itself:
@ -546,4 +548,4 @@ Development value: secrets/cluster-logging-int.pem
## Append Resource Group to Subscription Cleaner DenyList
* We have subscription pruning that takes place routinely and need to add our resource group for the shared rp environment to the `denylist` of the cleaner:
* [https://github.com/Azure/ARO-RP/blob/e918d1b87be53a3b3cdf18b674768a6480fb56b8/hack/clean/clean.go#L29](https://github.com/Azure/ARO-RP/blob/e918d1b87be53a3b3cdf18b674768a6480fb56b8/hack/clean/clean.go#L29)
* [https://github.com/Azure/ARO-RP/blob/e918d1b87be53a3b3cdf18b674768a6480fb56b8/hack/clean/clean.go#L29](https://github.com/Azure/ARO-RP/blob/e918d1b87be53a3b3cdf18b674768a6480fb56b8/hack/clean/clean.go#L29)


@ -3,13 +3,16 @@
This document goes through the development dependencies one requires in order to build the RP code.
## Software Required
1. Install [Go 1.17](https://golang.org/dl) or later, if you haven't already.
1. Install [Go 1.16](https://golang.org/dl) or later, if you haven't already.
1. Configure `GOPATH` as an OS environment variable in your shell (a requirement of some dependencies for `make generate`). If you want to keep the default path, you can add something like `GOPATH=$(go env GOPATH)` to your shell's profile/RC file.
1. Install [Python 3.6+](https://www.python.org/downloads), if you haven't already. You will also need `python-setuptools` installed, if you don't have it installed already.
1. Install the [az client](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli), if you haven't already.
1. Install `virtualenv`, a tool for managing Python virtual environments.
> The package is called `python-virtualenv` on both Fedora and Debian-based systems.
1. Install the [az client](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli), if you haven't already. You will need `az` version 2.0.72 or greater, as this version includes the `az network vnet subnet update --disable-private-link-service-network-policies` flag.
1. Install [OpenVPN](https://openvpn.net/community-downloads) if it is not already installed
@ -18,55 +21,54 @@ This document goes through the development dependencies one requires in order to
1. Install [Podman](https://podman.io/getting-started/installation) and [podman-docker](https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users#) if you haven't already, used for building container images.
1. Run for `az acr login` compatability
```bash
sudo touch /etc/containers/nodocker
```
1. Install [golangci-lint](https://golangci-lint.run/) and [yamllint](https://yamllint.readthedocs.io/en/stable/quickstart.html#installing-yamllint) (optional but your code is required to comply to pass the CI)
```bash
sudo touch /etc/containers/nodocker
```
### Fedora Packages
1. Install the `gpgme-devel`, `libassuan-devel`, and `openssl` packages.
> `sudo dnf install -y gpgme-devel libassuan-devel openssl`
> `sudo dnf install -y gpgme-devel libassuan-devel openssl`
1. Install [Docker 17.05+](https://docs.docker.com/engine/install/fedora/) or later, used as an alternative to podman.
### Debian Packages
Install the `libgpgme-dev` package.
1. Install the `libgpgme-dev` package.
### MacOS Packages
1. We are open to developers on MacOS working on this repository. We are asking MacOS users to setup GNU utils on their machines.
We are aiming to limit the amount of shell scripting, etc. in the repository, installing the GNU utils on MacOS will minimise the chances of unexpected differences in command line flags, usages, etc., and make it easier for everyone to ensure compatibility down the line.
We are aiming to limit the amount of shell scripting, etc. in the repository, installing the GNU utils on MacOS will minimise the chances of unexpected differences in command line flags, usages, etc., and make it easier for everyone to ensure compatibility down the line.
Install the following packages on MacOS:
```bash
# GNU Utils
brew install coreutils findutils gnu-tar grep
Install the following packages on MacOS:
```bash
# GNU Utils
brew install coreutils
brew install findutils
brew install gnu-tar
brew install grep
# Install envsubst (provided with gettext)
brew install gettext
brew link gettext
# Install envsubst
brew install gettext
brew link --force gettext
# Install gpgme
brew install gpgme
```
# Install
brew install gpgme
1. Modify your `~/.zshrc` (or `~/.bashrc` for Bash): this prepends `PATH` with GNU Utils paths;
# GNU utils
# Ref: https://web.archive.org/web/20190704110904/https://www.topbug.net/blog/2013/04/14/install-and-use-gnu-command-line-tools-in-mac-os-x
# gawk, diffutils, gzip, screen, watch, git, rsync, wdiff
export PATH="/usr/local/bin:$PATH"
# coreutils
export PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
# findutils
export PATH="/usr/local/opt/findutils/libexec/gnubin:$PATH"
```bash
echo "export PATH=$(find $(brew --prefix)/opt -type d -follow -name gnubin -print | paste -s -d ':' -):\$PATH" >> ~/.zshrc
```
#grep
export PATH="/usr/local/opt/grep/libexec/gnubin:$PATH"
1. Add the following into your `~/.zshrc`/`~/.bashrc` file:
```bash
export LDFLAGS="-L$(brew --prefix)/lib"
export CFLAGS="-I$(brew --prefix)/include"
export CGO_LDFLAGS=$LDFLAGS
export CGO_CFLAGS=$CFLAGS
```
#python-virtualenv
sudo pip3 install virtualenv
```
## Getting Started
1. Login to Azure:
@ -82,8 +84,9 @@ Install the `libgpgme-dev` package.
```bash
git clone https://github.com/Azure/ARO-RP.git $GOPATH/src/github.com/Azure/ARO-RP
```
1. Go to project:
```bash
cd ${GOPATH:-$HOME/go}/src/github.com/Azure/ARO-RP
```


@ -10,37 +10,43 @@ The ARO monitor component (the part of the aro binary you activate when you exec
![Aro Monitor Architecture](img/AROMonitor.png "Aro Monitor Architecture")
To send data to Geneva the monitor uses an instance of a Geneva MDM container as a proxy of the Geneva API. The MDM container accepts statsd formatted data (the Azure Geneva version of statsd, that is) over a UNIX (Domain) socket. The MDM container then forwards the metric data over a https link to the Geneva API. Please note that a Unix socket can only be accessed from the same machine.
To send data to Geneva the monitor uses an instance of a Geneva MDM container as a proxy of the Geneva API. The MDM container accepts statsd formatted data (the Azure Geneva version of statsd, that is) over a UNIX (Domain) socket. The MDM container then forwards the metric data over a https link to the Geneva API. Please note that using a Unix socket can only be accessed from the same machine.
The monitor picks the required information about which clusters should actually monitor from its corresponding Cosmos DB. If multiple monitor instances run in parallel (i.e. connect to the same database instance) as is the case in production, they negotiate which instance monitors what cluster (see : [monitoring.md](./monitoring.md)).
# Unit Testing Setup
## Unit Testing Setup
If you work on monitor metrics in local dev mode (RP_MODE=Development) you most likely want to see your data somewhere in Geneva INT (https://jarvis-west-int.cloudapp.net/) before you ship your code.
There are two ways to set to achieve this:
- Run the Geneva MDM container locally
There are two ways to set to acchieve this:
- Run the Geneva MDM container locally (won't work on macOS, see Remote Container section below)
- Spawn a VM, start the Geneva container there and connect/tunnel to it.
and two protocols to chose from:
- Unix Domain Sockets, which is the way production is currently (April 2022) run
- or UDP, which is much easier to use and is the way it will be used on kubernetes clusters in the future
## Local Container Setup
### Local Container Setup
Before you start, make sure :
- to run `source ./env`
- you ran `SECRET_SA_ACCOUNT_NAME=rharosecretsdev make secrets` before
- know which "account" and "namespace" value you want to use on Geneva INT for your metric data and update your env to set the following variables before you start the monitor:
- you ran `SECRET_SA_ACCOUNT_NAME=rharosecrets make secrets` before
- know which "account" and "namespace" value you want to use on Geneva INT for your metric data and
update your env to set the
- CLUSTER_MDM_ACCOUNT
- CLUSTER_MDM_NAMESPACE
- CLUSTER_MDM_NAMESPACE
The container needs to be provided with the Geneva key and certificate. For the INT instance that is the rp-metrics-int.pem you find in the secrets folder after running the `make secrets` command above.
variables before you start the monitor.
An example docker command to start the container locally is here (you may need to adapt some parameters):
[Example](../hack/local-monitor-testing/sample/dockerStartCommand.sh). The script will configure the mdm container to connect to Geneva INT
Two things to be aware of :
* The container needs to be provided with the Geneva key and certificate. For the INT instance that is the rp-metrics-int.pem you find in the secrets folder after running `make secrets`. The sample scripts tries to copy it to /etc/mdm.pem (to mimic production).
* When you start the montitor locally in local dev mode, the monitor looks for the Unix Socket file mdm_statsd.socket in the current directory. Adapt the path in the start command accordingly, if it's not `./cmd/aro folder`'
### Remote Container Setup
## Remote Container Setup
If you can't run the container locally (because you run on macOS and your container tooling does not support Unix Sockets, which is true both for Docker for Desktop or podman) and or don't want to, you can bring up the container on a Linux VM and connect via a socat/ssh chain:
![alt text](img/SOCATConnection.png "SOCAT chain")
@ -48,7 +54,7 @@ Before you start make sure:
- you can ssh into the cloud-user on your VM without ssh prompting you for anything
- run `source ./env`
- you `az login` into your subscription
- you ran `SECRET_SA_ACCOUNT_NAME=rharosecretsdev make secrets` before
- you ran `SECRET_SA_ACCOUNT_NAME=rharosecrets make secrets` before
- know which "account" and "namespace" value you want to use on Geneva INT for your metric data and
update your env to set the
- CLUSTER_MDM_ACCOUNT
@ -78,13 +84,12 @@ socat -v UNIX-LISTEN:$SOCKETFILE,fork TCP-CONNECT:127.0.0.1:12345
For debugging it might be useful to run these commands manually in three different terminals to see where the connection might break down. The docker log file should show if data flows through or not, too.
#### Stopping the Network script
### Stopping the Network script
Stop the script with Ctrl-C. The script then will do its best to stop the ssh and socal processes it spawned.
## Starting the monitor
### Starting the monitor
When starting the monitor , make sure to have your
@ -110,22 +115,23 @@ A VS Code launch config that does the same would look like.
"monitor",
],
"env": {"CLUSTER_MDM_ACCOUNT": "<PUT YOUR ACCOUNT HERE>",
"CLUSTER_MDM_NAMESPACE":"<PUT YOUR NAMESPACE HERE>"
}
"CLUSTER_MDM_NAMESPACE":"<PUT YOUR NAMESPACE HERE>" }
},
````
## Finding your data
### Finding your data
If all goes well, you should see your metric data in the Jarvis metrics list (Geneva INT (https://jarvis-west-int.cloudapp.net/) -> Manage -> Metrics) under the account and namespace you specified in CLUSTER_MDM_ACCOUNT and CLUSTER_MDM_NAMESPACE and also be available is the dashboard settings.
## Injecting Test Data into Geneva INT
### Injecting Test Data into Geneva INT
Once your monitor code is done you will want to create pre-aggregates, dashboards and alert on the Geneva side and test with a variety of data.
Your end-2-end testing with real cluster will generate some data and cover many test scenarios, but if that's not feasible or too time-consuming you can inject data directly into the Genava mdm container via the socat/ssh network chain.
An example metric script is shown below.
An example metric script is shown below, you can connect it to
````
myscript.sh | socat TCP-CONNECT:127.0.0.1:12345 -
@ -137,7 +143,8 @@ myscript.sh | socat UNIX-CONNECT:$SOCKETFILE -
(see above for the $SOCKETFILE)
#### Sample metric script
````
#!/bin/bash
@ -159,7 +166,6 @@ DIM_RESOURCENAME=$CLUSTER
data="10 11 12 13 13 13 13 15 16 19 20 21 25"
SLEEPTIME=60
for MET in $data ;do
DATESTRING=$( date -u +'%Y-%m-%dT%H:%M:%S.%3N' )
OUT=$( cat << EOF
{"Metric":"$METRIC",
"Account":"$ACCOUNT",


@ -5,12 +5,10 @@ upstream OCP.
## Installer carry patches
See https://github.com/openshift/installer/compare/release-4.9...jewzaam:release-4.9-azure.
## Installation differences
* ARO does not use Terraform to create clusters, and instead uses ARM templates directly
* ARO persists the install graph in the cluster storage account in a new "aro"
container / "graph" blob.
@ -40,61 +38,19 @@ See https://github.com/openshift/installer/compare/release-4.10...jewzaam:releas
# Introducing new OCP release into ARO RP
To support a new version of OpenShift on ARO, you will need to reconcile [upstream changes](https://github.com/openshift/installer) with our [forked installer](https://github.com/jewzaam/installer-aro). This will not be a merge, but a cherry-pick of patches we've implemented.
## Update installer fork
To bring new OCP release branch into ARO installer fork:
1. Assess and document differences in X.Y and X.Y-1 in upstream
```sh
# clone our forked installer
git clone https://github.com/jewzaam/installer-aro.git
cd installer-aro
# add the upstream as a remote source
git remote add upstream https://github.com/openshift/installer.git
git fetch upstream -a
# diff the upstream X.Y with X.Y-1 and search for architecture changes
git diff upstream/release-X.Y-1 upstream/release-X.Y
# pay particular attention to Terraform files, which may need to be moved into ARO's ARM templates
git diff upstream/release-X.Y-1 upstream/release-X.Y */azure/*.tf
```
2. Create a new X.Y release branch in our forked installer
```sh
# create a new release branch in the fork based on the upstream
git checkout upstream/release-X.Y
git checkout -b release-X.Y-azure
```
3. If there is a golang version bump in this release, modify `./hack/build.sh` and `./hack/go-test.sh` with the new version, then verify these scripts still work and commit them
4. Determine the patches you need to cherry-pick, based on the last (Y-1) release
```sh
# find commit shas to cherry-pick from last time
git checkout release-X.Y-1-azure
git log
```
5. For every commit you need to cherry-pick (in-order), do:
```sh
# WARNING: when you reach the commit for `commit data/assets_vfsdata.go`, look ahead
git cherry-pick abc123 # may require manually fixing a merge
./hack/build.sh # fix any failures
./hack/go-test.sh # fix any failures
# if you had to manually merge, you can now `git cherry-pick --continue`
```
- When cherry-picking the specific patch `commit data/assets_vfsdata.go`, instead run:
```sh
git cherry-pick abc123 # may require manually fixing a merge
./hack/build.sh # fix any failures
./hack/go-test.sh # fix any failures
# if you had to manually merge, you can now `git cherry-pick --continue`
pushd ./hack/assets && go run ./assets.go && popd
./hack/build.sh # fix any failures
./hack/go-test.sh # fix any failures
git add data/assets_vfsdata.go
git commit --amend
```
1. Check the git diff between the target release branch release-X.Y and the previous one, release-X.Y-1,
to see if any resources and/or architecture changed.
These changes might require more modifications on the ARO-RP side later on.
1. Create a new release-X.Y-azure branch in the ARO installer fork from upstream release-X.Y branch.
1. Cherry-pick all commits from the previous release-X.Y-1-azure branch into the new one & fix conflicts.
* While cherry-picking the `commit data/assets_vfsdata.go` commit, run `cd ./hack/assets && go run ./assets.go`
to generate the assets, then add them to this commit.
1. Run `./hack/build.sh` and `./hack/go-test.sh` as part of every commit (`git rebase` with `-x` can help with this; see the sketch below).
* Fix build and test failures.
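A minimal sketch of the rebase-with-exec approach (branch names are placeholders):
```sh
# replay the azure patches onto the upstream branch, running build and tests after every commit
git rebase --exec './hack/build.sh && ./hack/go-test.sh' upstream/release-X.Y release-X.Y-azure
```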
**Note:** If any changes are required during the process, make sure to amend the relevant patch or create a new one.
Each commit should be atomic/complete - you should be able to cherry-pick it into the upstream installer and bring
@ -171,5 +127,3 @@ Once installer fork is ready:
1. After this point, you should be able to create a dev cluster using the RP and it should use the new release.
1. `make discoverycache`.
* This command requires a running cluster with the new version.
1. The list of the hard-coded namespaces in `pkg/util/namespace/namespace.go` needs to be updated regularly as every
minor version of upstream OCP introduces a new namespace or two.

go.mod

@ -1,357 +1,130 @@
module github.com/Azure/ARO-RP
go 1.17
go 1.16
require (
cloud.google.com/go/compute v1.1.0 // indirect
github.com/AlecAivazis/survey/v2 v2.3.2 // indirect
github.com/AlekSi/gocov-xml v0.0.0-20190121064608-3a14fb1c4737
github.com/Azure/azure-sdk-for-go v63.1.0+incompatible
github.com/Azure/go-autorest/autorest v0.11.25
github.com/Azure/azure-sdk-for-go v61.3.0+incompatible
github.com/Azure/go-autorest/autorest v0.11.24
github.com/Azure/go-autorest/autorest/adal v0.9.18
github.com/Azure/go-autorest/autorest/azure/auth v0.5.11
github.com/Azure/go-autorest/autorest/date v0.3.0
github.com/Azure/go-autorest/autorest/to v0.4.0
github.com/Azure/go-autorest/autorest/validation v0.3.1
github.com/Azure/go-autorest/tracing v0.6.0
github.com/IBM-Cloud/bluemix-go v0.0.0-20220119131246-2af2dee48688 // indirect
github.com/IBM/go-sdk-core/v5 v5.9.1 // indirect
github.com/IBM/networking-go-sdk v0.24.0 // indirect
github.com/IBM/platform-services-go-sdk v0.22.7 // indirect
github.com/alvaroloes/enumer v1.1.2
github.com/apparentlymart/go-cidr v1.1.0
github.com/aws/aws-sdk-go v1.42.40 // indirect
github.com/axw/gocov v1.0.0
github.com/clarketm/json v1.17.1 // indirect
github.com/codahale/etm v0.0.0-20141003032925-c00c9e6fb4c9
github.com/containers/image/v5 v5.21.0
github.com/containers/image/v5 v5.18.0
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a // indirect
github.com/containers/storage v1.38.1 // indirect
github.com/coreos/go-oidc v2.2.1+incompatible
github.com/coreos/go-semver v0.3.0
github.com/coreos/go-systemd/v22 v22.3.2
github.com/coreos/ignition/v2 v2.14.0
github.com/coreos/stream-metadata-go v0.2.0
github.com/coreos/ignition/v2 v2.13.0
github.com/coreos/stream-metadata-go v0.1.6
github.com/davecgh/go-spew v1.1.1
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/form3tech-oss/jwt-go v3.2.5+incompatible
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32
github.com/go-bindata/go-bindata v3.1.2+incompatible
github.com/go-logr/logr v1.2.3
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-logr/logr v1.2.2
github.com/go-openapi/errors v0.20.2 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-playground/validator/v10 v10.10.0 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-test/deep v1.0.8
github.com/gofrs/uuid v4.2.0+incompatible
github.com/golang/mock v1.6.0
github.com/golangci/golangci-lint v1.42.1
github.com/golangci/golangci-lint v1.32.2
github.com/google/go-cmp v0.5.7
github.com/googleapis/gnostic v0.6.8
github.com/googleapis/gnostic v0.6.6
github.com/gophercloud/gophercloud v0.24.0 // indirect
github.com/gophercloud/utils v0.0.0-20210909165623-d7085207ff6d // indirect
github.com/gorilla/csrf v1.7.1
github.com/gorilla/mux v1.8.0
github.com/gorilla/securecookie v1.1.1
github.com/gorilla/sessions v1.2.1
github.com/h2non/filetype v1.1.3 // indirect
github.com/jewzaam/go-cosmosdb v0.0.0-20220315232836-282b67c5b234
github.com/jstemmer/go-junit-report v0.9.1
github.com/onsi/ginkgo/v2 v2.1.3
github.com/onsi/gomega v1.19.0
github.com/openshift/api v3.9.1-0.20191111211345-a27ff30ebf09+incompatible
github.com/openshift/client-go v0.0.0-20220525160904-9e1acff93e4a
github.com/openshift/console-operator v0.0.0-20220407014945-45d37e70e0c2
github.com/openshift/hive v1.1.16
github.com/openshift/hive/apis v0.0.0
github.com/openshift/installer v0.16.1
github.com/openshift/library-go v0.0.0-20220525173854-9b950a41acdc
github.com/openshift/machine-config-operator v3.11.0+incompatible
github.com/pires/go-proxyproto v0.6.2
github.com/pkg/errors v0.9.1
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.50.0
github.com/prometheus/client_golang v1.12.1
github.com/prometheus/common v0.33.0
github.com/sirupsen/logrus v1.8.1
github.com/stretchr/testify v1.7.1
github.com/ugorji/go/codec v1.2.7
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b
golang.org/x/oauth2 v0.0.0-20220411215720-9780585627b5
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/text v0.3.7
golang.org/x/tools v0.1.10
gotest.tools/gotestsum v1.6.4
k8s.io/api v0.24.1
k8s.io/apiextensions-apiserver v0.24.1
k8s.io/apimachinery v0.24.1
k8s.io/cli-runtime v0.24.1
k8s.io/client-go v12.0.0+incompatible
k8s.io/code-generator v0.24.1
k8s.io/kubectl v0.24.1
k8s.io/kubernetes v1.23.5
sigs.k8s.io/cluster-api-provider-azure v1.2.1
sigs.k8s.io/controller-runtime v0.12.1
sigs.k8s.io/controller-tools v0.9.0
)
require (
4d63.com/gochecknoglobals v0.0.0-20201008074935-acfc0b28355a // indirect
cloud.google.com/go/compute v1.5.0 // indirect
github.com/AlecAivazis/survey/v2 v2.3.4 // indirect
github.com/Antonboom/errname v0.1.4 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest/azure/cli v0.4.5 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/BurntSushi/toml v1.1.0 // indirect
github.com/Djarvur/go-err113 v0.1.0 // indirect
github.com/IBM-Cloud/bluemix-go v0.0.0-20220407050707-b4cd0d4da813 // indirect
github.com/IBM/go-sdk-core/v5 v5.9.5 // indirect
github.com/IBM/networking-go-sdk v0.28.0 // indirect
github.com/IBM/platform-services-go-sdk v0.24.0 // indirect
github.com/IBM/vpc-go-sdk v1.0.1 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/OpenPeeDeeP/depguard v1.0.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/alexkohler/prealloc v1.0.0 // indirect
github.com/aliyun/alibaba-cloud-sdk-go v1.61.1550 // indirect
github.com/aliyun/aliyun-oss-go-sdk v2.2.2+incompatible // indirect
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d // indirect
github.com/ashanbrown/forbidigo v1.2.0 // indirect
github.com/ashanbrown/makezero v0.0.0-20210520155254-b6261585ddde // indirect
github.com/aws/aws-sdk-go v1.43.34 // indirect
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bkielbasa/cyclop v1.2.0 // indirect
github.com/bombsimon/wsl/v3 v3.3.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 // indirect
github.com/charithe/durationcheck v0.0.8 // indirect
github.com/chavacava/garif v0.0.0-20210405164556-e8a0a408d6af // indirect
github.com/clarketm/json v1.17.1 // indirect
github.com/containers/image v3.0.2+incompatible // indirect
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a // indirect
github.com/containers/ocicrypt v1.1.3 // indirect
github.com/containers/storage v1.39.0 // indirect
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
github.com/coreos/ignition v0.35.0 // indirect
github.com/coreos/vcontext v0.0.0-20220326205524-7fcaf69e7050 // indirect
github.com/daixiang0/gci v0.2.9 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/denis-tingajkin/go-header v0.4.2 // indirect
github.com/dimchansky/utfbom v1.1.1 // indirect
github.com/dnephin/pflag v1.0.7 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v20.10.14+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/esimonov/ifshort v1.0.2 // indirect
github.com/ettle/strcase v0.1.1 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.12.0 // indirect
github.com/fatih/structtag v1.2.0 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/fzipp/gocyclo v0.3.1 // indirect
github.com/go-critic/go-critic v0.5.6 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-openapi/errors v0.20.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/strfmt v0.21.2 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/go-playground/locales v0.14.0 // indirect
github.com/go-playground/universal-translator v0.18.0 // indirect
github.com/go-playground/validator/v10 v10.10.1 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
github.com/go-toolsmith/astcast v1.0.0 // indirect
github.com/go-toolsmith/astcopy v1.0.0 // indirect
github.com/go-toolsmith/astequal v1.0.0 // indirect
github.com/go-toolsmith/astfmt v1.0.0 // indirect
github.com/go-toolsmith/astp v1.0.0 // indirect
github.com/go-toolsmith/strparse v1.0.0 // indirect
github.com/go-toolsmith/typep v1.0.2 // indirect
github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b // indirect
github.com/gobuffalo/flect v0.2.5 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 // indirect
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a // indirect
github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613 // indirect
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a // indirect
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 // indirect
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca // indirect
github.com/golangci/misspell v0.3.5 // indirect
github.com/golangci/revgrep v0.0.0-20210208091834-cd28932614b5 // indirect
github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 // indirect
github.com/google/renameio v1.0.1 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gax-go/v2 v2.2.0 // indirect
github.com/gophercloud/gophercloud v0.24.0 // indirect
github.com/gophercloud/utils v0.0.0-20220307143606-8e7800759d16 // indirect
github.com/gordonklaus/ineffassign v0.0.0-20210225214923-2e10b2664254 // indirect
github.com/gostaticanalysis/analysisutil v0.4.1 // indirect
github.com/gostaticanalysis/comment v1.4.1 // indirect
github.com/gostaticanalysis/forcetypeassert v0.0.0-20200621232751-01d4955beaa5 // indirect
github.com/gostaticanalysis/nilerr v0.1.1 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/h2non/filetype v1.1.3 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jgautheron/goconst v1.5.1 // indirect
github.com/jingyugao/rowserrcheck v1.1.0 // indirect
github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/jonboulle/clockwork v0.2.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/julz/importas v0.0.0-20210419104244-841f0c0fe66d // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/kisielk/errcheck v1.6.0 // indirect
github.com/kisielk/gotool v1.0.0 // indirect
github.com/klauspost/compress v1.15.1 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/kulti/thelper v0.4.0 // indirect
github.com/kunwardeep/paralleltest v1.0.2 // indirect
github.com/kyoh86/exportloopref v0.1.8 // indirect
github.com/ldez/gomoddirectives v0.2.2 // indirect
github.com/ldez/tagliatelle v0.2.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/klauspost/compress v1.14.2 // indirect
github.com/libvirt/libvirt-go v7.4.0+incompatible // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/maratori/testpackage v1.0.1 // indirect
github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mbilski/exhaustivestruct v1.2.0 // indirect
github.com/metal3-io/baremetal-operator v0.0.0-20220405082045-575f5c90718a // indirect
github.com/metal3-io/baremetal-operator/apis v0.0.0 // indirect
github.com/metal3-io/baremetal-operator/pkg/hardwareutils v0.0.0 // indirect
github.com/mgechev/dots v0.0.0-20190921121421-c36f7dcfbb81 // indirect
github.com/mgechev/revive v1.1.1 // indirect
github.com/metal3-io/baremetal-operator v0.0.0-20220125095243-13add0bfb3be // indirect
github.com/metal3-io/cluster-api-provider-baremetal v0.2.2 // indirect
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/sys/mountinfo v0.6.0 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/moricho/tparallel v0.2.1 // indirect
github.com/nakabonne/nestif v0.3.0 // indirect
github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 // indirect
github.com/nishanths/exhaustive v0.2.3 // indirect
github.com/nishanths/predeclared v0.2.1 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84 // indirect
github.com/opencontainers/runc v1.1.1 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/openshift/cloud-credential-operator v0.0.0-20220316185125-ed0612946f4b // indirect
github.com/openshift/cluster-api v0.0.0-20191129101638-b09907ac6668 // indirect
github.com/openshift/cluster-api-provider-baremetal v0.0.0-20220218121658-fc0acaaec338 // indirect
github.com/openshift/cluster-api-provider-ibmcloud v0.0.1-0.20220201105455-8014e5e894b0 // indirect
github.com/openshift/cluster-api-provider-libvirt v0.2.1-0.20191219173431-2336783d4603 // indirect
github.com/openshift/cluster-api-provider-ovirt v0.1.1-0.20220323121149-e3f2850dd519 // indirect
github.com/ovirt/go-ovirt v0.0.0-20210308100159-ac0bcbc88d7c // indirect
github.com/pascaldekloe/name v0.0.0-20180628100202-0fd16699aae1 // indirect
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.18.0
github.com/openshift/api v0.0.0-20210831091943-07e756545ac1
github.com/openshift/client-go v0.0.0-20210831095141-e19a065e79f7
github.com/openshift/cloud-credential-operator v0.0.0-20220121204927-85a406b6d4b1 // indirect
github.com/openshift/console-operator v0.0.0-20220120123728-4789dbf7c1d3
github.com/openshift/installer v0.16.1
github.com/openshift/library-go v0.0.0-20220125143545-df4228ff1215
github.com/openshift/machine-api-operator v0.2.1-0.20210820103535-d50698c302f5
github.com/openshift/machine-config-operator v0.0.1-0.20201009041932-4fe8559913b8
github.com/pborman/uuid v1.2.1 // indirect
github.com/pelletier/go-toml v1.9.3 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/polyfloyd/go-errorlint v0.0.0-20210722154253-910bb7978349 // indirect
github.com/pires/go-proxyproto v0.6.1
github.com/pkg/errors v0.9.1
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/proglottis/gpgme v0.1.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/quasilyte/go-ruleguard v0.3.4 // indirect
github.com/quasilyte/regex/syntax v0.0.0-20200805063351-8f842688393c // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/russross/blackfriday v1.6.0 // indirect
github.com/ryancurrah/gomodguard v1.2.3 // indirect
github.com/ryanrolds/sqlclosecheck v0.3.0 // indirect
github.com/sanposhiho/wastedassign/v2 v2.0.6 // indirect
github.com/securego/gosec/v2 v2.8.1 // indirect
github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c // indirect
github.com/sonatard/noctx v0.0.1 // indirect
github.com/sourcegraph/go-diff v0.6.1 // indirect
github.com/spf13/afero v1.6.0 // indirect
github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/cobra v1.4.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.6-0.20210604193023-d5e0c0615ace // indirect
github.com/spf13/viper v1.10.0 // indirect
github.com/ssgreg/nlreturn/v2 v2.1.0 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/stretchr/objx v0.3.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 // indirect
github.com/tdakkota/asciicheck v0.0.0-20200416200610-e657995f937b // indirect
github.com/tetafro/godot v1.4.9 // indirect
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94 // indirect
github.com/tomarrell/wrapcheck/v2 v2.3.0 // indirect
github.com/tommy-muehle/go-mnd/v2 v2.4.0 // indirect
github.com/ulikunitz/xz v0.5.10 // indirect
github.com/ultraware/funlen v0.0.3 // indirect
github.com/ultraware/whitespace v0.0.4 // indirect
github.com/uudashr/gocognit v1.0.5 // indirect
github.com/vbatts/tar-split v0.11.2 // indirect
github.com/vbauerster/mpb/v7 v7.4.1 // indirect
github.com/vincent-petithory/dataurl v1.0.0 // indirect
github.com/vmware/govmomi v0.27.4 // indirect
github.com/xlab/treeprint v1.1.0 // indirect
github.com/yeya24/promlinter v0.1.0 // indirect
go.etcd.io/bbolt v1.3.6 // indirect
go.mongodb.org/mongo-driver v1.9.0 // indirect
github.com/prometheus/client_golang v1.12.0
github.com/prometheus/common v0.32.1
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.3.0 // indirect
github.com/stretchr/testify v1.7.0
github.com/ugorji/go/codec v1.2.6
github.com/vbauerster/mpb/v7 v7.3.2 // indirect
github.com/vmware/govmomi v0.27.2 // indirect
go.mongodb.org/mongo-driver v1.8.2 // indirect
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 // indirect
go.opencensus.io v0.23.0 // indirect
go.starlark.net v0.0.0-20220328144851-d1966c6b9fcd // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/sys v0.0.0-20220406163625-3f8b81556e12 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/time v0.0.0-20220224211638-0e9765cccd65 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/api v0.74.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220405205423-9d709892a2bf // indirect
google.golang.org/grpc v1.45.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/go-playground/validator.v9 v9.31.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.66.4 // indirect
go.starlark.net v0.0.0-20211203141949-70c0e40ae128 // indirect
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce
golang.org/x/net v0.0.0-20220121210141-e204ce36a2ba
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 // indirect
golang.org/x/tools v0.1.8
google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5 // indirect
google.golang.org/grpc v1.43.0 // indirect
gopkg.in/ini.v1 v1.66.3 // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
honnef.co/go/tools v0.2.1 // indirect
k8s.io/apiserver v0.24.1 // indirect
k8s.io/component-base v0.24.1 // indirect
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185 // indirect
k8s.io/klog v1.0.0 // indirect
k8s.io/klog/v2 v2.60.1 // indirect
k8s.io/kube-openapi v0.0.0-20220401212409-b28bf2818661 // indirect
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
mvdan.cc/gofumpt v0.1.1 // indirect
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed // indirect
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b // indirect
mvdan.cc/unparam v0.0.0-20210104141923-aac4ce9116a7 // indirect
sigs.k8s.io/cluster-api-provider-aws v1.4.0 // indirect
sigs.k8s.io/cluster-api-provider-openstack v0.5.3 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/kustomize/api v0.11.4 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.6 // indirect
gotest.tools/gotestsum v1.6.4
k8s.io/api v0.23.2
k8s.io/apiextensions-apiserver v0.23.2
k8s.io/apimachinery v0.23.2
k8s.io/apiserver v0.23.2 // indirect
k8s.io/cli-runtime v0.23.2 // indirect
k8s.io/client-go v12.0.0+incompatible
k8s.io/code-generator v0.22.1
k8s.io/component-base v0.23.2 // indirect
k8s.io/klog/v2 v2.40.1 // indirect
k8s.io/kube-openapi v0.0.0-20220124234850-424119656bbf // indirect
k8s.io/kubectl v0.23.2
k8s.io/kubernetes v1.23.2
k8s.io/utils v0.0.0-20211208161948-7d6a63dca704 // indirect
sigs.k8s.io/cluster-api-provider-aws v1.2.0 // indirect
sigs.k8s.io/cluster-api-provider-azure v1.1.0
sigs.k8s.io/cluster-api-provider-openstack v0.5.0 // indirect
sigs.k8s.io/controller-runtime v0.11.0
sigs.k8s.io/controller-tools v0.6.3-0.20210916130746-94401651a6c3
sigs.k8s.io/kustomize/api v0.10.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.1 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)
@ -474,37 +247,36 @@ replace (
// https://www.whitesourcesoftware.com/vulnerability-database/WS-2018-0594
github.com/satori/go.uuid => github.com/satori/go.uuid v1.2.1-0.20181028125025-b2ce2384e17b
github.com/satori/uuid => github.com/satori/uuid v1.2.1-0.20181028125025-b2ce2384e17b
github.com/spf13/pflag => github.com/spf13/pflag v1.0.6-0.20210604193023-d5e0c0615ace
github.com/spf13/viper => github.com/spf13/viper v1.7.1
github.com/terraform-providers/terraform-provider-aws => github.com/openshift/terraform-provider-aws v1.60.1-0.20200630224953-76d1fb4e5699
github.com/terraform-providers/terraform-provider-azurerm => github.com/openshift/terraform-provider-azurerm v1.40.1-0.20200707062554-97ea089cc12a
github.com/terraform-providers/terraform-provider-ignition/v2 => github.com/community-terraform-providers/terraform-provider-ignition/v2 v2.1.0
k8s.io/api => k8s.io/api v0.23.0
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.23.0
k8s.io/apimachinery => k8s.io/apimachinery v0.23.0
k8s.io/apiserver => k8s.io/apiserver v0.23.0
k8s.io/cli-runtime => k8s.io/cli-runtime v0.23.0
k8s.io/client-go => k8s.io/client-go v0.23.0
k8s.io/cloud-provider => k8s.io/cloud-provider v0.23.0
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.23.0
k8s.io/code-generator => k8s.io/code-generator v0.23.0
k8s.io/component-base => k8s.io/component-base v0.23.0
k8s.io/component-helpers => k8s.io/component-helpers v0.23.0
k8s.io/controller-manager => k8s.io/controller-manager v0.23.0
k8s.io/cri-api => k8s.io/cri-api v0.23.0
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.23.0
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.23.0
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.23.0
k8s.io/kube-proxy => k8s.io/kube-proxy v0.23.0
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.23.0
k8s.io/kubectl => k8s.io/kubectl v0.23.0
k8s.io/kubelet => k8s.io/kubelet v0.23.0
k8s.io/kubernetes => k8s.io/kubernetes v1.23.0
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.23.0
k8s.io/metrics => k8s.io/metrics v0.23.0
k8s.io/mount-utils => k8s.io/mount-utils v0.23.0
k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.23.0
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.23.0
k8s.io/api => k8s.io/api v0.22.0
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.22.0
k8s.io/apimachinery => k8s.io/apimachinery v0.22.0
k8s.io/apiserver => k8s.io/apiserver v0.22.0
k8s.io/cli-runtime => k8s.io/cli-runtime v0.22.0
k8s.io/client-go => k8s.io/client-go v0.22.0
k8s.io/cloud-provider => k8s.io/cloud-provider v0.22.0
k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.22.0
k8s.io/code-generator => k8s.io/code-generator v0.22.0
k8s.io/component-base => k8s.io/component-base v0.22.0
k8s.io/component-helpers => k8s.io/component-helpers v0.22.0
k8s.io/controller-manager => k8s.io/controller-manager v0.22.0
k8s.io/cri-api => k8s.io/cri-api v0.22.0
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.22.0
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.22.0
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.22.0
k8s.io/kube-proxy => k8s.io/kube-proxy v0.22.0
k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.22.0
k8s.io/kubectl => k8s.io/kubectl v0.22.0
k8s.io/kubelet => k8s.io/kubelet v0.22.0
k8s.io/kubernetes => k8s.io/kubernetes v1.22.0
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.22.0
k8s.io/metrics => k8s.io/metrics v0.22.0
k8s.io/mount-utils => k8s.io/mount-utils v0.22.0
k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.22.0
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.22.0
sigs.k8s.io/controller-runtime => sigs.k8s.io/controller-runtime v0.9.1
sigs.k8s.io/controller-tools => sigs.k8s.io/controller-tools v0.5.0
)
@ -523,7 +295,7 @@ replace (
github.com/coreos/bbolt => go.etcd.io/bbolt v1.3.6
github.com/coreos/fcct => github.com/coreos/butane v0.13.1
github.com/coreos/prometheus-operator => github.com/prometheus-operator/prometheus-operator v0.48.1
github.com/coreos/stream-metadata-go => github.com/coreos/stream-metadata-go v0.1.3
github.com/coreos/stream-metadata-go => github.com/coreos/stream-metadata-go v0.0.0-20210225230131-70edb9eb47b3
github.com/cortexproject/cortex => github.com/cortexproject/cortex v1.10.0
github.com/deislabs/oras => github.com/oras-project/oras v0.12.0
github.com/etcd-io/bbolt => go.etcd.io/bbolt v1.3.6
@ -537,24 +309,23 @@ replace (
github.com/influxdata/flux => github.com/influxdata/flux v0.132.0
github.com/knq/sysutil => github.com/chromedp/sysutil v1.0.0
github.com/kshvakov/clickhouse => github.com/ClickHouse/clickhouse-go v1.4.9
github.com/metal3-io/baremetal-operator => github.com/openshift/baremetal-operator v0.0.0-20211201170610-92ffa60c683d // Use OpenShift fork
github.com/metal3-io/baremetal-operator/apis => github.com/openshift/baremetal-operator/apis v0.0.0-20211201170610-92ffa60c683d // Use OpenShift fork
github.com/metal3-io/baremetal-operator/pkg/hardwareutils => github.com/openshift/baremetal-operator/pkg/hardwareutils v0.0.0-20211201170610-92ffa60c683d // Use OpenShift fork
github.com/metal3-io/baremetal-operator => github.com/openshift/baremetal-operator v0.0.0-20210706141527-5240e42f012a // Use OpenShift fork
github.com/metal3-io/baremetal-operator/apis => github.com/openshift/baremetal-operator/apis v0.0.0-20210706141527-5240e42f012a // Use OpenShift fork
github.com/metal3-io/cluster-api-provider-baremetal => github.com/openshift/cluster-api-provider-baremetal v0.0.0-20190821174549-a2a477909c1d // Pin OpenShift fork
github.com/mholt/certmagic => github.com/caddyserver/certmagic v0.15.0
github.com/openshift/api => github.com/openshift/api v0.0.0-20220124143425-d74727069f6f
github.com/openshift/client-go => github.com/openshift/client-go v0.0.0-20211209144617-7385dd6338e3
github.com/openshift/api => github.com/openshift/api v0.0.0-20211028023115-7224b732cc14
github.com/openshift/client-go => github.com/openshift/client-go v0.0.0-20210831095141-e19a065e79f7
github.com/openshift/cloud-credential-operator => github.com/openshift/cloud-credential-operator v0.0.0-20200316201045-d10080b52c9e
github.com/openshift/cluster-api-provider-gcp => github.com/openshift/cluster-api-provider-gcp v0.0.1-0.20211123160814-0d569513f9fa
github.com/openshift/cluster-api-provider-ibmcloud => github.com/openshift/cluster-api-provider-ibmcloud v0.0.0-20211008100740-4d7907adbd6b
github.com/openshift/cluster-api-provider-gcp => github.com/openshift/cluster-api-provider-gcp v0.0.1-0.20211001174514-d92b08844a2b
github.com/openshift/cluster-api-provider-ibmcloud => github.com/openshift/cluster-api-provider-ibmcloud v0.0.1-0.20210806145144-04491027caa8
github.com/openshift/cluster-api-provider-kubevirt => github.com/openshift/cluster-api-provider-kubevirt v0.0.0-20210719100556-9b8bc3666720
github.com/openshift/cluster-api-provider-libvirt => github.com/openshift/cluster-api-provider-libvirt v0.2.1-0.20191219173431-2336783d4603
github.com/openshift/cluster-api-provider-ovirt => github.com/openshift/cluster-api-provider-ovirt v0.1.1-0.20211215231458-35ce9aafee1f
github.com/openshift/console-operator => github.com/openshift/console-operator v0.0.0-20220318130441-e44516b9c315
github.com/openshift/installer => github.com/jewzaam/installer-aro v0.9.0-master.0.20220524230743-7e2aa7a0cc1a
github.com/openshift/library-go => github.com/openshift/library-go v0.0.0-20220303081124-fb4e7a2872f0
github.com/openshift/machine-api-operator => github.com/openshift/machine-api-operator v0.2.1-0.20220124104622-668c5b52b104
github.com/openshift/machine-config-operator => github.com/openshift/machine-config-operator v0.0.1-0.20220319215057-e6ba00b88555
github.com/openshift/cluster-api-provider-libvirt => github.com/openshift/cluster-api-provider-libvirt v0.2.1-0.20210623230745-59ae2edf8875
github.com/openshift/cluster-api-provider-ovirt => github.com/openshift/cluster-api-provider-ovirt v0.1.1-0.20220120123528-15a6add2ff5b
github.com/openshift/console-operator => github.com/openshift/console-operator v0.0.0-20220124105820-fdcb82f487fb
github.com/openshift/installer => github.com/jewzaam/installer-aro v0.9.0-master.0.20220208140934-766bcf74e25c
github.com/openshift/library-go => github.com/openshift/library-go v0.0.0-20220125122342-ff51c8a74c7b
github.com/openshift/machine-api-operator => github.com/openshift/machine-api-operator v0.2.1-0.20211203013047-383c9b959b69
github.com/openshift/machine-config-operator => github.com/openshift/machine-config-operator v0.0.1-0.20211215135312-23d93af42378
github.com/oras-project/oras-go => oras.land/oras-go v0.4.0
github.com/ovirt/go-ovirt => github.com/ovirt/go-ovirt v0.0.0-20210112072624-e4d3b104de71
github.com/prometheus/prometheus => github.com/prometheus/prometheus v1.8.2-0.20210421143221-52df5ef7a3be
@ -566,21 +337,13 @@ replace (
google.golang.org/cloud => cloud.google.com/go v0.97.0
google.golang.org/grpc => google.golang.org/grpc v1.40.0
k8s.io/klog/v2 => k8s.io/klog/v2 v2.8.0
k8s.io/kube-openapi => k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65
k8s.io/kube-state-metrics => k8s.io/kube-state-metrics v1.9.7
mvdan.cc/unparam => mvdan.cc/unparam v0.0.0-20211002133954-f839ab2b2b11
sigs.k8s.io/cluster-api-provider-aws => github.com/openshift/cluster-api-provider-aws v0.2.1-0.20210121023454-5ffc5f422a80
sigs.k8s.io/cluster-api-provider-azure => github.com/openshift/cluster-api-provider-azure v0.1.0-alpha.3.0.20210626224711-5d94c794092f
sigs.k8s.io/cluster-api-provider-openstack => github.com/openshift/cluster-api-provider-openstack v0.0.0-20211111204942-611d320170af
//sigs.k8s.io/controller-tools => sigs.k8s.io/controller-tools v0.3.1-0.20200617211605-651903477185
sigs.k8s.io/kustomize/api => sigs.k8s.io/kustomize/api v0.11.2
sigs.k8s.io/kustomize/kyaml => sigs.k8s.io/kustomize/kyaml v0.13.3
sigs.k8s.io/cluster-api-provider-aws => github.com/openshift/cluster-api-provider-aws v0.2.1-0.20211213011328-8226e86fa06e
sigs.k8s.io/cluster-api-provider-azure => github.com/openshift/cluster-api-provider-azure v0.1.0-alpha.3.0.20211202014309-184ccedc799e
sigs.k8s.io/cluster-api-provider-openstack => github.com/openshift/cluster-api-provider-openstack v0.0.0-20210820223719-a7442bb18bce
sigs.k8s.io/kustomize/kyaml => sigs.k8s.io/kustomize/kyaml v0.13.0
sigs.k8s.io/structured-merge-diff => sigs.k8s.io/structured-merge-diff v1.0.1-0.20191108220359-b1b620dd3f06
sourcegraph.com/sourcegraph/go-diff => github.com/sourcegraph/go-diff v0.5.1
vbom.ml/util => github.com/fvbommel/util v0.0.3
)
replace (
github.com/openshift/hive => github.com/openshift/hive v1.1.17-0.20220719141355-c63c9b0281d8
github.com/openshift/hive/apis => github.com/openshift/hive/apis v0.0.0-20220719141355-c63c9b0281d8
)

go.sum

Diff not shown because of its large size.


@ -67,7 +67,7 @@ docker run \
-CertFile /etc/mdm.pem \
-FrontEndUrl $MDMFRONTENDURL \
-Logger Console \
-LogLevel Debug \
-LogLevel Warning \
-PrivateKeyFile /etc/mdm.pem \
-SourceEnvironment $MDMSOURCEENVIRONMENT \
-SourceRole $MDMSOURCEROLE \
@ -86,6 +86,7 @@ ssh $CLOUDUSER@$PUBLICIP "sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/g'
ssh $CLOUDUSER@$PUBLICIP "sudo firewall-cmd --zone=public --add-port=12345/tcp --permanent"
ssh $CLOUDUSER@$PUBLICIP "sudo firewall-cmd --reload"
scp $BASE/dockerStartCommand.sh $CLOUDUSER@$PUBLICIP:
ssh $CLOUDUSER@$PUBLICIP "chmod +x dockerStartCommand.sh"
ssh $CLOUDUSER@$PUBLICIP "sudo ./dockerStartCommand.sh &"


@ -0,0 +1,52 @@
BASE=$( git rev-parse --show-toplevel)
SOCKETPATH="$BASE/cmd/aro"
HOSTNAME=$( hostname )
NAME="mdm"
MDMIMAGE=linuxgeneva-microsoft.azurecr.io/genevamdm:master_20211120.1
MDMFRONTENDURL=https://int2.int.microsoftmetrics.com/
MDMSOURCEENVIRONMENT=$LOCATION
MDMSOURCEROLE=rp
MDMSOURCEROLEINSTANCE=$HOSTNAME
echo "Using:"
echo "Resourcegroup = $RESOURCEGROUP"
echo "User = $USER"
echo "HOSTNAME = $HOSTNAME"
echo "Containername = $NAME"
echo "Location = $LOCATION"
echo "MDM image = $MDMIMAGE"
echo " (version hardcoded. Check against pkg/util/version/const.go if things don't work)"
echo "Geneva API URL= $MDMFRONTENDURL"
echo "MDMSOURCEENV = $MDMSOURCEENVIRONMENT"
echo "MDMSOURCEROLE = $MDMSOURCEROLE"
echo "MDMSOURCEROLEINSTANCE = $MDMSOURCEROLEINSTANCE"
cp $BASE/secrets/rp-metrics-int.pem /etc/mdm.pem
podman run \
--entrypoint /usr/sbin/MetricsExtension \
--hostname $HOSTNAME \
--name $NAME \
-d \
--restart=always \
-m 2g \
-v /etc/mdm.pem:/etc/mdm.pem \
-v $SOCKETPATH:/var/etw:z \
$MDMIMAGE \
-CertFile /etc/mdm.pem \
-FrontEndUrl $MDMFRONTENDURL \
-Logger Console \
-LogLevel Debug \
-PrivateKeyFile /etc/mdm.pem \
-SourceEnvironment $MDMSOURCEENVIRONMENT \
-SourceRole $MDMSOURCEROLE \
-SourceRoleInstance $MDMSOURCEROLEINSTANCE
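# After the script has run, a quick sanity check (assuming the container name
# "mdm" set above) is to tail the container logs and watch for incoming data:
#
#   podman logs -f mdm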


@ -55,16 +55,16 @@ for x in vendor/github.com/openshift/*; do
;;
*)
go mod edit -replace ${x##vendor/}=$(go list -mod=mod -m ${x##vendor/}@release-4.10 | sed -e 's/ /@/')
go mod edit -replace ${x##vendor/}=$(go list -mod=mod -m ${x##vendor/}@release-4.9 | sed -e 's/ /@/')
;;
esac
done
for x in aws azure openstack; do
go mod edit -replace sigs.k8s.io/cluster-api-provider-$x=$(go list -mod=mod -m github.com/openshift/cluster-api-provider-$x@release-4.10 | sed -e 's/ /@/')
go mod edit -replace sigs.k8s.io/cluster-api-provider-$x=$(go list -mod=mod -m github.com/openshift/cluster-api-provider-$x@release-4.9 | sed -e 's/ /@/')
done
go mod edit -replace github.com/openshift/installer=$(go list -mod=mod -m github.com/jewzaam/installer-aro@release-4.10-azure | sed -e 's/ /@/')
go mod edit -replace github.com/openshift/installer=$(go list -mod=mod -m github.com/jewzaam/installer-aro@release-4.9-azure | sed -e 's/ /@/')
go get -u ./...


@ -68,8 +68,6 @@ func DefaultOperatorFlags() OperatorFlags {
"aro.imageconfig.enabled": flagTrue,
"aro.machine.enabled": flagTrue,
"aro.machineset.enabled": flagTrue,
"aro.machinehealthcheck.enabled": flagTrue,
"aro.machinehealthcheck.managed": flagTrue,
"aro.monitoring.enabled": flagTrue,
"aro.nodedrainer.enabled": flagTrue,
"aro.pullsecret.enabled": flagTrue,


@ -12,72 +12,113 @@ import (
)
func addRequiredResources(requiredResources map[string]int, vmSize api.VMSize, count int) error {
vmTypesMap := map[api.VMSize]struct {
CoreCount int
Family string
}{
api.VMSizeStandardD2sV3: {CoreCount: 2, Family: "standardDSv3Family"},
api.VMSizeStandardD4asV4: {CoreCount: 4, Family: "standardDASv4Family"},
api.VMSizeStandardD8asV4: {CoreCount: 8, Family: "standardDASv4Family"},
api.VMSizeStandardD16asV4: {CoreCount: 16, Family: "standardDASv4Family"},
api.VMSizeStandardD32asV4: {CoreCount: 32, Family: "standardDASv4Family"},
api.VMSizeStandardD4sV3: {CoreCount: 4, Family: "standardDSv3Family"},
api.VMSizeStandardD8sV3: {CoreCount: 8, Family: "standardDSv3Family"},
api.VMSizeStandardD16sV3: {CoreCount: 16, Family: "standardDSv3Family"},
api.VMSizeStandardD32sV3: {CoreCount: 32, Family: "standardDSv3Family"},
api.VMSizeStandardE4sV3: {CoreCount: 4, Family: "standardESv3Family"},
api.VMSizeStandardE8sV3: {CoreCount: 8, Family: "standardESv3Family"},
api.VMSizeStandardE16sV3: {CoreCount: 16, Family: "standardESv3Family"},
api.VMSizeStandardE32sV3: {CoreCount: 32, Family: "standardESv3Family"},
api.VMSizeStandardE64isV3: {CoreCount: 64, Family: "standardESv3Family"},
api.VMSizeStandardE64iV3: {CoreCount: 64, Family: "standardESv3Family"},
api.VMSizeStandardE80isV4: {CoreCount: 80, Family: "standardEISv4Family"},
api.VMSizeStandardE80idsV4: {CoreCount: 80, Family: "standardEIDSv4Family"},
api.VMSizeStandardE104iV5: {CoreCount: 104, Family: "standardEIv5Family"},
api.VMSizeStandardE104isV5: {CoreCount: 104, Family: "standardEISv5Family"},
api.VMSizeStandardE104idV5: {CoreCount: 104, Family: "standardEIDv5Family"},
api.VMSizeStandardE104idsV5: {CoreCount: 104, Family: "standardEIDSv5Family"},
api.VMSizeStandardF4sV2: {CoreCount: 4, Family: "standardFSv2Family"},
api.VMSizeStandardF8sV2: {CoreCount: 8, Family: "standardFSv2Family"},
api.VMSizeStandardF16sV2: {CoreCount: 16, Family: "standardFSv2Family"},
api.VMSizeStandardF32sV2: {CoreCount: 32, Family: "standardFSv2Family"},
api.VMSizeStandardF72sV2: {CoreCount: 72, Family: "standardFSv2Family"},
api.VMSizeStandardM128ms: {CoreCount: 128, Family: "standardMSFamily"},
api.VMSizeStandardG5: {CoreCount: 32, Family: "standardGFamily"},
api.VMSizeStandardGS5: {CoreCount: 32, Family: "standardGFamily"},
api.VMSizeStandardL4s: {CoreCount: 4, Family: "standardLsv2Family"},
api.VMSizeStandardL8s: {CoreCount: 8, Family: "standardLsv2Family"},
api.VMSizeStandardL16s: {CoreCount: 16, Family: "standardLsv2Family"},
api.VMSizeStandardL32s: {CoreCount: 32, Family: "standardLsv2Family"},
api.VMSizeStandardL8sV2: {CoreCount: 8, Family: "standardLsv2Family"},
api.VMSizeStandardL16sV2: {CoreCount: 16, Family: "standardLsv2Family"},
api.VMSizeStandardL32sV2: {CoreCount: 32, Family: "standardLsv2Family"},
api.VMSizeStandardL48sV2: {CoreCount: 48, Family: "standardLsv2Family"},
api.VMSizeStandardL64sV2: {CoreCount: 64, Family: "standardLsv2Family"},
// GPU nodes
api.VMSizeStandardNC4asT4V3: {CoreCount: 4, Family: "Standard_NC4as_T4_v3"},
api.VMSizeStandardNC8asT4V3: {CoreCount: 8, Family: "Standard_NC8as_T4_v3"},
api.VMSizeStandardNC16asT4V3: {CoreCount: 16, Family: "Standard_NC16as_T4_v3"},
api.VMSizeStandardNC64asT4V3: {CoreCount: 64, Family: "Standard_NC64as_T4_v3"},
}
vm, ok := vmTypesMap[vmSize]
if !ok {
return fmt.Errorf("unsupported VMSize %s", vmSize)
}
requiredResources["virtualMachines"] += count
requiredResources["PremiumDiskCount"] += count
switch vmSize {
case api.VMSizeStandardD2sV3:
requiredResources["standardDSv3Family"] += (count * 2)
requiredResources["cores"] += (count * 2)
requiredResources[vm.Family] += vm.CoreCount * count
requiredResources["cores"] += vm.CoreCount * count
case api.VMSizeStandardD4asV4:
requiredResources["standardDASv4Family"] += (count * 4)
requiredResources["cores"] += (count * 4)
case api.VMSizeStandardD8asV4:
requiredResources["standardDASv4Family"] += (count * 8)
requiredResources["cores"] += (count * 8)
case api.VMSizeStandardD16asV4:
requiredResources["standardDASv4Family"] += (count * 16)
requiredResources["cores"] += (count * 16)
case api.VMSizeStandardD32asV4:
requiredResources["standardDASv4Family"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardD4sV3:
requiredResources["standardDSv3Family"] += (count * 4)
requiredResources["cores"] += (count * 4)
case api.VMSizeStandardD8sV3:
requiredResources["standardDSv3Family"] += (count * 8)
requiredResources["cores"] += (count * 8)
case api.VMSizeStandardD16sV3:
requiredResources["standardDSv3Family"] += (count * 16)
requiredResources["cores"] += (count * 16)
case api.VMSizeStandardD32sV3:
requiredResources["standardDSv3Family"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardE4sV3:
requiredResources["standardESv3Family"] += (count * 4)
requiredResources["cores"] += (count * 4)
case api.VMSizeStandardE8sV3:
requiredResources["standardESv3Family"] += (count * 8)
requiredResources["cores"] += (count * 8)
case api.VMSizeStandardE16sV3:
requiredResources["standardESv3Family"] += (count * 16)
requiredResources["cores"] += (count * 16)
case api.VMSizeStandardE32sV3:
requiredResources["standardESv3Family"] += (count * 32)
requiredResources["cores"] += (count * 32)
//Support for Compute isolation
case api.VMSizeStandardE64iV3:
requiredResources["standardEIv3Family"] += (count * 64)
requiredResources["cores"] += (count * 64)
case api.VMSizeStandardE64isV3:
requiredResources["standardEISv3Family"] += (count * 64)
requiredResources["cores"] += (count * 64)
case api.VMSizeStandardF4sV2:
requiredResources["standardFSv2Family"] += (count * 4)
requiredResources["cores"] += (count * 4)
case api.VMSizeStandardF8sV2:
requiredResources["standardFSv2Family"] += (count * 8)
requiredResources["cores"] += (count * 8)
case api.VMSizeStandardF16sV2:
requiredResources["standardFSv2Family"] += (count * 16)
requiredResources["cores"] += (count * 16)
case api.VMSizeStandardF32sV2:
requiredResources["standardFSv2Family"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardF72sV2:
requiredResources["standardFSv2Family"] += (count * 72)
requiredResources["cores"] += (count * 72)
case api.VMSizeStandardM128ms:
requiredResources["standardMSFamily"] += (count * 128)
requiredResources["cores"] += (count * 128)
case api.VMSizeStandardG5:
requiredResources["standardGFamily"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardGS5:
requiredResources["standardGSFamily"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardL4s:
requiredResources["standardLsFamily"] += (count * 4)
requiredResources["cores"] += (count * 4)
case api.VMSizeStandardL8s:
requiredResources["standardLsFamily"] += (count * 8)
requiredResources["cores"] += (count * 8)
case api.VMSizeStandardL16s:
requiredResources["standardLsFamily"] += (count * 16)
requiredResources["cores"] += (count * 16)
case api.VMSizeStandardL32s:
requiredResources["standardLsFamily"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardL8sV2:
requiredResources["standardLsv2Family"] += (count * 8)
requiredResources["cores"] += (count * 8)
case api.VMSizeStandardL16sV2:
requiredResources["standardLsv2Family"] += (count * 16)
requiredResources["cores"] += (count * 16)
case api.VMSizeStandardL32sV2:
requiredResources["standardLsv2Family"] += (count * 32)
requiredResources["cores"] += (count * 32)
case api.VMSizeStandardL48sV2:
requiredResources["standardLsv2Family"] += (count * 48)
requiredResources["cores"] += (count * 48)
case api.VMSizeStandardL64sV2:
requiredResources["standardLsv2Family"] += (count * 64)
requiredResources["cores"] += (count * 64)
default:
//will only happen if pkg/api verification allows new VMSizes
return fmt.Errorf("unexpected node VMSize %s", vmSize)
}
return nil
}
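// Usage sketch (hypothetical counts, not part of this change). For
// Standard_D8s_v3, both the map-based and the switch-based versions
// accumulate the same totals:
//
//	requiredResources := map[string]int{}
//	_ = addRequiredResources(requiredResources, api.VMSizeStandardD8sV3, 3)
//	// requiredResources: virtualMachines=3, PremiumDiskCount=3,
//	//                    standardDSv3Family=24, cores=24 (3 VMs x 8 cores)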
@ -96,7 +137,6 @@ func (dv *dynamic) ValidateQuota(ctx context.Context, oc *api.OpenShiftCluster)
if err != nil {
return err
}
//worker node resource calculation
for _, w := range oc.Properties.WorkerProfiles {
err = addRequiredResources(requiredResources, w.VMSize, w.Count)


@ -23,50 +23,22 @@ func TestValidateVMSku(t *testing.T) {
name string
restrictions mgmtcompute.ResourceSkuRestrictionsReasonCode
restrictionLocation *[]string
restrictedZones []string
targetLocation string
workerProfile1Sku string
workerProfile2Sku string
masterProfileSku string
availableSku string
availableSku2 string
restrictedSku string
resourceSkusClientErr error
wantErr string
}{
{
name: "worker and master skus are valid",
name: "worker and master sku are valid",
workerProfile1Sku: "Standard_D4s_v2",
workerProfile2Sku: "Standard_D4s_v2",
masterProfileSku: "Standard_D4s_v2",
availableSku: "Standard_D4s_v2",
},
{
name: "worker and master skus are distinct, both valid",
workerProfile1Sku: "Standard_E104i_v5",
workerProfile2Sku: "Standard_E104i_v5",
masterProfileSku: "Standard_D4s_v2",
availableSku: "Standard_E104i_v5",
availableSku2: "Standard_D4s_v2",
},
{
name: "worker and master skus are distinct, one invalid",
workerProfile1Sku: "Standard_E104i_v5",
workerProfile2Sku: "Standard_E104i_v5",
masterProfileSku: "Standard_D4s_v2",
availableSku: "Standard_E104i_v5",
availableSku2: "Standard_E104i_v5",
wantErr: "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_D4s_v2' is unavailable in region 'eastus'",
},
{
name: "worker and master skus are distinct, both invalid",
workerProfile1Sku: "Standard_E104i_v5",
workerProfile2Sku: "Standard_E104i_v5",
masterProfileSku: "Standard_D4s_v2",
availableSku: "Standard_L8s_v2",
availableSku2: "Standard_L16s_v2",
wantErr: "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_D4s_v2' is unavailable in region 'eastus'",
},
{
name: "unable to retrieve skus information",
workerProfile1Sku: "Standard_D4s_v2",
@ -124,30 +96,12 @@ func TestValidateVMSku(t *testing.T) {
restrictedSku: "Standard_L80",
wantErr: "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_L80' is restricted in region 'eastus' for selected subscription",
},
{
name: "sku is restricted in a single zone",
restrictions: mgmtcompute.NotAvailableForSubscription,
restrictionLocation: &[]string{
"eastus",
},
restrictedZones: []string{"3"},
workerProfile1Sku: "Standard_D4s_v2",
workerProfile2Sku: "Standard_D4s_v2",
masterProfileSku: "Standard_L80",
availableSku: "Standard_D4s_v2",
restrictedSku: "Standard_L80",
wantErr: "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_L80' is restricted in region 'eastus' for selected subscription",
},
} {
t.Run(tt.name, func(t *testing.T) {
if tt.targetLocation == "" {
tt.targetLocation = "eastus"
}
if tt.restrictedZones == nil {
tt.restrictedZones = []string{"1", "2", "3"}
}
controller := gomock.NewController(t)
defer controller.Finish()
@ -178,21 +132,11 @@ func TestValidateVMSku(t *testing.T) {
Capabilities: &[]mgmtcompute.ResourceSkuCapabilities{},
ResourceType: to.StringPtr("virtualMachines"),
},
{
Name: &tt.availableSku2,
Locations: &[]string{"eastus"},
LocationInfo: &[]mgmtcompute.ResourceSkuLocationInfo{
{Zones: &[]string{"1, 2, 3"}},
},
Restrictions: &[]mgmtcompute.ResourceSkuRestrictions{},
Capabilities: &[]mgmtcompute.ResourceSkuCapabilities{},
ResourceType: to.StringPtr("virtualMachines"),
},
{
Name: &tt.restrictedSku,
Locations: &[]string{tt.targetLocation},
LocationInfo: &[]mgmtcompute.ResourceSkuLocationInfo{
{Zones: &tt.restrictedZones},
{Zones: &[]string{"1, 2, 3"}},
},
Restrictions: &[]mgmtcompute.ResourceSkuRestrictions{
{


@ -7,93 +7,66 @@ import (
"github.com/Azure/ARO-RP/pkg/api"
)
var supportedMasterVMSizes = map[api.VMSize]bool{
// General purpose
api.VMSizeStandardD8sV3: true,
api.VMSizeStandardD16sV3: true,
api.VMSizeStandardD32sV3: true,
// Memory optimized
api.VMSizeStandardE64iV3: true,
api.VMSizeStandardE64isV3: true,
api.VMSizeStandardE80isV4: true,
api.VMSizeStandardE80idsV4: true,
api.VMSizeStandardE104iV5: true,
api.VMSizeStandardE104isV5: true,
api.VMSizeStandardE104idV5: true,
api.VMSizeStandardE104idsV5: true,
// Compute optimized
api.VMSizeStandardF72sV2: true,
// Memory and storage optimized
api.VMSizeStandardGS5: true,
api.VMSizeStandardG5: true,
// Memory and compute optimized
api.VMSizeStandardM128ms: true,
}
var supportedWorkerVMSizes = map[api.VMSize]bool{
// General purpose
api.VMSizeStandardD4asV4: true,
api.VMSizeStandardD8asV4: true,
api.VMSizeStandardD16asV4: true,
api.VMSizeStandardD32asV4: true,
api.VMSizeStandardD4sV3: true,
api.VMSizeStandardD8sV3: true,
api.VMSizeStandardD16sV3: true,
api.VMSizeStandardD32sV3: true,
// Memory optimized
api.VMSizeStandardE4sV3: true,
api.VMSizeStandardE8sV3: true,
api.VMSizeStandardE16sV3: true,
api.VMSizeStandardE32sV3: true,
api.VMSizeStandardE64isV3: true,
api.VMSizeStandardE64iV3: true,
api.VMSizeStandardE80isV4: true,
api.VMSizeStandardE80idsV4: true,
api.VMSizeStandardE104iV5: true,
api.VMSizeStandardE104isV5: true,
api.VMSizeStandardE104idV5: true,
api.VMSizeStandardE104idsV5: true,
// Compute optimized
api.VMSizeStandardF4sV2: true,
api.VMSizeStandardF8sV2: true,
api.VMSizeStandardF16sV2: true,
api.VMSizeStandardF32sV2: true,
api.VMSizeStandardF72sV2: true,
// Memory and storage optimized
api.VMSizeStandardG5: true,
api.VMSizeStandardGS5: true,
// Memory and compute optimized
api.VMSizeStandardM128ms: true,
// Storage optimized
api.VMSizeStandardL4s: true,
api.VMSizeStandardL8s: true,
api.VMSizeStandardL16s: true,
api.VMSizeStandardL32s: true,
api.VMSizeStandardL8sV2: true,
api.VMSizeStandardL16sV2: true,
api.VMSizeStandardL32sV2: true,
api.VMSizeStandardL48sV2: true,
api.VMSizeStandardL64sV2: true,
// GPU
api.VMSizeStandardNC4asT4V3: true,
api.VMSizeStandardNC8asT4V3: true,
api.VMSizeStandardNC16asT4V3: true,
api.VMSizeStandardNC64asT4V3: true,
}
func DiskSizeIsValid(sizeGB int) bool {
return sizeGB >= 128
}
func VMSizeIsValid(vmSize api.VMSize, requiredD2sV3Workers, isMaster bool) bool {
if isMaster {
return supportedMasterVMSizes[vmSize]
}
if (supportedWorkerVMSizes[vmSize] && !requiredD2sV3Workers) ||
(requiredD2sV3Workers && vmSize == api.VMSizeStandardD2sV3) {
return true
}
return false
}
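
A minimal standalone sketch of the map-as-set pattern the new code relies on (hypothetical size names, not the api package constants):

package main

import "fmt"

// A map[string]bool doubles as a set: indexing with a missing key
// returns the zero value, false, so unknown sizes are rejected for free.
var supportedSizes = map[string]bool{
	"Standard_D8s_v3":  true,
	"Standard_F72s_v2": true,
}

func sizeIsValid(size string) bool {
	return supportedSizes[size]
}

func main() {
	fmt.Println(sizeIsValid("Standard_D8s_v3")) // true
	fmt.Println(sizeIsValid("Standard_A1_v2"))  // false
}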


@ -41,6 +41,7 @@ func TestAdminUpdateSteps(t *testing.T) {
},
shouldRunSteps: []string{
"[Action initializeKubernetesClients-fm]",
"[Action initializeOperatorDeployer-fm]",
"[Action ensureBillingRecord-fm]",
"[Action ensureDefaults-fm]",
"[Action fixupClusterSPObjectID-fm]",
@ -63,6 +64,7 @@ func TestAdminUpdateSteps(t *testing.T) {
},
shouldRunSteps: []string{
"[Action initializeKubernetesClients-fm]",
"[Action initializeOperatorDeployer-fm]",
"[Action ensureBillingRecord-fm]",
"[Action ensureDefaults-fm]",
"[Action fixupClusterSPObjectID-fm]",
@ -147,6 +149,7 @@ func TestAdminUpdateSteps(t *testing.T) {
},
shouldRunSteps: []string{
"[Action initializeKubernetesClients-fm]",
"[Action initializeOperatorDeployer-fm]",
"[Action ensureBillingRecord-fm]",
"[Action ensureDefaults-fm]",
"[Action fixupClusterSPObjectID-fm]",


@ -8,13 +8,14 @@ import (
)
func (m *manager) isIngressProfileAvailable() bool {
// We try to acquire the IngressProfiles data at frontend best effort enrichment time only.
// We try to aqcuire the IngressProfiles data at frontend best effort enrichment time only.
// When we start deallocated VMs and wait for the API to become available again, we don't pick
// the information up, even though it would be available.
return len(m.doc.OpenShiftCluster.Properties.IngressProfiles) != 0
}
func (m *manager) ensureAROOperator(ctx context.Context) error {
// Ensure the IngressProfile information is available from the cluster, which is not the case when the cluster VMs were freshly restarted.
if !m.isIngressProfileAvailable() {
// If the ingress profile is not available, ARO operator update/deploy will fail.
m.log.Error("skip ensureAROOperator")


@ -143,7 +143,7 @@ func TestAroDeploymentReady(t *testing.T) {
wantRes bool
}{
{
name: "operator is ready",
name: "create/update success",
doc: &api.OpenShiftClusterDocument{
Key: strings.ToLower(key),
OpenShiftCluster: &api.OpenShiftCluster{
@ -165,29 +165,6 @@ func TestAroDeploymentReady(t *testing.T) {
},
wantRes: true,
},
{
name: "operator is not ready",
doc: &api.OpenShiftClusterDocument{
Key: strings.ToLower(key),
OpenShiftCluster: &api.OpenShiftCluster{
ID: key,
Properties: api.OpenShiftClusterProperties{
IngressProfiles: []api.IngressProfile{
{
Visibility: api.VisibilityPublic,
Name: "default",
},
},
},
},
},
mocks: func(dep *mock_deploy.MockOperator) {
dep.EXPECT().
IsReady(gomock.Any()).
Return(false, nil)
},
wantRes: false,
},
{
name: "enriched data not available - skip",
doc: &api.OpenShiftClusterDocument{
@ -221,6 +198,7 @@ func TestAroDeploymentReady(t *testing.T) {
if err != nil || ok != tt.wantRes {
t.Error(err)
}
})
}
}


@ -10,10 +10,10 @@ import (
"github.com/Azure/go-autorest/autorest/azure"
configclient "github.com/openshift/client-go/config/clientset/versioned"
imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
operatorclient "github.com/openshift/client-go/operator/clientset/versioned"
samplesclient "github.com/openshift/client-go/samples/clientset/versioned"
securityclient "github.com/openshift/client-go/security/clientset/versioned"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
"github.com/sirupsen/logrus"
extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
@ -89,7 +89,7 @@ type manager struct {
kubernetescli kubernetes.Interface
extensionscli extensionsclient.Interface
maocli machineclient.Interface
maocli maoclient.Interface
mcocli mcoclient.Interface
operatorcli operatorclient.Interface
configcli configclient.Interface


@ -5,10 +5,10 @@ package cluster
import (
"context"
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"net/http"
"regexp"
"strings"
mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"
@ -16,11 +16,19 @@ import (
"github.com/Azure/go-autorest/autorest"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/Azure/go-autorest/autorest/to"
utilrand "k8s.io/apimachinery/pkg/util/rand"
"github.com/openshift/installer/pkg/asset/installconfig"
"github.com/openshift/installer/pkg/asset/releaseimage"
"github.com/openshift/installer/pkg/asset/targets"
"github.com/openshift/installer/pkg/asset/templates/content/bootkube"
"github.com/openshift/installer/pkg/types"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/Azure/ARO-RP/pkg/api"
"github.com/Azure/ARO-RP/pkg/bootstraplogging"
"github.com/Azure/ARO-RP/pkg/cluster/graph"
"github.com/Azure/ARO-RP/pkg/env"
"github.com/Azure/ARO-RP/pkg/util/arm"
"github.com/Azure/ARO-RP/pkg/util/feature"
"github.com/Azure/ARO-RP/pkg/util/stringutils"
"github.com/Azure/ARO-RP/pkg/util/subnet"
)
@ -29,40 +37,47 @@ func (m *manager) createDNS(ctx context.Context) error {
return m.dns.Create(ctx, m.doc.OpenShiftCluster)
}
func (m *manager) ensureInfraID(ctx context.Context) (err error) {
func (m *manager) ensureInfraID(ctx context.Context, installConfig *installconfig.InstallConfig) error {
if m.doc.OpenShiftCluster.Properties.InfraID != "" {
return nil
}
g := graph.Graph{}
g.Set(&installconfig.InstallConfig{
Config: &types.InstallConfig{
ObjectMeta: metav1.ObjectMeta{
Name: strings.ToLower(m.doc.OpenShiftCluster.Name),
},
},
})
err := g.Resolve(&installconfig.ClusterID{})
if err != nil {
return err
}
// generate an infra ID that is at most 27 characters long, with the last 5 characters random
infraID := generateInfraID(strings.ToLower(m.doc.OpenShiftCluster.Name), 27, 5)
clusterID := g.Get(&installconfig.ClusterID{}).(*installconfig.ClusterID)
m.doc, err = m.db.PatchWithLease(ctx, m.doc.Key, func(doc *api.OpenShiftClusterDocument) error {
doc.OpenShiftCluster.Properties.InfraID = infraID
doc.OpenShiftCluster.Properties.InfraID = clusterID.InfraID
return nil
})
return err
}
func (m *manager) ensureResourceGroup(ctx context.Context) (err error) {
func (m *manager) ensureResourceGroup(ctx context.Context) error {
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
group := mgmtfeatures.ResourceGroup{}
// The FPSP's role definition does not have read on a resource group
// if the resource group does not exist.
// Retain the existing resource group configuration (such as tags) if it exists
if m.doc.OpenShiftCluster.Properties.ProvisioningState != api.ProvisioningStateCreating {
group, err = m.resourceGroups.Get(ctx, resourceGroup)
if err != nil {
if detailedErr, ok := err.(autorest.DetailedError); !ok || detailedErr.StatusCode != http.StatusNotFound {
return err
}
}
group := mgmtfeatures.ResourceGroup{
Location: &m.doc.OpenShiftCluster.Location,
ManagedBy: to.StringPtr(m.doc.OpenShiftCluster.ID),
}
group.Location = &m.doc.OpenShiftCluster.Location
group.ManagedBy = &m.doc.OpenShiftCluster.ID
// HACK: set purge=true on dev clusters so our purger wipes them out since there is no deny assignment in place
if m.env.IsLocalDevelopmentMode() {
// grab tags so we do not accidentally remove them on createOrUpdate, set purge tag to true for dev clusters
rg, err := m.resourceGroups.Get(ctx, resourceGroup)
if err == nil {
group.Tags = rg.Tags
}
if group.Tags == nil {
group.Tags = map[string]*string{}
}
@ -72,7 +87,7 @@ func (m *manager) ensureResourceGroup(ctx context.Context) (err error) {
// According to https://stackoverflow.microsoft.com/a/245391/62320,
// re-PUTting our RG should re-create RP RBAC after a customer subscription
// migrates between tenants.
_, err = m.resourceGroups.CreateOrUpdate(ctx, resourceGroup, group)
_, err := m.resourceGroups.CreateOrUpdate(ctx, resourceGroup, group)
var serviceError *azure.ServiceError
// CreateOrUpdate wraps a DetailedError, which in turn wraps a *RequestError (at least when the error is generated in ResourceGroup CreateOrUpdateResponder)
@ -87,16 +102,6 @@ func (m *manager) ensureResourceGroup(ctx context.Context) (err error) {
serviceError = requestErr.ServiceError
}
if serviceError != nil && serviceError.Code == "ResourceGroupManagedByMismatch" {
return &api.CloudError{
StatusCode: http.StatusBadRequest,
CloudErrorBody: &api.CloudErrorBody{
Code: api.CloudErrorCodeClusterResourceGroupAlreadyExists,
Message: "Resource group " + m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID +
" must not already exist.",
},
}
}
if serviceError != nil && serviceError.Code == "RequestDisallowedByPolicy" {
// if request was disallowed by policy, inform user so they can take appropriate action
b, _ := json.Marshal(serviceError)
@ -120,30 +125,29 @@ func (m *manager) ensureResourceGroup(ctx context.Context) (err error) {
return m.env.EnsureARMResourceGroupRoleAssignment(ctx, m.fpAuthorizer, resourceGroup)
}
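
The unwrap chain described in the comment above can be sketched in isolation; a minimal example assuming only the autorest error types (DetailedError, *azure.RequestError, ServiceError):

package main

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
)

// serviceErrorCode peels the layers: a DetailedError whose Original
// is a *RequestError carrying the ARM ServiceError code.
func serviceErrorCode(err error) string {
	if detailedErr, ok := err.(autorest.DetailedError); ok {
		if requestErr, ok := detailedErr.Original.(*azure.RequestError); ok &&
			requestErr.ServiceError != nil {
			return requestErr.ServiceError.Code
		}
	}
	return ""
}

func main() {
	err := autorest.DetailedError{
		Original: &azure.RequestError{
			ServiceError: &azure.ServiceError{Code: "RequestDisallowedByPolicy"},
		},
	}
	fmt.Println(serviceErrorCode(err)) // RequestDisallowedByPolicy
}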
func (m *manager) deployStorageTemplate(ctx context.Context) error {
func (m *manager) deployStorageTemplate(ctx context.Context, installConfig *installconfig.InstallConfig) error {
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
infraID := m.doc.OpenShiftCluster.Properties.InfraID
clusterStorageAccountName := "cluster" + m.doc.OpenShiftCluster.Properties.StorageSuffix
azureRegion := strings.ToLower(m.doc.OpenShiftCluster.Location) // Used in k8s object names, so must pass DNS-1123 validation
resources := []*arm.Resource{
m.storageAccount(clusterStorageAccountName, azureRegion, true),
m.storageAccount(clusterStorageAccountName, installConfig.Config.Azure.Region, true),
m.storageAccountBlobContainer(clusterStorageAccountName, "ignition"),
m.storageAccountBlobContainer(clusterStorageAccountName, "aro"),
m.storageAccount(m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName, azureRegion, true),
m.storageAccount(m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName, installConfig.Config.Azure.Region, true),
m.storageAccountBlobContainer(m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName, "image-registry"),
m.clusterNSG(infraID, azureRegion),
m.clusterNSG(infraID, installConfig.Config.Azure.Region),
m.clusterServicePrincipalRBAC(),
m.networkPrivateLinkService(azureRegion),
m.networkPublicIPAddress(azureRegion, infraID+"-pip-v4"),
m.networkInternalLoadBalancer(azureRegion),
m.networkPublicLoadBalancer(azureRegion),
m.networkPrivateLinkService(installConfig),
m.networkPublicIPAddress(installConfig, infraID+"-pip-v4"),
m.networkInternalLoadBalancer(installConfig),
m.networkPublicLoadBalancer(installConfig),
}
if m.doc.OpenShiftCluster.Properties.IngressProfiles[0].Visibility == api.VisibilityPublic {
resources = append(resources,
m.networkPublicIPAddress(azureRegion, infraID+"-default-v4"),
m.networkPublicIPAddress(installConfig, infraID+"-default-v4"),
)
}
@ -163,7 +167,73 @@ func (m *manager) deployStorageTemplate(ctx context.Context) error {
t.Resources = append(t.Resources, m.denyAssignment())
}
return arm.DeployTemplate(ctx, m.log, m.deployments, resourceGroup, "storage", t, nil)
return m.deployARMTemplate(ctx, resourceGroup, "storage", t, nil)
}
func (m *manager) ensureGraph(ctx context.Context, installConfig *installconfig.InstallConfig, image *releaseimage.Image) error {
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
clusterStorageAccountName := "cluster" + m.doc.OpenShiftCluster.Properties.StorageSuffix
infraID := m.doc.OpenShiftCluster.Properties.InfraID
exists, err := m.graph.Exists(ctx, resourceGroup, clusterStorageAccountName)
if err != nil || exists {
return err
}
clusterID := &installconfig.ClusterID{
UUID: m.doc.ID,
InfraID: infraID,
}
bootstrapLoggingConfig, err := bootstraplogging.GetConfig(m.env, m.doc)
if err != nil {
return err
}
httpSecret := make([]byte, 64)
_, err = rand.Read(httpSecret)
if err != nil {
return err
}
imageRegistryConfig := &bootkube.AROImageRegistryConfig{
AccountName: m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName,
ContainerName: "image-registry",
HTTPSecret: hex.EncodeToString(httpSecret),
}
dnsConfig := &bootkube.ARODNSConfig{
APIIntIP: m.doc.OpenShiftCluster.Properties.APIServerProfile.IntIP,
IngressIP: m.doc.OpenShiftCluster.Properties.IngressProfiles[0].IP,
}
if m.doc.OpenShiftCluster.Properties.NetworkProfile.GatewayPrivateEndpointIP != "" {
dnsConfig.GatewayPrivateEndpointIP = m.doc.OpenShiftCluster.Properties.NetworkProfile.GatewayPrivateEndpointIP
dnsConfig.GatewayDomains = append(m.env.GatewayDomains(), m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName+".blob."+m.env.Environment().StorageEndpointSuffix)
}
g := graph.Graph{}
g.Set(installConfig, image, clusterID, bootstrapLoggingConfig, dnsConfig, imageRegistryConfig)
m.log.Print("resolving graph")
for _, a := range targets.Cluster {
err = g.Resolve(a)
if err != nil {
return err
}
}
// Handle MTU3900 feature flag
subProperties := m.subscriptionDoc.Subscription.Properties
if feature.IsRegisteredForFeature(subProperties, api.FeatureFlagMTU3900) {
m.log.Printf("applying feature flag %s", api.FeatureFlagMTU3900)
if err = m.overrideEthernetMTU(g); err != nil {
return err
}
}
// the graph is quite big so we store it in a storage account instead of in cosmosdb
return m.graph.Save(ctx, resourceGroup, clusterStorageAccountName, g)
}
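
The HTTP secret above is 64 bytes from crypto/rand, hex-encoded to a 128-character string. A standalone sketch of that step:

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// 64 bytes from crypto/rand, hex-encoded: a 128-character secret.
	httpSecret := make([]byte, 64)
	if _, err := rand.Read(httpSecret); err != nil {
		panic(err)
	}
	fmt.Println(len(hex.EncodeToString(httpSecret))) // 128
}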
func (m *manager) attachNSGs(ctx context.Context) error {
@ -232,29 +302,3 @@ func (m *manager) setMasterSubnetPolicies(ctx context.Context) error {
return m.subnet.CreateOrUpdate(ctx, m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID, s)
}
// generateInfraID takes base and returns an ID that
// - is of length maxLen
// - contains randomLen random bytes
// - only contains `alphanum` or `-`
// see openshift/installer/pkg/asset/installconfig/clusterid.go for original implementation
func generateInfraID(base string, maxLen int, randomLen int) string {
maxBaseLen := maxLen - (randomLen + 1)
// replace all characters that are not `alphanum` or `-` with `-`
re := regexp.MustCompile("[^A-Za-z0-9-]")
base = re.ReplaceAllString(base, "-")
// replace all multiple dashes in a sequence with single one.
re = regexp.MustCompile(`-{2,}`)
base = re.ReplaceAllString(base, "-")
// truncate to maxBaseLen
if len(base) > maxBaseLen {
base = base[:maxBaseLen]
}
base = strings.TrimRight(base, "-")
// add random chars to the end to randomize
return fmt.Sprintf("%s-%s", base, utilrand.String(randomLen))
}
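
A standalone sketch of the sanitization steps above, with a fixed "abcde" suffix standing in for the 5 random characters utilrand.String produces:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// sanitizeBase mirrors the steps of generateInfraID up to the random suffix.
func sanitizeBase(base string, maxBaseLen int) string {
	base = regexp.MustCompile(`[^A-Za-z0-9-]`).ReplaceAllString(base, "-")
	base = regexp.MustCompile(`-{2,}`).ReplaceAllString(base, "-")
	if len(base) > maxBaseLen {
		base = base[:maxBaseLen]
	}
	return strings.TrimRight(base, "-")
}

func main() {
	// maxLen 27 with 5 random chars leaves 21 for the base: 27 - (5 + 1).
	fmt.Println(sanitizeBase("my_cluster!!name", 21) + "-abcde")
	// Output: my-cluster-name-abcde
}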


@ -10,6 +10,7 @@ import (
mgmtauthorization "github.com/Azure/azure-sdk-for-go/services/preview/authorization/mgmt/2018-09-01-preview/authorization"
mgmtstorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2019-06-01/storage"
"github.com/Azure/go-autorest/autorest/to"
"github.com/openshift/installer/pkg/asset/installconfig"
"github.com/Azure/ARO-RP/pkg/api"
"github.com/Azure/ARO-RP/pkg/util/arm"
@ -77,13 +78,14 @@ func (m *manager) clusterServicePrincipalRBAC() *arm.Resource {
// Legacy storage accounts (public) are not encrypted and cannot be retrofitted.
// The flag controls this behavior in update/create.
func (m *manager) storageAccount(name, region string, encrypted bool) *arm.Resource {
virtualNetworkRules := []mgmtstorage.VirtualNetworkRule{
{
VirtualNetworkResourceID: &m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID,
VirtualNetworkResourceID: to.StringPtr(m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID),
Action: mgmtstorage.Allow,
},
{
VirtualNetworkResourceID: &m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].SubnetID,
VirtualNetworkResourceID: to.StringPtr(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].SubnetID),
Action: mgmtstorage.Allow,
},
{
@ -176,7 +178,7 @@ func (m *manager) storageAccountBlobContainer(storageAccountName, name string) *
}
}
func (m *manager) networkPrivateLinkService(azureRegion string) *arm.Resource {
func (m *manager) networkPrivateLinkService(installConfig *installconfig.InstallConfig) *arm.Resource {
return &arm.Resource{
Resource: &mgmtnetwork.PrivateLinkService{
PrivateLinkServiceProperties: &mgmtnetwork.PrivateLinkServiceProperties{
@ -189,7 +191,7 @@ func (m *manager) networkPrivateLinkService(azureRegion string) *arm.Resource {
{
PrivateLinkServiceIPConfigurationProperties: &mgmtnetwork.PrivateLinkServiceIPConfigurationProperties{
Subnet: &mgmtnetwork.Subnet{
ID: &m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID,
ID: to.StringPtr(m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID),
},
},
Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID + "-pls-nic"),
@ -208,7 +210,7 @@ func (m *manager) networkPrivateLinkService(azureRegion string) *arm.Resource {
},
Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID + "-pls"),
Type: to.StringPtr("Microsoft.Network/privateLinkServices"),
Location: &azureRegion,
Location: &installConfig.Config.Azure.Region,
},
APIVersion: azureclient.APIVersion("Microsoft.Network"),
DependsOn: []string{
@ -244,7 +246,7 @@ func (m *manager) networkPrivateEndpoint() *arm.Resource {
}
}
func (m *manager) networkPublicIPAddress(azureRegion string, name string) *arm.Resource {
func (m *manager) networkPublicIPAddress(installConfig *installconfig.InstallConfig, name string) *arm.Resource {
return &arm.Resource{
Resource: &mgmtnetwork.PublicIPAddress{
Sku: &mgmtnetwork.PublicIPAddressSku{
@ -255,13 +257,13 @@ func (m *manager) networkPublicIPAddress(azureRegion string, name string) *arm.R
},
Name: &name,
Type: to.StringPtr("Microsoft.Network/publicIPAddresses"),
Location: &azureRegion,
Location: &installConfig.Config.Azure.Region,
},
APIVersion: azureclient.APIVersion("Microsoft.Network"),
}
}
func (m *manager) networkInternalLoadBalancer(azureRegion string) *arm.Resource {
func (m *manager) networkInternalLoadBalancer(installConfig *installconfig.InstallConfig) *arm.Resource {
return &arm.Resource{
Resource: &mgmtnetwork.LoadBalancer{
Sku: &mgmtnetwork.LoadBalancerSku{
@ -281,7 +283,7 @@ func (m *manager) networkInternalLoadBalancer(azureRegion string) *arm.Resource
},
BackendAddressPools: &[]mgmtnetwork.BackendAddressPool{
{
Name: &m.doc.OpenShiftCluster.Properties.InfraID,
Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID),
},
{
Name: to.StringPtr("ssh-0"),
@ -428,13 +430,13 @@ func (m *manager) networkInternalLoadBalancer(azureRegion string) *arm.Resource
},
Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID + "-internal"),
Type: to.StringPtr("Microsoft.Network/loadBalancers"),
Location: &azureRegion,
Location: &installConfig.Config.Azure.Region,
},
APIVersion: azureclient.APIVersion("Microsoft.Network"),
}
}
func (m *manager) networkPublicLoadBalancer(azureRegion string) *arm.Resource {
func (m *manager) networkPublicLoadBalancer(installConfig *installconfig.InstallConfig) *arm.Resource {
lb := &mgmtnetwork.LoadBalancer{
Sku: &mgmtnetwork.LoadBalancerSku{
Name: mgmtnetwork.LoadBalancerSkuNameStandard,
@ -475,9 +477,9 @@ func (m *manager) networkPublicLoadBalancer(azureRegion string) *arm.Resource {
},
},
},
Name: &m.doc.OpenShiftCluster.Properties.InfraID,
Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID),
Type: to.StringPtr("Microsoft.Network/loadBalancers"),
Location: &azureRegion,
Location: &installConfig.Config.Azure.Region,
}
if m.doc.OpenShiftCluster.Properties.APIServerProfile.Visibility == api.VisibilityPublic {


@ -0,0 +1,240 @@
package cluster
// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.
import (
"context"
"crypto/x509"
"encoding/base64"
"fmt"
"strings"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/Azure/go-autorest/autorest/to"
"github.com/openshift/installer/pkg/asset/installconfig"
icazure "github.com/openshift/installer/pkg/asset/installconfig/azure"
"github.com/openshift/installer/pkg/asset/releaseimage"
"github.com/openshift/installer/pkg/ipnet"
"github.com/openshift/installer/pkg/types"
azuretypes "github.com/openshift/installer/pkg/types/azure"
"github.com/openshift/installer/pkg/types/validation"
"golang.org/x/crypto/ssh"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/Azure/ARO-RP/pkg/api"
"github.com/Azure/ARO-RP/pkg/util/computeskus"
"github.com/Azure/ARO-RP/pkg/util/pullsecret"
"github.com/Azure/ARO-RP/pkg/util/rhcos"
"github.com/Azure/ARO-RP/pkg/util/stringutils"
"github.com/Azure/ARO-RP/pkg/util/subnet"
"github.com/Azure/ARO-RP/pkg/util/version"
)
func (m *manager) generateInstallConfig(ctx context.Context) (*installconfig.InstallConfig, *releaseimage.Image, error) {
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
pullSecret, err := pullsecret.Build(m.doc.OpenShiftCluster, string(m.doc.OpenShiftCluster.Properties.ClusterProfile.PullSecret))
if err != nil {
return nil, nil, err
}
for _, key := range []string{"cloud.openshift.com"} {
pullSecret, err = pullsecret.RemoveKey(pullSecret, key)
if err != nil {
return nil, nil, err
}
}
r, err := azure.ParseResourceID(m.doc.OpenShiftCluster.ID)
if err != nil {
return nil, nil, err
}
_, masterSubnetName, err := subnet.Split(m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID)
if err != nil {
return nil, nil, err
}
vnetID, workerSubnetName, err := subnet.Split(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].SubnetID)
if err != nil {
return nil, nil, err
}
vnetr, err := azure.ParseResourceID(vnetID)
if err != nil {
return nil, nil, err
}
privateKey, err := x509.ParsePKCS1PrivateKey(m.doc.OpenShiftCluster.Properties.SSHKey)
if err != nil {
return nil, nil, err
}
sshkey, err := ssh.NewPublicKey(&privateKey.PublicKey)
if err != nil {
return nil, nil, err
}
domain := m.doc.OpenShiftCluster.Properties.ClusterProfile.Domain
if !strings.ContainsRune(domain, '.') {
domain += "." + m.env.Domain()
}
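// Example (hypothetical values): domain "mycluster" with m.env.Domain() ==
// "eastus.aroapp.io" becomes "mycluster.eastus.aroapp.io"; ObjectMeta.Name
// below takes the part before the first dot, BaseDomain the remainder.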
masterSKU, err := m.env.VMSku(string(m.doc.OpenShiftCluster.Properties.MasterProfile.VMSize))
if err != nil {
return nil, nil, err
}
masterZones := computeskus.Zones(masterSKU)
if len(masterZones) == 0 {
masterZones = []string{""}
}
workerSKU, err := m.env.VMSku(string(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].VMSize))
if err != nil {
return nil, nil, err
}
workerZones := computeskus.Zones(workerSKU)
if len(workerZones) == 0 {
workerZones = []string{""}
}
SoftwareDefinedNetwork := string(api.SoftwareDefinedNetworkOpenShiftSDN)
if m.doc.OpenShiftCluster.Properties.NetworkProfile.SoftwareDefinedNetwork != "" {
SoftwareDefinedNetwork = string(m.doc.OpenShiftCluster.Properties.NetworkProfile.SoftwareDefinedNetwork)
}
installConfig := &installconfig.InstallConfig{
Config: &types.InstallConfig{
TypeMeta: metav1.TypeMeta{
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: domain[:strings.IndexByte(domain, '.')],
},
SSHKey: sshkey.Type() + " " + base64.StdEncoding.EncodeToString(sshkey.Marshal()),
BaseDomain: domain[strings.IndexByte(domain, '.')+1:],
Networking: &types.Networking{
MachineNetwork: []types.MachineNetworkEntry{
{
CIDR: *ipnet.MustParseCIDR("127.0.0.0/8"), // dummy
},
},
NetworkType: SoftwareDefinedNetwork,
ClusterNetwork: []types.ClusterNetworkEntry{
{
CIDR: *ipnet.MustParseCIDR(m.doc.OpenShiftCluster.Properties.NetworkProfile.PodCIDR),
HostPrefix: 23,
},
},
ServiceNetwork: []ipnet.IPNet{
*ipnet.MustParseCIDR(m.doc.OpenShiftCluster.Properties.NetworkProfile.ServiceCIDR),
},
},
ControlPlane: &types.MachinePool{
Name: "master",
Replicas: to.Int64Ptr(3),
Platform: types.MachinePoolPlatform{
Azure: &azuretypes.MachinePool{
Zones: masterZones,
InstanceType: string(m.doc.OpenShiftCluster.Properties.MasterProfile.VMSize),
EncryptionAtHost: m.doc.OpenShiftCluster.Properties.MasterProfile.EncryptionAtHost == api.EncryptionAtHostEnabled,
OSDisk: azuretypes.OSDisk{
DiskEncryptionSetID: m.doc.OpenShiftCluster.Properties.MasterProfile.DiskEncryptionSetID,
DiskSizeGB: 1024,
},
},
},
Hyperthreading: "Enabled",
Architecture: types.ArchitectureAMD64,
},
Compute: []types.MachinePool{
{
Name: m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].Name,
Replicas: to.Int64Ptr(int64(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].Count)),
Platform: types.MachinePoolPlatform{
Azure: &azuretypes.MachinePool{
Zones: workerZones,
InstanceType: string(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].VMSize),
EncryptionAtHost: m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].EncryptionAtHost == api.EncryptionAtHostEnabled,
OSDisk: azuretypes.OSDisk{
DiskEncryptionSetID: m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].DiskEncryptionSetID,
DiskSizeGB: int32(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].DiskSizeGB),
},
},
},
Hyperthreading: "Enabled",
Architecture: types.ArchitectureAMD64,
},
},
Platform: types.Platform{
Azure: &azuretypes.Platform{
Region: strings.ToLower(m.doc.OpenShiftCluster.Location), // Used in k8s object names, so must pass DNS-1123 validation
NetworkResourceGroupName: vnetr.ResourceGroup,
VirtualNetwork: vnetr.ResourceName,
ControlPlaneSubnet: masterSubnetName,
ComputeSubnet: workerSubnetName,
CloudName: azuretypes.CloudEnvironment(m.env.Environment().Name),
OutboundType: azuretypes.LoadbalancerOutboundType,
ResourceGroupName: resourceGroup,
},
},
PullSecret: pullSecret,
FIPS: m.doc.OpenShiftCluster.Properties.ClusterProfile.FipsValidatedModules == api.FipsValidatedModulesEnabled,
ImageContentSources: []types.ImageContentSource{
{
Source: "quay.io/openshift-release-dev/ocp-release",
Mirrors: []string{
fmt.Sprintf("%s/openshift-release-dev/ocp-release", m.env.ACRDomain()),
},
},
{
Source: "quay.io/openshift-release-dev/ocp-release-nightly",
Mirrors: []string{
fmt.Sprintf("%s/openshift-release-dev/ocp-release-nightly", m.env.ACRDomain()),
},
},
{
Source: "quay.io/openshift-release-dev/ocp-v4.0-art-dev",
Mirrors: []string{
fmt.Sprintf("%s/openshift-release-dev/ocp-v4.0-art-dev", m.env.ACRDomain()),
},
},
},
Publish: types.ExternalPublishingStrategy,
},
Azure: icazure.NewMetadataWithCredentials(
azuretypes.CloudEnvironment(m.env.Environment().Name),
m.env.Environment().ResourceManagerEndpoint,
&icazure.Credentials{
TenantID: m.subscriptionDoc.Subscription.Properties.TenantID,
ClientID: m.doc.OpenShiftCluster.Properties.ServicePrincipalProfile.ClientID,
ClientSecret: string(m.doc.OpenShiftCluster.Properties.ServicePrincipalProfile.ClientSecret),
SubscriptionID: r.SubscriptionID,
},
),
}
if m.doc.OpenShiftCluster.Properties.IngressProfiles[0].Visibility == api.VisibilityPrivate {
installConfig.Config.Publish = types.InternalPublishingStrategy
}
installConfig.Config.Azure.Image, err = rhcos.Image(ctx)
if err != nil {
return nil, nil, err
}
image := &releaseimage.Image{}
if m.doc.OpenShiftCluster.Properties.ClusterProfile.Version == version.InstallStream.Version.String() {
image.PullSpec = version.InstallStream.PullSpec
} else {
return nil, nil, fmt.Errorf("unimplemented version %q", m.doc.OpenShiftCluster.Properties.ClusterProfile.Version)
}
err = validation.ValidateInstallConfig(installConfig.Config).ToAggregate()
if err != nil {
return nil, nil, err
}
return installConfig, image, err
}
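
generateInstallConfig turns the cluster's stored RSA private key into an authorized_keys-style public key string. A minimal sketch, with a freshly generated key standing in for the one persisted in the cluster document:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Stand-in for the RSA key persisted in the cluster document.
	privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	sshkey, err := ssh.NewPublicKey(&privateKey.PublicKey)
	if err != nil {
		panic(err)
	}
	// Same formula as the InstallConfig SSHKey field: "ssh-rsa AAAA...".
	fmt.Println(sshkey.Type() + " " + base64.StdEncoding.EncodeToString(sshkey.Marshal()))
}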


@ -10,16 +10,17 @@ import (
configclient "github.com/openshift/client-go/config/clientset/versioned"
imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
operatorclient "github.com/openshift/client-go/operator/clientset/versioned"
samplesclient "github.com/openshift/client-go/samples/clientset/versioned"
securityclient "github.com/openshift/client-go/security/clientset/versioned"
"github.com/openshift/installer/pkg/asset/installconfig"
"github.com/openshift/installer/pkg/asset/releaseimage"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
"k8s.io/client-go/kubernetes"
"github.com/Azure/ARO-RP/pkg/api"
"github.com/Azure/ARO-RP/pkg/installer"
aroclient "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned"
"github.com/Azure/ARO-RP/pkg/operator/deploy"
"github.com/Azure/ARO-RP/pkg/util/restconfig"
@ -30,7 +31,7 @@ import (
// AdminUpdate performs an admin update of an ARO cluster
func (m *manager) AdminUpdate(ctx context.Context) error {
toRun := m.adminUpdate()
return m.runSteps(ctx, toRun, false)
return m.runSteps(ctx, toRun)
}
func (m *manager) adminUpdate() []steps.Step {
@ -43,6 +44,7 @@ func (m *manager) adminUpdate() []steps.Step {
// don't require a running cluster
toRun := []steps.Step{
steps.Action(m.initializeKubernetesClients), // must be first
steps.Action(m.initializeOperatorDeployer), // depends on kube clients
steps.Action(m.ensureBillingRecord), // belt and braces
steps.Action(m.ensureDefaults),
steps.Action(m.fixupClusterSPObjectID),
@ -100,7 +102,8 @@ func (m *manager) adminUpdate() []steps.Step {
if isEverything {
toRun = append(toRun,
steps.Action(m.populateRegistryStorageAccountName),
steps.Action(m.ensureMTUSize),
steps.Action(m.populateCreatedAt), // TODO(mikalai): Remove after a round of admin updates
)
}
@ -120,17 +123,7 @@ func (m *manager) adminUpdate() []steps.Step {
toRun = append(toRun,
steps.Action(m.ensureAROOperator),
steps.Condition(m.aroDeploymentReady, 20*time.Minute, true),
steps.Condition(m.ensureAROOperatorRunningDesiredVersion, 5*time.Minute, true),
)
}
// Hive cluster adoption and reconciliation
if isEverything && m.adoptViaHive {
toRun = append(toRun,
steps.Action(m.hiveCreateNamespace),
steps.Action(m.hiveEnsureResources),
steps.Condition(m.hiveClusterDeploymentReady, 5*time.Minute, false),
steps.Action(m.hiveResetCorrelationData),
steps.Action(m.ensureAROOperatorRunningDesiredVersion),
)
}
@ -146,7 +139,7 @@ func (m *manager) adminUpdate() []steps.Step {
}
func (m *manager) Update(ctx context.Context) error {
s := []steps.Step{
steps := []steps.Step{
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.validateResources)),
steps.Action(m.initializeKubernetesClients), // All init steps are first
steps.Action(m.initializeOperatorDeployer), // depends on kube clients
@ -155,126 +148,62 @@ func (m *manager) Update(ctx context.Context) error {
// credentials rotation flow steps
steps.Action(m.createOrUpdateClusterServicePrincipalRBAC),
steps.Action(m.createOrUpdateDenyAssignment),
steps.Action(m.startVMs),
steps.Condition(m.apiServersReady, 30*time.Minute, true),
steps.Action(m.configureAPIServerCertificate),
steps.Action(m.configureIngressCertificate),
steps.Action(m.updateOpenShiftSecret),
steps.Action(m.updateAROSecret),
}
if m.adoptViaHive {
s = append(s,
// Hive reconciliation: we mostly need it to make sure that
// hive has the latest credentials after rotation.
steps.Action(m.hiveCreateNamespace),
steps.Action(m.hiveEnsureResources),
steps.Condition(m.hiveClusterDeploymentReady, 5*time.Minute, true),
steps.Action(m.hiveResetCorrelationData),
)
}
return m.runSteps(ctx, s, false)
}
func (m *manager) runIntegratedInstaller(ctx context.Context) error {
version, err := m.openShiftVersionFromVersion(ctx)
if err != nil {
return err
}
i := installer.NewInstaller(m.log, m.env, m.doc.ID, m.doc.OpenShiftCluster, m.subscriptionDoc.Subscription, version, m.fpAuthorizer, m.deployments, m.graph)
return i.Install(ctx)
}
func (m *manager) runHiveInstaller(ctx context.Context) error {
version, err := m.openShiftVersionFromVersion(ctx)
if err != nil {
return err
}
// Run installer. For M5/M6 we will persist the graph inside the installer
// code since it's easier, but in the future, this data should be collected
// from Hive's outputs where needed.
return m.hiveClusterManager.Install(ctx, m.subscriptionDoc, m.doc, version)
}
func (m *manager) bootstrap() []steps.Step {
s := []steps.Step{
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.validateResources)),
steps.Action(m.ensureACRToken),
steps.Action(m.ensureInfraID),
steps.Action(m.ensureSSHKey),
steps.Action(m.ensureStorageSuffix),
steps.Action(m.populateMTUSize),
steps.Action(m.createDNS),
steps.Action(m.initializeClusterSPClients), // must run before clusterSPObjectID
steps.Action(m.clusterSPObjectID),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.ensureResourceGroup)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.enableServiceEndpoints)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.setMasterSubnetPolicies)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.deployStorageTemplate)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.attachNSGs)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.updateAPIIPEarly)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.createOrUpdateRouterIPEarly)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.ensureGatewayCreate)),
steps.Action(m.createAPIServerPrivateEndpoint),
steps.Action(m.createCertificates),
}
if m.adoptViaHive || m.installViaHive {
// We will always need a Hive namespace, whether we are installing
// via Hive or adopting
s = append(s, steps.Action(m.hiveCreateNamespace))
}
if m.installViaHive {
s = append(s,
steps.Action(m.runHiveInstaller),
// Give Hive 60 minutes to install the cluster, since this includes
// all of bootstrapping being complete
steps.Condition(m.hiveClusterInstallationComplete, 60*time.Minute, true),
steps.Condition(m.hiveClusterDeploymentReady, 5*time.Minute, true),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.generateKubeconfigs)),
)
} else {
s = append(s,
steps.Action(m.runIntegratedInstaller),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.generateKubeconfigs)),
)
if m.adoptViaHive {
s = append(s,
steps.Action(m.hiveEnsureResources),
steps.Condition(m.hiveClusterDeploymentReady, 5*time.Minute, true),
)
}
}
if m.adoptViaHive || m.installViaHive {
s = append(s,
// Reset correlation data whether adopting or installing via Hive
steps.Action(m.hiveResetCorrelationData),
)
}
s = append(s,
steps.Action(m.ensureBillingRecord),
steps.Action(m.initializeKubernetesClients),
steps.Action(m.initializeOperatorDeployer), // depends on kube clients
steps.Condition(m.apiServersReady, 30*time.Minute, true),
steps.Action(m.ensureAROOperator),
steps.Action(m.incrInstallPhase),
)
return s
return m.runSteps(ctx, steps)
}
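
The install and update flows are composed from two step kinds: Actions run once, Conditions poll until they succeed or their timeout expires. A minimal sketch of the pattern (a simplified stand-in, not the real pkg/util/steps API):

package main

import (
	"context"
	"fmt"
	"time"
)

type step func(context.Context) error

// action runs exactly once.
func action(f func(context.Context) error) step { return f }

// condition polls f until it returns true, errors, or the timeout elapses.
func condition(f func(context.Context) (bool, error), timeout time.Duration) step {
	return func(ctx context.Context) error {
		ctx, cancel := context.WithTimeout(ctx, timeout)
		defer cancel()
		for {
			ok, err := f(ctx)
			if err != nil || ok {
				return err
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(time.Second):
			}
		}
	}
}

func main() {
	s := []step{
		action(func(context.Context) error { fmt.Println("ensureResourceGroup"); return nil }),
		condition(func(context.Context) (bool, error) { return true, nil }, time.Minute),
	}
	for _, f := range s {
		if err := f(context.Background()); err != nil {
			panic(err)
		}
	}
}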
// Install installs an ARO cluster
func (m *manager) Install(ctx context.Context) error {
var (
installConfig *installconfig.InstallConfig
image *releaseimage.Image
)
steps := map[api.InstallPhase][]steps.Step{
api.InstallPhaseBootstrap: m.bootstrap(),
api.InstallPhaseBootstrap: {
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.validateResources)),
steps.Action(m.ensureACRToken),
steps.Action(m.generateSSHKey),
steps.Action(m.generateFIPSMode),
steps.Action(func(ctx context.Context) error {
var err error
installConfig, image, err = m.generateInstallConfig(ctx)
return err
}),
steps.Action(m.createDNS),
steps.Action(m.initializeClusterSPClients), // must run before clusterSPObjectID
steps.Action(m.clusterSPObjectID),
steps.Action(func(ctx context.Context) error {
return m.ensureInfraID(ctx, installConfig)
}),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.ensureResourceGroup)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.enableServiceEndpoints)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.setMasterSubnetPolicies)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(func(ctx context.Context) error {
return m.deployStorageTemplate(ctx, installConfig)
})),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.updateAPIIPEarly)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.createOrUpdateRouterIPEarly)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.ensureGatewayCreate)),
steps.Action(func(ctx context.Context) error {
return m.ensureGraph(ctx, installConfig, image)
}),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.attachNSGs)),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.generateKubeconfigs)),
steps.Action(m.ensureBillingRecord),
steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.deployResourceTemplate)),
steps.Action(m.createAPIServerPrivateEndpoint),
steps.Action(m.createCertificates),
steps.Action(m.initializeKubernetesClients),
steps.Action(m.initializeOperatorDeployer), // depends on kube clients
steps.Condition(m.bootstrapConfigMapReady, 30*time.Minute, true),
steps.Action(m.ensureAROOperator),
steps.Action(m.incrInstallPhase),
},
api.InstallPhaseRemoveBootstrap: {
steps.Action(m.initializeKubernetesClients),
steps.Action(m.initializeOperatorDeployer), // depends on kube clients
@ -286,11 +215,11 @@ func (m *manager) Install(ctx context.Context) error {
steps.Condition(m.operatorConsoleExists, 30*time.Minute, true),
steps.Action(m.updateConsoleBranding),
steps.Condition(m.operatorConsoleReady, 20*time.Minute, true),
steps.Action(m.disableSamples),
steps.Action(m.disableOperatorHubSources),
steps.Action(m.disableUpdates),
steps.Condition(m.clusterVersionReady, 30*time.Minute, true),
steps.Condition(m.aroDeploymentReady, 20*time.Minute, true),
steps.Action(m.disableUpdates),
steps.Action(m.disableSamples),
steps.Action(m.disableOperatorHubSources),
steps.Action(m.updateClusterData),
steps.Action(m.configureIngressCertificate),
steps.Condition(m.ingressControllerReady, 30*time.Minute, true),
@ -308,25 +237,11 @@ func (m *manager) Install(ctx context.Context) error {
return fmt.Errorf("unrecognised phase %s", m.doc.OpenShiftCluster.Properties.Install.Phase)
}
m.log.Printf("starting phase %s", m.doc.OpenShiftCluster.Properties.Install.Phase)
return m.runSteps(ctx, steps[m.doc.OpenShiftCluster.Properties.Install.Phase], true)
return m.runSteps(ctx, steps[m.doc.OpenShiftCluster.Properties.Install.Phase])
}
func (m *manager) runSteps(ctx context.Context, s []steps.Step, emitMetrics bool) error {
var err error
if emitMetrics {
var stepsTimeRun map[string]int64
stepsTimeRun, err = steps.Run(ctx, m.log, 10*time.Second, s, m.now)
if err == nil {
var totalInstallTime int64
for topic, duration := range stepsTimeRun {
m.metricsEmitter.EmitGauge(fmt.Sprintf("backend.openshiftcluster.installtime.%s", topic), duration, nil)
totalInstallTime += duration
}
m.metricsEmitter.EmitGauge("backend.openshiftcluster.installtime.total", totalInstallTime, nil)
}
} else {
_, err = steps.Run(ctx, m.log, 10*time.Second, s, nil)
}
func (m *manager) runSteps(ctx context.Context, s []steps.Step) error {
err := steps.Run(ctx, m.log, 10*time.Second, s)
if err != nil {
m.gatherFailureLogs(ctx)
}
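
The metrics branch of the newer runSteps aggregates per-step durations into a total install-time gauge. A standalone sketch of that aggregation with hypothetical step names and timings:

package main

import "fmt"

// emitGauge stands in for m.metricsEmitter.EmitGauge.
func emitGauge(name string, value int64) {
	fmt.Printf("%s=%d\n", name, value)
}

func main() {
	// Hypothetical per-step durations in seconds.
	stepsTimeRun := map[string]int64{
		"deployStorageTemplate": 182,
		"attachNSGs":            12,
	}
	var totalInstallTime int64
	for topic, duration := range stepsTimeRun {
		emitGauge(fmt.Sprintf("backend.openshiftcluster.installtime.%s", topic), duration)
		totalInstallTime += duration
	}
	emitGauge("backend.openshiftcluster.installtime.total", totalInstallTime)
}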
@ -337,11 +252,7 @@ func (m *manager) startInstallation(ctx context.Context) error {
var err error
m.doc, err = m.db.PatchWithLease(ctx, m.doc.Key, func(doc *api.OpenShiftClusterDocument) error {
if doc.OpenShiftCluster.Properties.Install == nil {
// set the install time which is used for the SAS token with which
// the bootstrap node retrieves its ignition payload
doc.OpenShiftCluster.Properties.Install = &api.Install{
Now: time.Now().UTC(),
}
doc.OpenShiftCluster.Properties.Install = &api.Install{}
}
return nil
})
@ -384,7 +295,7 @@ func (m *manager) initializeKubernetesClients(ctx context.Context) error {
return err
}
m.maocli, err = machineclient.NewForConfig(restConfig)
m.maocli, err = maoclient.NewForConfig(restConfig)
if err != nil {
return err
}


@ -5,8 +5,10 @@ package cluster
import (
"context"
"fmt"
"strings"
mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"
"github.com/Azure/go-autorest/autorest/to"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/Azure/ARO-RP/pkg/api"
@ -14,14 +16,59 @@ import (
"github.com/Azure/ARO-RP/pkg/util/stringutils"
)
// enableServiceEndpoints should enable service endpoints on
// subnets for storage account access
func (m *manager) enableServiceEndpoints(ctx context.Context) error {
subnets := []string{
m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID,
}
for _, wp := range m.doc.OpenShiftCluster.Properties.WorkerProfiles {
subnets = append(subnets, wp.SubnetID)
}
for _, subnetId := range subnets {
subnet, err := m.subnet.Get(ctx, subnetId)
if err != nil {
return err
}
var changed bool
for _, endpoint := range api.SubnetsEndpoints {
var found bool
if subnet != nil && subnet.ServiceEndpoints != nil {
for _, se := range *subnet.ServiceEndpoints {
if strings.EqualFold(*se.Service, endpoint) &&
se.ProvisioningState == mgmtnetwork.Succeeded {
found = true
}
}
}
if !found {
if subnet.ServiceEndpoints == nil {
subnet.ServiceEndpoints = &[]mgmtnetwork.ServiceEndpointPropertiesFormat{}
}
*subnet.ServiceEndpoints = append(*subnet.ServiceEndpoints, mgmtnetwork.ServiceEndpointPropertiesFormat{
Service: to.StringPtr(endpoint),
Locations: &[]string{"*"},
})
changed = true
}
}
if changed {
err := m.subnet.CreateOrUpdate(ctx, subnetId, subnet)
if err != nil {
return err
}
}
}
return nil
}
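
enableServiceEndpoints only appends an endpoint and writes the subnet back when something is missing, so repeated runs are no-ops. A reduced sketch of the case-insensitive membership check, using plain strings instead of the SDK types:

package main

import (
	"fmt"
	"strings"
)

// hasEndpoint mirrors the check above: case-insensitive match against
// the endpoints already present on the subnet.
func hasEndpoint(existing []string, service string) bool {
	for _, se := range existing {
		if strings.EqualFold(se, service) {
			return true
		}
	}
	return false
}

func main() {
	existing := []string{"Microsoft.ContainerRegistry"}
	for _, endpoint := range []string{"Microsoft.ContainerRegistry", "Microsoft.Storage"} {
		if !hasEndpoint(existing, endpoint) {
			existing = append(existing, endpoint)
			fmt.Println("added", endpoint) // only Microsoft.Storage is added
		}
	}
}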
// migrateStorageAccounts redeploys storage accounts with firewall rules preventing external access
// The encryption flag is set to false/disabled for legacy storage accounts.
func (m *manager) migrateStorageAccounts(ctx context.Context) error {
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
if len(m.doc.OpenShiftCluster.Properties.WorkerProfiles) == 0 {
m.log.Error("skipping migrateStorageAccounts due to missing WorkerProfiles.")
return nil
}
clusterStorageAccountName := "cluster" + m.doc.OpenShiftCluster.Properties.StorageSuffix
registryStorageAccountName := m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName
@ -34,7 +81,7 @@ func (m *manager) migrateStorageAccounts(ctx context.Context) error {
},
}
return arm.DeployTemplate(ctx, m.log, m.deployments, resourceGroup, "storage", t, nil)
return m.deployARMTemplate(ctx, resourceGroup, "storage", t, nil)
}
func (m *manager) populateRegistryStorageAccountName(ctx context.Context) error {
@ -48,10 +95,6 @@ func (m *manager) populateRegistryStorageAccountName(ctx context.Context) error
}
m.doc, err = m.db.PatchWithLease(ctx, m.doc.Key, func(doc *api.OpenShiftClusterDocument) error {
if rc.Spec.Storage.Azure == nil {
return fmt.Errorf("azure storage field is nil in image registry config")
}
doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName = rc.Spec.Storage.Azure.AccountName
return nil
})

[Several file diffs are hidden because one or more lines are too long; among them is the new 681-line file pkg/deploy/bindata.go.]


@ -38,7 +38,7 @@ func (g *generator) devProxyVMSS() *arm.Resource {
)
}
trailer := base64.StdEncoding.EncodeToString([]byte(`yum -y update
trailer := base64.StdEncoding.EncodeToString([]byte(`yum -y update -x WALinuxAgent
yum -y install docker
firewall-cmd --add-port=443/tcp --permanent
@ -87,13 +87,6 @@ EOF
systemctl enable proxy.service
cat >/etc/cron.weekly/yumupdate <<'EOF'
#!/bin/bash
yum update -y
EOF
chmod +x /etc/cron.weekly/yumupdate
(sleep 30; reboot) &
`))
@ -312,50 +305,16 @@ func (g *generator) devCIPool() *arm.Resource {
sleep 60
for attempt in {1..5}; do
yum -y update && break
yum -y update -x WALinuxAgent && break
if [[ ${attempt} -lt 5 ]]; then sleep 10; else exit 1; fi
done
DEVICE_PARTITION=$(pvs | grep '/dev/' | awk '{print $1}' | grep -oP '[a-z]{3}[0-9]$')
DEVICE=$(echo $DEVICE_PARTITION | grep -oP '^[a-z]{3}')
PARTITION=$(echo $DEVICE_PARTITION | grep -oP '[0-9]$')
# Fix the "GPT PMBR size mismatch (134217727 != 268435455)"
echo "w" | fdisk /dev/${DEVICE}
# Steps from https://access.redhat.com/solutions/5808001
# 1. Delete the LVM partition "d\n2\n"
# 2. Recreate the partition "n\n2\n"
# 3. Accept the default start and end sectors (2 x \n)
# 4. LVM2_member signature remains by default
# 5. Change type to Linux LVM "t\n2\n31\n
# 6. Write new table "w\n"
fdisk /dev/${DEVICE} <<EOF
d
${PARTITION}
n
${PARTITION}
t
${PARTITION}
31
w
EOF
partx -u /dev/${DEVICE}
pvresize /dev/${DEVICE_PARTITION}
lvextend -l +50%FREE /dev/rootvg/homelv
xfs_growfs /home
lvextend -l +50%FREE /dev/rootvg/tmplv
xfs_growfs /tmp
lvextend -l +100%FREE /dev/rootvg/varlv
lvextend -l +50%FREE /dev/rootvg/varlv
xfs_growfs /var
lvextend -l +100%FREE /dev/rootvg/homelv
xfs_growfs /home
rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8
rpm --import https://packages.microsoft.com/keys/microsoft.asc
@ -369,16 +328,16 @@ enabled=yes
gpgcheck=yes
EOF
yum -y install azure-cli podman podman-docker jq gcc gpgme-devel libassuan-devel git make tmpwatch python3-devel htop go-toolset-1.17.12-1.module+el8.6.0+16014+a372c00b openvpn
yum -y install azure-cli podman podman-docker jq gcc gpgme-devel libassuan-devel git make tmpwatch python3-devel go-toolset-1.16.12-1.module+el8.5.0+13637+960c7771
# Suppress emulation output for podman instead of docker for az acr compatibility
mkdir -p /etc/containers/
touch /etc/containers/nodocker
VSTS_AGENT_VERSION=2.206.1
VSTS_AGENT_VERSION=2.193.1
mkdir /home/cloud-user/agent
pushd /home/cloud-user/agent
curl -s https://vstsagentpackage.azureedge.net/agent/${VSTS_AGENT_VERSION}/vsts-agent-linux-x64-${VSTS_AGENT_VERSION}.tar.gz | tar -xz
curl https://vstsagentpackage.azureedge.net/agent/${VSTS_AGENT_VERSION}/vsts-agent-linux-x64-${VSTS_AGENT_VERSION}.tar.gz | tar -xz
chown -R cloud-user:cloud-user .
./bin/installdependencies.sh
@ -390,21 +349,13 @@ cat >/home/cloud-user/agent/.path <<'EOF'
/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/cloud-user/.local/bin:/home/cloud-user/bin
EOF
# Set the agent's "System capabilities" for tests (go-1.17 and GOLANG_FIPS) in the agent's .env file
# and add a HACK for XDG_RUNTIME_DIR: https://github.com/containers/podman/issues/427
# HACK for XDG_RUNTIME_DIR: https://github.com/containers/podman/issues/427
cat >/home/cloud-user/agent/.env <<'EOF'
go-1.17=true
go-1.16=true
GOLANG_FIPS=1
XDG_RUNTIME_DIR=/run/user/1000
EOF
cat >/etc/cron.weekly/yumupdate <<'EOF'
#!/bin/bash
yum update -y
EOF
chmod +x /etc/cron.weekly/yumupdate
cat >/etc/cron.hourly/tmpwatch <<'EOF'
#!/bin/bash
@ -494,7 +445,6 @@ rm cron
ManagedDisk: &mgmtcompute.VirtualMachineScaleSetManagedDiskParameters{
StorageAccountType: mgmtcompute.StorageAccountTypesPremiumLRS,
},
DiskSizeGB: to.Int32Ptr(200),
},
},
NetworkProfile: &mgmtcompute.VirtualMachineScaleSetNetworkProfile{
@ -559,11 +509,11 @@ rm cron
}
const (
sharedKeyVaultName = "concat(take(resourceGroup().name,10), '" + SharedKeyVaultNameSuffix + "')"
sharedKeyVaultName = "concat(take(resourceGroup().name,15), '" + SharedKeyVaultNameSuffix + "')"
sharedDiskEncryptionSetName = "concat(resourceGroup().name, '" + SharedDiskEncryptionSetNameSuffix + "')"
sharedDiskEncryptionKeyName = "concat(resourceGroup().name, '-disk-encryption-key')"
// Conflicts with the current development subscription; cannot have two keyvaults with the same name
SharedKeyVaultNameSuffix = "-dev-sharedKV"
SharedKeyVaultNameSuffix = "-sharedKV"
SharedDiskEncryptionSetNameSuffix = "-disk-encryption-set"
)


@ -196,14 +196,12 @@ func (g *generator) gatewayVMSS() *arm.Resource {
for _, variable := range []string{
"acrResourceId",
"azureCloudName",
"azureSecPackQualysUrl",
"azureSecPackVSATenantId",
"databaseAccountName",
"dbtokenClientId",
"dbtokenUrl",
"mdmFrontendUrl",
"mdsdEnvironment",
"fluentbitImage",
"gatewayMdsdConfigVersion",
"gatewayDomains",
"gatewayFeatures",
@ -257,6 +255,7 @@ rm -f /var/lib/rpm/__db*
rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
rpm --import https://packages.microsoft.com/keys/microsoft.asc
rpm --import https://packages.fluentbit.io/fluentbit.key
for attempt in {1..5}; do
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && break
@ -277,11 +276,16 @@ enabled=yes
gpgcheck=no
EOF
semanage fcontext -a -t var_log_t "/var/log/journal(/.*)?"
mkdir -p /var/log/journal
cat >/etc/yum.repos.d/td-agent-bit.repo <<'EOF'
[td-agent-bit]
name=td-agent-bit
baseurl=https://packages.fluentbit.io/centos/7/$basearch
enabled=yes
gpgcheck=yes
EOF
for attempt in {1..5}; do
yum --enablerepo=rhui-rhel-7-server-rhui-optional-rpms -y install clamav azsec-clamav azsec-monitor azure-cli azure-mdsd azure-security docker openssl-perl python3 && break
yum --enablerepo=rhui-rhel-7-server-rhui-optional-rpms -y install clamav azsec-clamav azsec-monitor azure-cli azure-mdsd azure-security docker openssl-perl td-agent-bit python3 && break
# hack - we are installing python3 on hosts due to an issue with Azure Linux Extensions https://github.com/Azure/azure-linux-extensions/pull/1505
if [[ ${attempt} -lt 5 ]]; then sleep 10; else exit 1; fi
done
@ -301,30 +305,7 @@ sysctl --system
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --add-port=443/tcp --permanent
export AZURE_CLOUD_NAME=$AZURECLOUDNAME
az login -i --allow-no-subscriptions
# The managed identity that the VM runs as only has a single roleassignment.
# This role assignment is ACRPull which is not necessarily present in the
# subscription we're deploying into. If the identity does not have any
# role assignments scoped on the subscription we're deploying into, it will
# not show on az login -i, which is why the below line is commented.
# az account set -s "$SUBSCRIPTIONID"
systemctl start docker.service
az acr login --name "$(sed -e 's|.*/||' <<<"$ACRRESOURCEID")"
MDMIMAGE="${RPIMAGE%%/*}/${MDMIMAGE##*/}"
docker pull "$MDMIMAGE"
docker pull "$RPIMAGE"
docker pull "$FLUENTBITIMAGE"
az logout
mkdir -p /etc/fluentbit/
mkdir -p /var/lib/fluent
cat >/etc/fluentbit/fluentbit.conf <<'EOF'
cat >/etc/td-agent-bit/td-agent-bit.conf <<'EOF'
[INPUT]
Name systemd
Tag journald
@ -342,42 +323,24 @@ cat >/etc/fluentbit/fluentbit.conf <<'EOF'
Port 29230
EOF
echo "FLUENTBITIMAGE=$FLUENTBITIMAGE" >/etc/sysconfig/fluentbit
export AZURE_CLOUD_NAME=$AZURECLOUDNAME
az login -i --allow-no-subscriptions
cat >/etc/systemd/system/fluentbit.service <<'EOF'
[Unit]
After=docker.service
Requires=docker.service
StartLimitIntervalSec=0
# The managed identity that the VM runs as only has a single roleassignment.
# This role assignment is ACRPull which is not necessarily present in the
# subscription we're deploying into. If the identity does not have any
# role assignments scoped on the subscription we're deploying into, it will
# not show on az login -i, which is why the below line is commented.
# az account set -s "$SUBSCRIPTIONID"
[Service]
RestartSec=1s
EnvironmentFile=/etc/sysconfig/fluentbit
ExecStartPre=-/usr/bin/docker rm -f %N
ExecStart=/usr/bin/docker run \
--security-opt label=disable \
--entrypoint /opt/td-agent-bit/bin/td-agent-bit \
--net=host \
--hostname %H \
--name %N \
--rm \
--cap-drop net_raw \
-v /etc/fluentbit/fluentbit.conf:/etc/fluentbit/fluentbit.conf \
-v /var/lib/fluent:/var/lib/fluent:z \
-v /var/log/journal:/var/log/journal:ro \
-v /run/log/journal:/run/log/journal:ro \
-v /etc/machine-id:/etc/machine-id:ro \
$FLUENTBITIMAGE \
-c /etc/fluentbit/fluentbit.conf
systemctl start docker.service
az acr login --name "$(sed -e 's|.*/||' <<<"$ACRRESOURCEID")"
ExecStop=/usr/bin/docker stop %N
Restart=always
RestartSec=5
StartLimitInterval=0
MDMIMAGE="${RPIMAGE%%/*}/${MDMIMAGE##*/}"
docker pull "$MDMIMAGE"
docker pull "$RPIMAGE"
[Install]
WantedBy=multi-user.target
EOF
az logout
cat >/etc/sysconfig/mdm <<EOF
MDMFRONTENDURL='$MDMFRONTENDURL'
@ -604,8 +567,6 @@ export MONITORING_USE_GENEVA_CONFIG_SERVICE=true
export MONITORING_TENANT='$LOCATION'
export MONITORING_ROLE=gateway
export MONITORING_ROLE_INSTANCE='$(hostname)'
export MDSD_MSGPACK_SORT_COLUMNS=1
EOF
# setting MONITORING_GCS_AUTH_ID_TYPE=AuthKeyVault seems to have caused mdsd not
@ -622,7 +583,6 @@ cat >/etc/default/vsa-nodescan-agent.config <<EOF
"Timeout": 10800,
"ClientId": "",
"TenantId": "$AZURESECPACKVSATENANTID",
"QualysStoreBaseUrl": "$AZURESECPACKQUALYSURL",
"ProcessTimeout": 300,
"CommandDelay": 0
}
@ -638,7 +598,7 @@ PATH=/bin
0 * * * * root chown syslog:syslog /var/opt/microsoft/linuxmonagent/eh/EventNotice/arorplogs*
EOF
for service in aro-gateway auoms azsecd azsecmond mdsd mdm chronyd fluentbit; do
for service in aro-gateway auoms azsecd azsecmond mdsd mdm chronyd td-agent-bit; do
systemctl enable $service.service
done
@ -646,11 +606,6 @@ for scan in baseline clamav software; do
/usr/local/bin/azsecd config -s $scan -d P1D
done
# We need to manually set PasswordAuthentication to true in order for the VMSS Access JIT to work
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
restorecon -RF /var/log/*
(sleep 30; reboot) &
`))
@ -665,9 +620,6 @@ restorecon -RF /var/log/*
Tier: to.StringPtr("Standard"),
Capacity: to.Int64Ptr(1339),
},
Tags: map[string]*string{
"SkipLinuxAzSecPack": to.StringPtr("true"),
},
VirtualMachineScaleSetProperties: &mgmtcompute.VirtualMachineScaleSetProperties{
UpgradePolicy: &mgmtcompute.UpgradePolicy{
Mode: mgmtcompute.UpgradeModeManual,


@ -1061,8 +1061,6 @@ export MONITORING_USE_GENEVA_CONFIG_SERVICE=true
export MONITORING_TENANT='$LOCATION'
export MONITORING_ROLE=rp
export MONITORING_ROLE_INSTANCE='$(hostname)'
export MDSD_MSGPACK_SORT_COLUMNS=1
EOF
# setting MONITORING_GCS_AUTH_ID_TYPE=AuthKeyVault seems to have caused mdsd not
@ -1085,8 +1083,8 @@ cat >/etc/default/vsa-nodescan-agent.config <<EOF
}
EOF
# we start a cron job to run every hour to ensure the said directory is accessible
# by the correct user as it gets created by root and may cause a race condition
# where root owns the dir instead of syslog
# TODO: https://msazure.visualstudio.com/AzureRedHatOpenShift/_workitems/edit/12591207
cat >/etc/cron.d/mdsd-chown-workaround <<EOF
@ -1103,9 +1101,6 @@ for scan in baseline clamav software; do
/usr/local/bin/azsecd config -s $scan -d P1D
done
# We need to manually set PasswordAuthentication to true in order for the VMSS Access JIT to work
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
restorecon -RF /var/log/*
(sleep 30; reboot) &
`))
@ -1121,9 +1116,6 @@ restorecon -RF /var/log/*
Tier: to.StringPtr("Standard"),
Capacity: to.Int64Ptr(1338),
},
Tags: map[string]*string{
"SkipLinuxAzSecPack": to.StringPtr("true"),
},
VirtualMachineScaleSetProperties: &mgmtcompute.VirtualMachineScaleSetProperties{
UpgradePolicy: &mgmtcompute.UpgradePolicy{
Mode: mgmtcompute.UpgradeModeManual,


@ -10,7 +10,6 @@ import (
"log"
"net"
"net/http"
"sync"
"sync/atomic"
"time"
@ -31,7 +30,6 @@ import (
"github.com/Azure/ARO-RP/pkg/util/encryption"
"github.com/Azure/ARO-RP/pkg/util/heartbeat"
"github.com/Azure/ARO-RP/pkg/util/recover"
"github.com/Azure/ARO-RP/pkg/util/version"
)
type statusCodeError int
@ -44,26 +42,20 @@ type kubeActionsFactory func(*logrus.Entry, env.Interface, *api.OpenShiftCluster
type azureActionsFactory func(*logrus.Entry, env.Interface, *api.OpenShiftCluster, *api.SubscriptionDocument) (adminactions.AzureActions, error)
type ocEnricherFactory func(log *logrus.Entry, dialer proxy.Dialer, m metrics.Emitter) clusterdata.OpenShiftClusterEnricher
type ocEnricherFactory func(log *logrus.Entry, dialer proxy.Dialer, m metrics.Interface) clusterdata.OpenShiftClusterEnricher
type frontend struct {
auditLog *logrus.Entry
baseLog *logrus.Entry
env env.Interface
dbAsyncOperations database.AsyncOperations
dbClusterManagerConfiguration database.ClusterManagerConfigurations
dbOpenShiftClusters database.OpenShiftClusters
dbSubscriptions database.Subscriptions
dbOpenShiftVersions database.OpenShiftVersions
dbAsyncOperations database.AsyncOperations
dbOpenShiftClusters database.OpenShiftClusters
dbSubscriptions database.Subscriptions
dbOpenShiftVersions database.OpenShiftVersions
enabledOcpVersions map[string]*api.OpenShiftVersion
apis map[string]*api.Version
lastChangefeed atomic.Value //time.Time
mu sync.RWMutex
m metrics.Emitter
apis map[string]*api.Version
m metrics.Interface
aead encryption.AEAD
kubeActionsFactory kubeActionsFactory
@@ -80,9 +72,8 @@ type frontend struct {
ready atomic.Value
// these help us to test and mock more easily
now func() time.Time
systemDataClusterDocEnricher func(*api.OpenShiftClusterDocument, *api.SystemData)
systemDataClusterManagerEnricher func(*api.ClusterManagerConfigurationDocument, *api.SystemData)
now func() time.Time
systemDataEnricher func(*api.OpenShiftClusterDocument, *api.SystemData)
}
// Runnable represents a runnable object
@@ -96,49 +87,36 @@ func NewFrontend(ctx context.Context,
baseLog *logrus.Entry,
_env env.Interface,
dbAsyncOperations database.AsyncOperations,
dbClusterManagerConfiguration database.ClusterManagerConfigurations,
dbOpenShiftClusters database.OpenShiftClusters,
dbSubscriptions database.Subscriptions,
dbOpenShiftVersions database.OpenShiftVersions,
apis map[string]*api.Version,
m metrics.Emitter,
m metrics.Interface,
aead encryption.AEAD,
kubeActionsFactory kubeActionsFactory,
azureActionsFactory azureActionsFactory,
ocEnricherFactory ocEnricherFactory) (Runnable, error) {
f := &frontend{
auditLog: auditLog,
baseLog: baseLog,
env: _env,
dbAsyncOperations: dbAsyncOperations,
dbClusterManagerConfiguration: dbClusterManagerConfiguration,
dbOpenShiftClusters: dbOpenShiftClusters,
dbSubscriptions: dbSubscriptions,
dbOpenShiftVersions: dbOpenShiftVersions,
apis: apis,
m: m,
aead: aead,
kubeActionsFactory: kubeActionsFactory,
azureActionsFactory: azureActionsFactory,
ocEnricherFactory: ocEnricherFactory,
// add default installation version so it's always supported
enabledOcpVersions: map[string]*api.OpenShiftVersion{
version.InstallStream.Version.String(): {
Properties: api.OpenShiftVersionProperties{
Version: version.InstallStream.Version.String(),
Enabled: true,
},
},
},
auditLog: auditLog,
baseLog: baseLog,
env: _env,
dbAsyncOperations: dbAsyncOperations,
dbOpenShiftClusters: dbOpenShiftClusters,
dbSubscriptions: dbSubscriptions,
dbOpenShiftVersions: dbOpenShiftVersions,
apis: apis,
m: m,
aead: aead,
kubeActionsFactory: kubeActionsFactory,
azureActionsFactory: azureActionsFactory,
ocEnricherFactory: ocEnricherFactory,
bucketAllocator: &bucket.Random{},
startTime: time.Now(),
now: time.Now,
systemDataClusterDocEnricher: enrichClusterSystemData,
systemDataClusterManagerEnricher: enrichClusterManagerSystemData,
now: time.Now,
systemDataEnricher: enrichSystemData,
}
l, err := f.env.Listen()
@@ -201,18 +179,6 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
s.Methods(http.MethodPatch).HandlerFunc(f.putOrPatchOpenShiftCluster).Name("putOrPatchOpenShiftCluster")
s.Methods(http.MethodPut).HandlerFunc(f.putOrPatchOpenShiftCluster).Name("putOrPatchOpenShiftCluster")
if f.env.FeatureIsSet(env.FeatureEnableOCMEndpoints) {
s = r.
Path("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/{ocmResourceType}/{ocmResourceName}").
Queries("api-version", "{api-version}").
Subrouter()
s.Methods(http.MethodDelete).HandlerFunc(f.deleteClusterManagerConfiguration).Name("deleteClusterManagerConfiguration")
s.Methods(http.MethodGet).HandlerFunc(f.getClusterManagerConfiguration).Name("getClusterManagerConfiguration")
s.Methods(http.MethodPatch).HandlerFunc(f.putOrPatchClusterManagerConfiguration).Name("putOrPatchClusterManagerConfiguration")
s.Methods(http.MethodPut).HandlerFunc(f.putOrPatchClusterManagerConfiguration).Name("putOrPatchClusterManagerConfiguration")
}
s = r.
Path("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}").
Queries("api-version", "{api-version}").
@@ -271,19 +237,6 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
s.Methods(http.MethodPost).HandlerFunc(f.postAdminKubernetesObjects).Name("postAdminKubernetesObjects")
s.Methods(http.MethodDelete).HandlerFunc(f.deleteAdminKubernetesObjects).Name("deleteAdminKubernetesObjects")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/approvecsr").
Subrouter()
s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterApproveCSR).Name("postAdminOpenShiftClusterApproveCSR")
// Pod logs
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/kubernetespodlogs").
Subrouter()
s.Methods(http.MethodGet).HandlerFunc(f.getAdminKubernetesPodLogs).Name("getAdminKubernetesPodLogs")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/resources").
Subrouter()
@@ -302,18 +255,6 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterRedeployVM).Name("postAdminOpenShiftClusterRedeployVM")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/stopvm").
Subrouter()
s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterStopVM).Name("postAdminOpenShiftClusterStopVM")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/startvm").
Subrouter()
s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterStartVM).Name("postAdminOpenShiftClusterStartVM")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/upgrade").
Subrouter()
@@ -326,18 +267,6 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
s.Methods(http.MethodGet).HandlerFunc(f.getAdminOpenShiftClusters).Name("getAdminOpenShiftClusters")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/skus").
Subrouter()
s.Methods(http.MethodGet).HandlerFunc(f.getAdminOpenShiftClusterVMResizeOptions).Name("getAdminOpenShiftClusterVMResizeOptions")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/resize").
Subrouter()
s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterVMResize).Name("postAdminOpenShiftClusterVMResize")
s = r.
Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/reconcilefailednic").
Subrouter()
@@ -407,7 +336,6 @@ func (f *frontend) setupRouter() *mux.Router {
func (f *frontend) Run(ctx context.Context, stop <-chan struct{}, done chan<- struct{}) {
defer recover.Panic(f.baseLog)
go f.changefeed(ctx)
if stop != nil {
go func() {


@@ -79,7 +79,7 @@ func (f *frontend) validateOpenShiftUniqueKey(ctx context.Context, doc *api.Open
var rxKubernetesString = regexp.MustCompile(`(?i)^[-a-z0-9.]{0,255}$`)
func validateAdminKubernetesObjectsNonCustomer(method, groupKind, namespace, name string) error {
if !utilnamespace.IsOpenShiftNamespace(namespace) {
if !utilnamespace.IsOpenShift(namespace) {
return api.NewCloudError(http.StatusForbidden, api.CloudErrorCodeForbidden, "", "Access to the provided namespace '%s' is forbidden.", namespace)
}
@@ -129,25 +129,6 @@ func validateAdminVMName(vmName string) error {
return nil
}
func validateAdminKubernetesPodLogs(namespace, podName, containerName string) error {
if podName == "" || !rxKubernetesString.MatchString(podName) {
return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided pod name '%s' is invalid.", podName)
}
if namespace == "" || !rxKubernetesString.MatchString(namespace) {
return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided namespace '%s' is invalid.", namespace)
}
// Checking if the namespace is an OpenShift namespace not a customer workload namespace.
if !utilnamespace.IsOpenShiftNamespace(namespace) {
return api.NewCloudError(http.StatusForbidden, api.CloudErrorCodeForbidden, "", "Access to the provided namespace '%s' is forbidden.", namespace)
}
if containerName == "" || !rxKubernetesString.MatchString(containerName) {
return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided container name '%s' is invalid.", containerName)
}
return nil
}
// Azure resource name rules:
// https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules#microsoftnetwork
var rxNetworkInterfaceName = regexp.MustCompile(`^[a-zA-Z0-9].*\w$`)
@@ -156,13 +137,7 @@ func validateNetworkInterfaceName(nicName string) error {
if nicName == "" || !rxNetworkInterfaceName.MatchString(nicName) {
return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided nicName '%s' is invalid.", nicName)
}
return nil
}
func validateAdminVMSize(vmSize string) error {
if vmSize == "" {
return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided vmSize '%s' is invalid.", vmSize)
}
return nil
}
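
Each validator in this file follows the same shape: reject the empty string and anything the compiled pattern refuses, otherwise return nil; failures become 400-level CloudErrors. A dependency-free sketch of that shape, with a plain error standing in for api.NewCloudError:

package main

import (
	"fmt"
	"regexp"
)

// same pattern as rxNetworkInterfaceName above
var rxNICName = regexp.MustCompile(`^[a-zA-Z0-9].*\w$`)

func validateNICName(nicName string) error {
	if nicName == "" || !rxNICName.MatchString(nicName) {
		return fmt.Errorf("the provided nicName '%s' is invalid", nicName)
	}
	return nil
}

func main() {
	fmt.Println(validateNICName("aro-nic-1")) // <nil>
	fmt.Println(validateNICName("-bad"))      // error
}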


@@ -54,6 +54,14 @@ func (g *gateway) gatewayVerification(host, linkID string) (string, bool, error)
})
}
// Emit a gauge for the linkID if the host is empty
if host == "" {
g.m.EmitGauge("gateway.nohost", 1, map[string]string{
"linkid": linkID,
"action": "denied",
})
}
if _, found := g.allowList[strings.ToLower(host)]; found {
return gateway.ID, true, nil
}
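
The new gauge above makes empty-Host connections visible in Geneva before the allow-list lookup, which an empty host can never pass, rejects them. A minimal sketch of that emit-then-check shape, assuming an EmitGauge-style interface like the one in pkg/metrics; the stdoutEmitter and verify names are illustrative only:

package main

import "fmt"

// Emitter mirrors the EmitGauge method the gateway calls; the real
// interface lives in pkg/metrics (assumed shape, for illustration).
type Emitter interface {
	EmitGauge(name string, value int64, dims map[string]string)
}

type stdoutEmitter struct{}

func (stdoutEmitter) EmitGauge(name string, value int64, dims map[string]string) {
	fmt.Printf("gauge %s=%d dims=%v\n", name, value, dims)
}

// verify mimics the gatewayVerification flow: record the anomaly first,
// then let the allow-list lookup do the actual deny.
func verify(m Emitter, allowList map[string]struct{}, host, linkID string) bool {
	if host == "" {
		m.EmitGauge("gateway.nohost", 1, map[string]string{
			"linkid": linkID,
			"action": "denied",
		})
	}
	_, found := allowList[host]
	return found
}

func main() {
	verify(stdoutEmitter{}, map[string]struct{}{}, "", "link-123")
}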


@@ -6,12 +6,13 @@ package cluster
import (
"context"
"net/http"
"reflect"
"runtime"
"github.com/Azure/go-autorest/autorest/azure"
configv1 "github.com/openshift/api/config/v1"
configclient "github.com/openshift/client-go/config/clientset/versioned"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
hiveclient "github.com/openshift/hive/pkg/client/clientset/versioned"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
"github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1"
@@ -22,7 +23,6 @@ import (
"github.com/Azure/ARO-RP/pkg/api"
"github.com/Azure/ARO-RP/pkg/metrics"
aroclient "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned"
"github.com/Azure/ARO-RP/pkg/util/steps"
)
type Monitor struct {
@@ -35,13 +35,11 @@ type Monitor struct {
restconfig *rest.Config
cli kubernetes.Interface
configcli configclient.Interface
maocli machineclient.Interface
maocli maoclient.Interface
mcocli mcoclient.Interface
m metrics.Emitter
m metrics.Interface
arocli aroclient.Interface
hiveclientset hiveclient.Interface
// access below only via the helper functions in cache.go
cache struct {
cos *configv1.ClusterOperatorList
@@ -51,7 +49,7 @@ type Monitor struct {
}
}
func NewMonitor(ctx context.Context, log *logrus.Entry, restConfig *rest.Config, oc *api.OpenShiftCluster, m metrics.Emitter, hiveRestConfig *rest.Config, hourlyRun bool) (*Monitor, error) {
func NewMonitor(ctx context.Context, log *logrus.Entry, restConfig *rest.Config, oc *api.OpenShiftCluster, m metrics.Interface, hourlyRun bool) (*Monitor, error) {
r, err := azure.ParseResourceID(oc.ID)
if err != nil {
return nil, err
@@ -74,7 +72,7 @@ func NewMonitor(ctx context.Context, log *logrus.Entry, restConfig *rest.Config,
return nil, err
}
maocli, err := machineclient.NewForConfig(restConfig)
maocli, err := maoclient.NewForConfig(restConfig)
if err != nil {
return nil, err
}
@@ -89,16 +87,6 @@ func NewMonitor(ctx context.Context, log *logrus.Entry, restConfig *rest.Config,
return nil, err
}
var hiveclientset hiveclient.Interface
if hiveRestConfig != nil {
var err error
hiveclientset, err = hiveclient.NewForConfig(hiveRestConfig)
if err != nil {
// TODO(hive): Update to fail once we have Hive everywhere in prod and dev
log.Error(err)
}
}
return &Monitor{
log: log,
hourlyRun: hourlyRun,
@@ -106,14 +94,13 @@ func NewMonitor(ctx context.Context, log *logrus.Entry, restConfig *rest.Config,
oc: oc,
dims: dims,
restconfig: restConfig,
cli: cli,
configcli: configcli,
maocli: maocli,
mcocli: mcocli,
arocli: arocli,
m: m,
hiveclientset: hiveclientset,
restconfig: restConfig,
cli: cli,
configcli: configcli,
maocli: maocli,
mcocli: mcocli,
arocli: arocli,
m: m,
}, nil
}
@@ -132,13 +119,13 @@ func (mon *Monitor) Monitor(ctx context.Context) (errs []error) {
statusCode, err := mon.emitAPIServerHealthzCode(ctx)
if err != nil {
errs = append(errs, err)
friendlyFuncName := steps.FriendlyName(mon.emitAPIServerHealthzCode)
mon.log.Printf("%s: %s", friendlyFuncName, err)
mon.emitGauge("monitor.clustererrors", 1, map[string]string{"monitor": friendlyFuncName})
mon.log.Printf("%s: %s", runtime.FuncForPC(reflect.ValueOf(mon.emitAPIServerHealthzCode).Pointer()).Name(), err)
mon.emitGauge("monitor.clustererrors", 1, map[string]string{"monitor": runtime.FuncForPC(reflect.ValueOf(mon.emitAPIServerHealthzCode).Pointer()).Name()})
}
if statusCode != http.StatusOK {
return
}
for _, f := range []func(context.Context) error{
mon.emitAroOperatorHeartbeat,
mon.emitAroOperatorConditions,
@@ -156,15 +143,13 @@ func (mon *Monitor) Monitor(ctx context.Context) (errs []error) {
mon.emitStatefulsetStatuses,
mon.emitJobConditions,
mon.emitSummary,
mon.emitHiveRegistrationStatus,
mon.emitPrometheusAlerts, // at the end for now because it's the slowest/least reliable
} {
err = f(ctx)
if err != nil {
errs = append(errs, err)
friendlyFuncName := steps.FriendlyName(f)
mon.log.Printf("%s: %s", friendlyFuncName, err)
mon.emitGauge("monitor.clustererrors", 1, map[string]string{"monitor": friendlyFuncName})
mon.log.Printf("%s: %s", runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name(), err)
mon.emitGauge("monitor.clustererrors", 1, map[string]string{"monitor": runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()})
// keep going
}
}
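
Both sides of this hunk implement the same pattern: run every sub-monitor, log and count a failure under a readable function name, and keep going so one broken collector cannot starve the rest. A self-contained sketch of that pattern, using the runtime/reflect trick visible above (the collector functions and metric name here are illustrative):

package main

import (
	"context"
	"errors"
	"fmt"
	"reflect"
	"runtime"
)

// friendlyName resolves a function value to its symbol name, the same
// trick the monitor uses to label its monitor.clustererrors gauge.
func friendlyName(f interface{}) string {
	return runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
}

func emitNodeConditions(ctx context.Context) error { return errors.New("boom") }
func emitSummary(ctx context.Context) error        { return nil }

func main() {
	var errs []error
	for _, f := range []func(context.Context) error{
		emitNodeConditions,
		emitSummary,
	} {
		if err := f(context.Background()); err != nil {
			errs = append(errs, err)
			fmt.Printf("%s: %s\n", friendlyName(f), err)
			// keep going: one failing collector must not stop the rest
		}
	}
	fmt.Printf("%d collector(s) failed\n", len(errs))
}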


@@ -32,6 +32,7 @@ func (mon *Monitor) emitNodeConditions(ctx context.Context) error {
mon.emitGauge("node.count", int64(len(ns.Items)), nil)
for _, n := range ns.Items {
for _, c := range n.Status.Conditions {
if c.Status == nodeConditionsExpected[c.Type] {
continue


@@ -9,14 +9,15 @@ import (
"testing"
"github.com/golang/mock/gomock"
machinev1beta1 "github.com/openshift/api/machine/v1beta1"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
machinefake "github.com/openshift/client-go/machine/clientset/versioned/fake"
machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
maofake "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned/fake"
"github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kruntime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
azureproviderv1beta1 "sigs.k8s.io/cluster-api-provider-azure/pkg/apis/azureprovider/v1beta1"
mock_metrics "github.com/Azure/ARO-RP/pkg/util/mocks/metrics"
)
@@ -24,7 +25,7 @@ import (
func TestEmitNodeConditions(t *testing.T) {
ctx := context.Background()
provSpec, err := json.Marshal(machinev1beta1.AzureMachineProviderSpec{})
provSpec, err := json.Marshal(azureproviderv1beta1.AzureMachineProviderSpec{})
if err != nil {
t.Fatal(err)
}
@@ -68,7 +69,7 @@ func TestEmitNodeConditions(t *testing.T) {
},
},
})
machineclient := machinefake.NewSimpleClientset(
maoclient := maofake.NewSimpleClientset(
&machinev1beta1.Machine{
Spec: machinev1beta1.MachineSpec{
ProviderSpec: machinev1beta1.ProviderSpec{
@@ -100,11 +101,11 @@ func TestEmitNodeConditions(t *testing.T) {
controller := gomock.NewController(t)
defer controller.Finish()
m := mock_metrics.NewMockEmitter(controller)
m := mock_metrics.NewMockInterface(controller)
mon := &Monitor{
cli: cli,
maocli: machineclient,
maocli: maoclient,
m: m,
}
@@ -140,27 +141,27 @@ func TestEmitNodeConditions(t *testing.T) {
func TestGetSpotInstances(t *testing.T) {
ctx := context.Background()
spotProvSpec, err := json.Marshal(machinev1beta1.AzureMachineProviderSpec{
SpotVMOptions: &machinev1beta1.SpotVMOptions{},
spotProvSpec, err := json.Marshal(azureproviderv1beta1.AzureMachineProviderSpec{
SpotVMOptions: &azureproviderv1beta1.SpotVMOptions{},
})
if err != nil {
t.Fatal(err)
}
provSpec, err := json.Marshal(machinev1beta1.AzureMachineProviderSpec{})
provSpec, err := json.Marshal(azureproviderv1beta1.AzureMachineProviderSpec{})
if err != nil {
t.Fatal(err)
}
for _, tt := range []struct {
name string
maocli machineclient.Interface
maocli maoclient.Interface
node corev1.Node
expectedSpotInstance bool
}{
{
name: "node is a spot instance",
maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
Spec: machinev1beta1.MachineSpec{
ProviderSpec: machinev1beta1.ProviderSpec{
Value: &kruntime.RawExtension{
@@ -185,7 +186,7 @@ },
},
{
name: "node is not a spot instance",
maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
Spec: machinev1beta1.MachineSpec{
ProviderSpec: machinev1beta1.ProviderSpec{
Value: &kruntime.RawExtension{
@@ -210,7 +211,7 @@ },
},
{
name: "node is missing annotation",
maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
Spec: machinev1beta1.MachineSpec{
ProviderSpec: machinev1beta1.ProviderSpec{
Value: &kruntime.RawExtension{
@@ -233,7 +234,7 @@ },
},
{
name: "malformed json in providerSpec",
maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
Spec: machinev1beta1.MachineSpec{
ProviderSpec: machinev1beta1.ProviderSpec{
Value: &kruntime.RawExtension{


@@ -143,6 +143,7 @@ func TestGenevaLoggingDaemonset(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cluster := &arov1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{Name: "cluster"},
Status: arov1alpha1.ClusterStatus{Conditions: []operatorv1.OperatorCondition{}},
@@ -167,6 +168,7 @@ for _, err := range errs {
for _, err := range errs {
t.Error(err)
}
})
}
}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,174 @@
package muo
// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.
import (
"context"
"errors"
"strings"
"github.com/ghodss/yaml"
"github.com/ugorji/go/codec"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
kruntime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
"github.com/Azure/ARO-RP/pkg/api"
arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
"github.com/Azure/ARO-RP/pkg/util/dynamichelper"
"github.com/Azure/ARO-RP/pkg/util/ready"
)
type muoConfig struct {
api.MissingFields
ConfigManager struct {
api.MissingFields
Source string `json:"source,omitempty"`
OcmBaseUrl string `json:"ocmBaseUrl,omitempty"`
LocalConfigName string `json:"localConfigName,omitempty"`
} `json:"configManager,omitempty"`
}
type Deployer interface {
CreateOrUpdate(context.Context, *arov1alpha1.Cluster, *config.MUODeploymentConfig) error
Remove(context.Context) error
IsReady(ctx context.Context) (bool, error)
Resources(*config.MUODeploymentConfig) ([]kruntime.Object, error)
}
type deployer struct {
kubernetescli kubernetes.Interface
dh dynamichelper.Interface
jsonHandle *codec.JsonHandle
}
func newDeployer(kubernetescli kubernetes.Interface, dh dynamichelper.Interface) Deployer {
return &deployer{
kubernetescli: kubernetescli,
dh: dh,
jsonHandle: new(codec.JsonHandle),
}
}
func (o *deployer) Resources(config *config.MUODeploymentConfig) ([]kruntime.Object, error) {
results := []kruntime.Object{}
for _, assetName := range AssetNames() {
b, err := Asset(assetName)
if err != nil {
return nil, err
}
obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(b, nil, nil)
if err != nil {
return nil, err
}
// set the image for the deployments
if d, ok := obj.(*appsv1.Deployment); ok {
for i := range d.Spec.Template.Spec.Containers {
d.Spec.Template.Spec.Containers[i].Image = config.Pullspec
}
}
if cm, ok := obj.(*corev1.ConfigMap); ok {
if cm.Name == "managed-upgrade-operator-config" && cm.Namespace == "openshift-managed-upgrade-operator" {
// read the config.yaml from the MUO ConfigMap which stores defaults
configDataJSON, err := yaml.YAMLToJSON([]byte(cm.Data["config.yaml"]))
if err != nil {
return nil, err
}
var configData muoConfig
err = codec.NewDecoderBytes(configDataJSON, o.jsonHandle).Decode(&configData)
if err != nil {
return nil, err
}
if config.EnableConnected {
configData.ConfigManager.Source = "OCM"
configData.ConfigManager.OcmBaseUrl = config.OCMBaseURL
configData.ConfigManager.LocalConfigName = ""
} else {
configData.ConfigManager.Source = "LOCAL"
configData.ConfigManager.LocalConfigName = "managed-upgrade-config"
configData.ConfigManager.OcmBaseUrl = ""
}
// Write the yaml back into the ConfigMap
var b []byte
err = codec.NewEncoderBytes(&b, o.jsonHandle).Encode(configData)
if err != nil {
return nil, err
}
cmYaml, err := yaml.JSONToYAML(b)
if err != nil {
return nil, err
}
cm.Data["config.yaml"] = string(cmYaml)
}
}
results = append(results, obj)
}
return results, nil
}
func (o *deployer) CreateOrUpdate(ctx context.Context, cluster *arov1alpha1.Cluster, config *config.MUODeploymentConfig) error {
resources, err := o.Resources(config)
if err != nil {
return err
}
err = dynamichelper.SetControllerReferences(resources, cluster)
if err != nil {
return err
}
err = dynamichelper.Prepare(resources)
if err != nil {
return err
}
return o.dh.Ensure(ctx, resources...)
}
func (o *deployer) Remove(ctx context.Context) error {
resources, err := o.Resources(&config.MUODeploymentConfig{})
if err != nil {
return err
}
var errs []error
for _, obj := range resources {
// delete any deployments we have
if d, ok := obj.(*appsv1.Deployment); ok {
err := o.dh.EnsureDeleted(ctx, "Deployment", d.Namespace, d.Name)
// Don't error out because then we might delete some resources and not others
if err != nil {
errs = append(errs, err)
}
}
}
if len(errs) != 0 {
errContent := []string{"error removing MUO:"}
for _, err := range errs {
errContent = append(errContent, err.Error())
}
return errors.New(strings.Join(errContent, "\n"))
}
return nil
}
func (o *deployer) IsReady(ctx context.Context) (bool, error) {
return ready.CheckDeploymentIsReady(ctx, o.kubernetescli.AppsV1().Deployments("openshift-managed-upgrade-operator"), "managed-upgrade-operator")()
}
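
The Resources method above edits the MUO default configuration by round-tripping it: YAML from the ConfigMap to JSON, JSON into a struct, mutate, then back out to YAML. A trimmed sketch of that round-trip with the same ghodss/yaml and ugorji codec libraries; note the real muoConfig embeds api.MissingFields so keys the struct does not model survive the trip, which this simplified struct deliberately omits:

package main

import (
	"fmt"

	"github.com/ghodss/yaml"
	"github.com/ugorji/go/codec"
)

// simplified muoConfig: no api.MissingFields, so unknown keys are dropped
type muoConfig struct {
	ConfigManager struct {
		Source          string `json:"source,omitempty"`
		OcmBaseUrl      string `json:"ocmBaseUrl,omitempty"`
		LocalConfigName string `json:"localConfigName,omitempty"`
	} `json:"configManager,omitempty"`
}

func main() {
	in := []byte("configManager:\n  source: LOCAL\n  localConfigName: managed-upgrade-config\n")
	h := new(codec.JsonHandle)

	// ConfigMap YAML -> JSON -> struct
	j, err := yaml.YAMLToJSON(in)
	if err != nil {
		panic(err)
	}
	var conf muoConfig
	if err := codec.NewDecoderBytes(j, h).Decode(&conf); err != nil {
		panic(err)
	}

	// the connected-mode mutation the deployer performs
	conf.ConfigManager.Source = "OCM"
	conf.ConfigManager.OcmBaseUrl = "https://api.openshift.com"
	conf.ConfigManager.LocalConfigName = ""

	// struct -> JSON -> YAML, ready to write back into cm.Data["config.yaml"]
	var out []byte
	if err := codec.NewEncoderBytes(&out, h).Encode(conf); err != nil {
		panic(err)
	}
	back, err := yaml.JSONToYAML(out)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(back))
}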


@@ -0,0 +1,352 @@
package muo
// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.
import (
"context"
"errors"
"strings"
"testing"
"github.com/go-test/deep"
"github.com/golang/mock/gomock"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kruntime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes/fake"
arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
mock_dynamichelper "github.com/Azure/ARO-RP/pkg/util/mocks/dynamichelper"
)
func TestDeployCreateOrUpdateCorrectKinds(t *testing.T) {
controller := gomock.NewController(t)
defer controller.Finish()
setPullSpec := "MyMUOPullSpec"
cluster := &arov1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: arov1alpha1.SingletonClusterName,
},
}
k8scli := fake.NewSimpleClientset()
dh := mock_dynamichelper.NewMockInterface(controller)
// When the DynamicHelper is called, count the number of objects it creates
// and capture any deployments so that we can check the pullspec
var deployments []*appsv1.Deployment
deployedObjects := make(map[string]int)
check := func(ctx context.Context, objs ...kruntime.Object) error {
m := meta.NewAccessor()
for _, i := range objs {
kind, err := m.Kind(i)
if err != nil {
return err
}
if d, ok := i.(*appsv1.Deployment); ok {
deployments = append(deployments, d)
}
deployedObjects[kind] = deployedObjects[kind] + 1
}
return nil
}
dh.EXPECT().Ensure(gomock.Any(), gomock.Any()).Do(check).Return(nil)
deployer := newDeployer(k8scli, dh)
err := deployer.CreateOrUpdate(context.Background(), cluster, &config.MUODeploymentConfig{Pullspec: setPullSpec})
if err != nil {
t.Error(err)
}
// We expect these numbers of resources to be created
expectedKinds := map[string]int{
"ClusterRole": 1,
"ConfigMap": 2,
"ClusterRoleBinding": 1,
"CustomResourceDefinition": 1,
"Deployment": 1,
"Namespace": 1,
"Role": 4,
"RoleBinding": 4,
"ServiceAccount": 1,
}
errs := deep.Equal(deployedObjects, expectedKinds)
for _, e := range errs {
t.Error(e)
}
// Ensure we have set the pullspec on the containers
for _, d := range deployments {
for _, c := range d.Spec.Template.Spec.Containers {
if c.Image != setPullSpec {
t.Errorf("expected %s, got %s for pullspec", setPullSpec, c.Image)
}
}
}
}
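
The check closure above tallies everything the mocked DynamicHelper is asked to Ensure, keyed by kind. The accessor reads each object's kind from its TypeMeta, which the decoded bindata manifests carry; a standalone sketch has to set TypeMeta by hand (all objects below are stand-ins):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kruntime "k8s.io/apimachinery/pkg/runtime"
)

func main() {
	objs := []kruntime.Object{
		&appsv1.Deployment{TypeMeta: metav1.TypeMeta{Kind: "Deployment", APIVersion: "apps/v1"}},
		&corev1.ConfigMap{TypeMeta: metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"}},
		&corev1.ConfigMap{TypeMeta: metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"}},
	}

	counts := map[string]int{}
	m := meta.NewAccessor()
	for _, o := range objs {
		kind, err := m.Kind(o) // empty for bare structs without TypeMeta
		if err != nil {
			panic(err)
		}
		counts[kind]++
	}
	fmt.Println(counts) // map[ConfigMap:2 Deployment:1]
}
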
func TestDeployCreateOrUpdateSetsOwnerReferences(t *testing.T) {
controller := gomock.NewController(t)
defer controller.Finish()
setPullSpec := "MyMUOPullSpec"
cluster := &arov1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: arov1alpha1.SingletonClusterName,
},
}
k8scli := fake.NewSimpleClientset()
dh := mock_dynamichelper.NewMockInterface(controller)
// the OwnerReference that we expect to be set on each object we Ensure
pointerToTrueForSomeReason := bool(true)
expectedOwner := metav1.OwnerReference{
APIVersion: "aro.openshift.io/v1alpha1",
Kind: "Cluster",
Name: arov1alpha1.SingletonClusterName,
UID: cluster.UID,
BlockOwnerDeletion: &pointerToTrueForSomeReason,
Controller: &pointerToTrueForSomeReason,
}
// save the list of OwnerReferences on each of the Ensured objects
var ownerReferences [][]metav1.OwnerReference
check := func(ctx context.Context, objs ...kruntime.Object) error {
for _, i := range objs {
obj, err := meta.Accessor(i)
if err != nil {
return err
}
ownerReferences = append(ownerReferences, obj.GetOwnerReferences())
}
return nil
}
dh.EXPECT().Ensure(gomock.Any(), gomock.Any()).Do(check).Return(nil)
deployer := newDeployer(k8scli, dh)
err := deployer.CreateOrUpdate(context.Background(), cluster, &config.MUODeploymentConfig{Pullspec: setPullSpec})
if err != nil {
t.Error(err)
}
// Check that each list of OwnerReferences contains our controller
for _, references := range ownerReferences {
errs := deep.Equal([]metav1.OwnerReference{expectedOwner}, references)
for _, e := range errs {
t.Error(e)
}
}
}
func TestDeployDelete(t *testing.T) {
controller := gomock.NewController(t)
defer controller.Finish()
k8scli := fake.NewSimpleClientset()
dh := mock_dynamichelper.NewMockInterface(controller)
dh.EXPECT().EnsureDeleted(gomock.Any(), "Deployment", "openshift-managed-upgrade-operator", "managed-upgrade-operator").Return(nil)
deployer := newDeployer(k8scli, dh)
err := deployer.Remove(context.Background())
if err != nil {
t.Error(err)
}
}
func TestDeployDeleteFailure(t *testing.T) {
controller := gomock.NewController(t)
defer controller.Finish()
k8scli := fake.NewSimpleClientset()
dh := mock_dynamichelper.NewMockInterface(controller)
dh.EXPECT().EnsureDeleted(gomock.Any(), "Deployment", "openshift-managed-upgrade-operator", "managed-upgrade-operator").Return(errors.New("fail"))
deployer := newDeployer(k8scli, dh)
err := deployer.Remove(context.Background())
if err == nil {
t.Error("expected an error from Remove, got nil")
}
if err.Error() != "error removing MUO:\nfail" {
t.Error(err)
}
}
func TestDeployIsReady(t *testing.T) {
specReplicas := int32(1)
k8scli := fake.NewSimpleClientset(&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "managed-upgrade-operator",
Namespace: "openshift-managed-upgrade-operator",
Generation: 1234,
},
Spec: appsv1.DeploymentSpec{
Replicas: &specReplicas,
},
Status: appsv1.DeploymentStatus{
ObservedGeneration: 1234,
Replicas: 1,
ReadyReplicas: 1,
UpdatedReplicas: 1,
AvailableReplicas: 1,
UnavailableReplicas: 0,
},
})
deployer := newDeployer(k8scli, nil)
ready, err := deployer.IsReady(context.Background())
if err != nil {
t.Error(err)
}
if !ready {
t.Error("deployment is not seen as ready")
}
}
func TestDeployIsReadyMissing(t *testing.T) {
k8scli := fake.NewSimpleClientset()
deployer := newDeployer(k8scli, nil)
ready, err := deployer.IsReady(context.Background())
if err != nil {
t.Error(err)
}
if ready {
t.Error("deployment is wrongly seen as ready")
}
}
func TestDeployConfig(t *testing.T) {
controller := gomock.NewController(t)
defer controller.Finish()
cluster := &arov1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: arov1alpha1.SingletonClusterName,
},
}
tests := []struct {
name string
deploymentConfig *config.MUODeploymentConfig
expected []string
}{
{
name: "local",
deploymentConfig: &config.MUODeploymentConfig{EnableConnected: false},
expected: []string{
"configManager:",
" localConfigName: managed-upgrade-config",
" source: LOCAL",
" watchInterval: 1",
"healthCheck:",
" ignoredCriticals:",
" - PrometheusRuleFailures",
" - CannotRetrieveUpdates",
" - FluentdNodeDown",
" ignoredNamespaces:",
" - openshift-logging",
" - openshift-redhat-marketplace",
" - openshift-operators",
" - openshift-user-workload-monitoring",
" - openshift-pipelines",
" - openshift-azure-logging",
"maintenance:",
" controlPlaneTime: 90",
" ignoredAlerts:",
" controlPlaneCriticals:",
" - ClusterOperatorDown",
" - ClusterOperatorDegraded",
"nodeDrain:",
" expectedNodeDrainTime: 8",
" timeOut: 45",
"scale:",
" timeOut: 30",
"upgradeWindow:",
" delayTrigger: 30",
" timeOut: 120",
"",
},
},
{
name: "connected",
deploymentConfig: &config.MUODeploymentConfig{EnableConnected: true, OCMBaseURL: "https://example.com"},
expected: []string{
"configManager:",
" ocmBaseUrl: https://example.com",
" source: OCM",
" watchInterval: 1",
"healthCheck:",
" ignoredCriticals:",
" - PrometheusRuleFailures",
" - CannotRetrieveUpdates",
" - FluentdNodeDown",
" ignoredNamespaces:",
" - openshift-logging",
" - openshift-redhat-marketplace",
" - openshift-operators",
" - openshift-user-workload-monitoring",
" - openshift-pipelines",
" - openshift-azure-logging",
"maintenance:",
" controlPlaneTime: 90",
" ignoredAlerts:",
" controlPlaneCriticals:",
" - ClusterOperatorDown",
" - ClusterOperatorDegraded",
"nodeDrain:",
" expectedNodeDrainTime: 8",
" timeOut: 45",
"scale:",
" timeOut: 30",
"upgradeWindow:",
" delayTrigger: 30",
" timeOut: 120",
"",
},
},
}
for _, tt := range tests {
k8scli := fake.NewSimpleClientset()
dh := mock_dynamichelper.NewMockInterface(controller)
// When the DynamicHelper is called, capture configmaps to inspect them
var configs []*corev1.ConfigMap
check := func(ctx context.Context, objs ...kruntime.Object) error {
for _, i := range objs {
if cm, ok := i.(*corev1.ConfigMap); ok {
configs = append(configs, cm)
}
}
return nil
}
dh.EXPECT().Ensure(gomock.Any(), gomock.Any()).Do(check).Return(nil)
deployer := newDeployer(k8scli, dh)
err := deployer.CreateOrUpdate(context.Background(), cluster, tt.deploymentConfig)
if err != nil {
t.Error(err)
}
foundConfig := false
for _, cms := range configs {
if cms.Name == "managed-upgrade-operator-config" && cms.Namespace == "openshift-managed-upgrade-operator" {
foundConfig = true
errs := deep.Equal(tt.expected, strings.Split(cms.Data["config.yaml"], "\n"))
for _, e := range errs {
t.Error(e)
}
}
}
if !foundConfig {
t.Error("MUO config was not found")
}
}
}


@@ -0,0 +1,12 @@
package muo
// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.
// bindata for the above yaml files
//go:generate go run ../../../../vendor/github.com/go-bindata/go-bindata/go-bindata -nometadata -pkg muo -prefix staticresources/ -o bindata.go staticresources/...
//go:generate gofmt -s -l -w bindata.go
//go:generate rm -rf ../../mocks/$GOPACKAGE
//go:generate go run ../../../../vendor/github.com/golang/mock/mockgen -destination=../../mocks/$GOPACKAGE/$GOPACKAGE.go github.com/Azure/ARO-RP/pkg/operator/controllers/$GOPACKAGE Deployer
//go:generate go run ../../../../vendor/golang.org/x/tools/cmd/goimports -local=github.com/Azure/ARO-RP -e -w ../../mocks/$GOPACKAGE/$GOPACKAGE.go


@@ -5,7 +5,6 @@ package muo
import (
"context"
"embed"
"fmt"
"strings"
"time"
@@ -18,15 +17,12 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
aroclient "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned"
"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
"github.com/Azure/ARO-RP/pkg/util/deployer"
"github.com/Azure/ARO-RP/pkg/util/dynamichelper"
"github.com/Azure/ARO-RP/pkg/util/pullsecret"
"github.com/Azure/ARO-RP/pkg/util/version"
@@ -38,16 +34,13 @@ const (
controllerEnabled = "rh.srep.muo.enabled"
controllerManaged = "rh.srep.muo.managed"
controllerPullSpec = "rh.srep.muo.deploy.pullspec"
controllerForceLocalOnly = "rh.srep.muo.deploy.forceLocalOnly"
controllerAllowOCM = "rh.srep.muo.deploy.allowOCM"
controllerOcmBaseURL = "rh.srep.muo.deploy.ocmBaseUrl"
controllerOcmBaseURLDefaultValue = "https://api.openshift.com"
pullSecretOCMKey = "cloud.openshift.com"
pullSecretOCMKey = "cloud.redhat.com"
)
//go:embed staticresources
var staticFiles embed.FS
var pullSecretName = types.NamespacedName{Name: "pull-secret", Namespace: "openshift-config"}
type MUODeploymentConfig struct {
@@ -59,7 +52,7 @@ type MUODeploymentConfig struct {
type Reconciler struct {
arocli aroclient.Interface
kubernetescli kubernetes.Interface
deployer deployer.Deployer
deployer Deployer
readinessPollTime time.Duration
readinessTimeout time.Duration
@@ -69,7 +62,7 @@ func NewReconciler(arocli aroclient.Interface, kubernetescli kubernetes.Interfac
return &Reconciler{
arocli: arocli,
kubernetescli: kubernetescli,
deployer: deployer.NewDeployer(kubernetescli, dh, staticFiles, "staticresources"),
deployer: newDeployer(kubernetescli, dh),
readinessPollTime: 10 * time.Second,
readinessTimeout: 5 * time.Minute,
@@ -103,8 +96,8 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
Pullspec: pullSpec,
}
disableOCM := instance.Spec.OperatorFlags.GetSimpleBoolean(controllerForceLocalOnly)
if !disableOCM {
allowOCM := instance.Spec.OperatorFlags.GetSimpleBoolean(controllerAllowOCM)
if allowOCM {
useOCM := func() bool {
var userSecret *corev1.Secret
@@ -145,13 +138,13 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
defer cancel()
err := wait.PollImmediateUntil(r.readinessPollTime, func() (bool, error) {
return r.deployer.IsReady(ctx, "openshift-managed-upgrade-operator", "managed-upgrade-operator")
return r.deployer.IsReady(ctx)
}, timeoutCtx.Done())
if err != nil {
return reconcile.Result{}, fmt.Errorf("managed Upgrade Operator deployment timed out on Ready: %w", err)
return reconcile.Result{}, fmt.Errorf("Managed Upgrade Operator deployment timed out on Ready: %w", err)
}
} else if strings.EqualFold(managed, "false") {
err := r.deployer.Remove(ctx, config.MUODeploymentConfig{})
err := r.deployer.Remove(ctx)
if err != nil {
return reconcile.Result{}, err
}
@@ -162,23 +155,14 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
// SetupWithManager setup our manager
func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
pullSecretPredicate := predicate.NewPredicateFuncs(func(o client.Object) bool {
return (o.GetName() == pullSecretName.Name && o.GetNamespace() == pullSecretName.Namespace)
})
aroClusterPredicate := predicate.NewPredicateFuncs(func(o client.Object) bool {
return o.GetName() == arov1alpha1.SingletonClusterName
})
muoBuilder := ctrl.NewControllerManagedBy(mgr).
For(&arov1alpha1.Cluster{}, builder.WithPredicates(aroClusterPredicate)).
Watches(
&source.Kind{Type: &corev1.Secret{}},
&handler.EnqueueRequestForObject{},
builder.WithPredicates(pullSecretPredicate),
)
builder := ctrl.NewControllerManagedBy(mgr).
For(&arov1alpha1.Cluster{}, builder.WithPredicates(aroClusterPredicate))
resources, err := r.deployer.Template(&config.MUODeploymentConfig{}, staticFiles)
resources, err := r.deployer.Resources(&config.MUODeploymentConfig{})
if err != nil {
return err
}
@@ -186,11 +170,11 @@ func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
for _, i := range resources {
o, ok := i.(client.Object)
if ok {
muoBuilder.Owns(o)
builder.Owns(o)
}
}
return muoBuilder.
return builder.
WithEventFilter(predicate.Or(predicate.GenerationChangedPredicate{}, predicate.AnnotationChangedPredicate{}, predicate.LabelChangedPredicate{})).
Named(ControllerName).
Complete(r)
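
One detail of the Reconcile path above: readiness is a poll against the deployer until a timeout context fires, and the "timed out waiting for the condition" text in the tests' wantErr is the wait package's own timeout error wrapped by the fmt.Errorf in the hunk above. A minimal sketch of that polling shape (interval, timeout, and the never-ready condition are placeholders, shortened so the demo exits quickly):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	timeoutCtx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	isReady := func() (bool, error) {
		// stand-in for r.deployer.IsReady(ctx)
		return false, nil
	}

	err := wait.PollImmediateUntil(time.Second, isReady, timeoutCtx.Done())
	if err != nil {
		// wraps wait.ErrWaitTimeout: "timed out waiting for the condition"
		fmt.Printf("managed Upgrade Operator deployment timed out on Ready: %v\n", err)
	}
}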


@@ -18,13 +18,13 @@ import (
arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
arofake "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned/fake"
"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
mock_deployer "github.com/Azure/ARO-RP/pkg/util/mocks/deployer"
mock_muo "github.com/Azure/ARO-RP/pkg/operator/mocks/muo"
)
func TestMUOReconciler(t *testing.T) {
tests := []struct {
name string
mocks func(*mock_deployer.MockDeployer, *arov1alpha1.Cluster)
mocks func(*mock_muo.MockDeployer, *arov1alpha1.Cluster)
flags arov1alpha1.OperatorFlags
// connected MUO -- cluster pullsecret
pullsecret string
@@ -46,13 +45,12 @@ func TestMUOReconciler(t *testing.T) {
controllerManaged: "true",
controllerPullSpec: "wonderfulPullspec",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: false,
Pullspec: "wonderfulPullspec",
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
@@ -61,123 +60,104 @@ func TestMUOReconciler(t *testing.T) {
controllerEnabled: "true",
controllerManaged: "true",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "acrtest.example.com/managed-upgrade-operator:aro-b4",
EnableConnected: false,
Pullspec: "acrtest.example.com/managed-upgrade-operator:aro-b1",
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
name: "managed, OCM allowed but pull secret entirely missing",
flags: arov1alpha1.OperatorFlags{
controllerEnabled: "true",
controllerManaged: "true",
controllerForceLocalOnly: "false",
controllerPullSpec: "wonderfulPullspec",
controllerEnabled: "true",
controllerManaged: "true",
controllerAllowOCM: "true",
controllerPullSpec: "wonderfulPullspec",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: false,
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
name: "managed, OCM allowed but empty pullsecret",
flags: arov1alpha1.OperatorFlags{
controllerEnabled: "true",
controllerManaged: "true",
controllerForceLocalOnly: "false",
controllerPullSpec: "wonderfulPullspec",
controllerEnabled: "true",
controllerManaged: "true",
controllerAllowOCM: "true",
controllerPullSpec: "wonderfulPullspec",
},
pullsecret: "{\"auths\": {}}",
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: false,
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
name: "managed, OCM allowed but mangled pullsecret",
flags: arov1alpha1.OperatorFlags{
controllerEnabled: "true",
controllerManaged: "true",
controllerForceLocalOnly: "false",
controllerPullSpec: "wonderfulPullspec",
controllerEnabled: "true",
controllerManaged: "true",
controllerAllowOCM: "true",
controllerPullSpec: "wonderfulPullspec",
},
pullsecret: "i'm a little json, short and stout",
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: false,
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
name: "managed, OCM connected mode",
flags: arov1alpha1.OperatorFlags{
controllerEnabled: "true",
controllerManaged: "true",
controllerForceLocalOnly: "false",
controllerPullSpec: "wonderfulPullspec",
controllerEnabled: "true",
controllerManaged: "true",
controllerAllowOCM: "true",
controllerPullSpec: "wonderfulPullspec",
},
pullsecret: "{\"auths\": {\"" + pullSecretOCMKey + "\": {\"auth\": \"secret value\"}}}",
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: true,
OCMBaseURL: "https://api.openshift.com",
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
name: "managed, OCM connected mode, custom OCM URL",
flags: arov1alpha1.OperatorFlags{
controllerEnabled: "true",
controllerManaged: "true",
controllerForceLocalOnly: "false",
controllerOcmBaseURL: "https://example.com",
controllerPullSpec: "wonderfulPullspec",
controllerEnabled: "true",
controllerManaged: "true",
controllerAllowOCM: "true",
controllerOcmBaseURL: "https://example.com",
controllerPullSpec: "wonderfulPullspec",
},
pullsecret: "{\"auths\": {\"" + pullSecretOCMKey + "\": {\"auth\": \"secret value\"}}}",
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: true,
OCMBaseURL: "https://example.com",
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
},
},
{
name: "managed, pull secret exists, OCM disabled",
flags: arov1alpha1.OperatorFlags{
controllerEnabled: "true",
controllerManaged: "true",
controllerForceLocalOnly: "true",
controllerPullSpec: "wonderfulPullspec",
},
pullsecret: "{\"auths\": {\"" + pullSecretOCMKey + "\": {\"auth\": \"secret value\"}}}",
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: false,
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
},
},
{
@@ -187,15 +167,14 @@ func TestMUOReconciler(t *testing.T) {
controllerManaged: "true",
controllerPullSpec: "wonderfulPullspec",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
expectedConfig := &config.MUODeploymentConfig{
Pullspec: "wonderfulPullspec",
EnableConnected: false,
Pullspec: "wonderfulPullspec",
}
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(false, nil)
md.EXPECT().IsReady(gomock.Any()).Return(false, nil)
},
wantErr: "managed Upgrade Operator deployment timed out on Ready: timed out waiting for the condition",
wantErr: "Managed Upgrade Operator deployment timed out on Ready: timed out waiting for the condition",
},
{
name: "managed, CreateOrUpdate() fails",
@@ -204,7 +183,7 @@ func TestMUOReconciler(t *testing.T) {
controllerManaged: "true",
controllerPullSpec: "wonderfulPullspec",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, gomock.AssignableToTypeOf(&config.MUODeploymentConfig{})).Return(errors.New("failed ensure"))
},
wantErr: "failed ensure",
@@ -216,8 +195,8 @@ func TestMUOReconciler(t *testing.T) {
controllerManaged: "false",
controllerPullSpec: "wonderfulPullspec",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
md.EXPECT().Remove(gomock.Any(), gomock.Any()).Return(nil)
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
md.EXPECT().Remove(gomock.Any()).Return(nil)
},
},
{
@@ -227,8 +206,8 @@ func TestMUOReconciler(t *testing.T) {
controllerManaged: "false",
controllerPullSpec: "wonderfulPullspec",
},
mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
md.EXPECT().Remove(gomock.Any(), gomock.Any()).Return(errors.New("failed delete"))
mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
md.EXPECT().Remove(gomock.Any()).Return(errors.New("failed delete"))
},
wantErr: "failed delete",
},
@@ -257,7 +236,7 @@ func TestMUOReconciler(t *testing.T) {
}
arocli := arofake.NewSimpleClientset(cluster)
kubecli := fake.NewSimpleClientset()
deployer := mock_deployer.NewMockDeployer(controller)
deployer := mock_muo.NewMockDeployer(controller)
if tt.pullsecret != "" {
_, err := kubecli.CoreV1().Secrets(pullSecretName.Namespace).Create(context.Background(),
@@ -286,12 +265,12 @@ func TestMUOReconciler(t *testing.T) {
readinessPollTime: 1 * time.Second,
}
_, err := r.Reconcile(context.Background(), reconcile.Request{})
if err != nil && err.Error() != tt.wantErr {
t.Errorf("got error '%v', wanted error '%v'", err, tt.wantErr)
}
if err == nil && tt.wantErr != "" {
t.Errorf("did not get an error, but wanted error '%v'", tt.wantErr)
t.Error(err)
} else if err != nil {
if err.Error() != tt.wantErr {
t.Errorf("wanted '%v', got '%v'", tt.wantErr, err)
}
}
})
}


@@ -2,21 +2,19 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: managed-upgrade-operator-config
namespace: openshift-managed-upgrade-operator
namespace: openshift-managed-upgrade-operator
data:
config.yaml: |
configManager:
source: {{ if .EnableConnected }}OCM{{ else }}LOCAL{{ end }}
{{ if .EnableConnected }}ocmBaseUrl: {{.OCMBaseURL}}{{end}}
{{ if not .EnableConnected }}localConfigName: managed-upgrade-config{{end}}
watchInterval: {{ if .EnableConnected }}60{{ else }}15{{ end }}
source: LOCAL
localConfigName: managed-upgrade-config
watchInterval: 1
maintenance:
controlPlaneTime: 90
ignoredAlerts:
controlPlaneCriticals:
- ClusterOperatorDown
- ClusterOperatorDegraded
upgradeType: ARO
upgradeWindow:
delayTrigger: 30
timeOut: 120
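
The templated side of this hunk derives source, ocmBaseUrl, localConfigName, and watchInterval from a single EnableConnected flag. A rough sketch of how such a manifest renders with text/template; the cfg struct stands in for config.MUODeploymentConfig, and the output keeps the blank lines that suppressed {{ if }} actions leave behind:

package main

import (
	"os"
	"text/template"
)

// cfg is a stand-in for config.MUODeploymentConfig (assumed fields).
type cfg struct {
	EnableConnected bool
	OCMBaseURL      string
}

const tmpl = `configManager:
  source: {{ if .EnableConnected }}OCM{{ else }}LOCAL{{ end }}
  {{ if .EnableConnected }}ocmBaseUrl: {{ .OCMBaseURL }}{{ end }}
  {{ if not .EnableConnected }}localConfigName: managed-upgrade-config{{ end }}
  watchInterval: {{ if .EnableConnected }}60{{ else }}15{{ end }}
`

func main() {
	t := template.Must(template.New("cm").Parse(tmpl))
	// connected mode: source flips to OCM and ocmBaseUrl is filled in
	if err := t.Execute(os.Stdout, cfg{EnableConnected: true, OCMBaseURL: "https://api.openshift.com"}); err != nil {
		panic(err)
	}
}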


@@ -1,3 +1,4 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition


@@ -40,7 +40,7 @@ spec:
- name: managed-upgrade-operator
# Replace this with the built image name
# This will get replaced on deploy by /hack/generate-operator-bundle.py
image: "{{ .Pullspec }}"
image: GENERATED
command:
- managed-upgrade-operator
imagePullPolicy: Always


@@ -7,7 +7,7 @@ rules:
- apiGroups:
- ""
resources:
- configmaps
- configmaps
- serviceaccounts
- secrets
- services


@@ -78,21 +78,21 @@ func (n *nsgFlowLogsFeature) Enable(ctx context.Context, instance *aropreviewv1a
func (n *nsgFlowLogsFeature) newFlowLog(instance *aropreviewv1alpha1.PreviewFeature, nsgID string) *mgmtnetwork.FlowLog {
// build a request as described here https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-rest#enable-network-security-group-flow-logs
return &mgmtnetwork.FlowLog{
Location: &n.location,
Location: to.StringPtr(n.location),
FlowLogPropertiesFormat: &mgmtnetwork.FlowLogPropertiesFormat{
TargetResourceID: &nsgID,
TargetResourceID: to.StringPtr(nsgID),
Enabled: to.BoolPtr(true),
Format: &mgmtnetwork.FlowLogFormatParameters{
Type: mgmtnetwork.JSON,
Version: to.Int32Ptr(int32(instance.Spec.NSGFlowLogs.Version)),
},
RetentionPolicy: &mgmtnetwork.RetentionPolicyParameters{
Days: &instance.Spec.NSGFlowLogs.RetentionDays,
Days: to.Int32Ptr(instance.Spec.NSGFlowLogs.RetentionDays),
},
StorageID: &instance.Spec.NSGFlowLogs.StorageAccountResourceID,
StorageID: to.StringPtr(instance.Spec.NSGFlowLogs.StorageAccountResourceID),
FlowAnalyticsConfiguration: &mgmtnetwork.TrafficAnalyticsProperties{
NetworkWatcherFlowAnalyticsConfiguration: &mgmtnetwork.TrafficAnalyticsConfigurationProperties{
WorkspaceID: &instance.Spec.NSGFlowLogs.TrafficAnalyticsLogAnalyticsWorkspaceID,
WorkspaceID: to.StringPtr(instance.Spec.NSGFlowLogs.TrafficAnalyticsLogAnalyticsWorkspaceID),
TrafficAnalyticsInterval: to.Int32Ptr(int32(instance.Spec.NSGFlowLogs.TrafficAnalyticsInterval.Truncate(time.Minute).Minutes())),
},
},
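
The change above swaps direct field addresses such as &n.location for the autorest to helpers. to.StringPtr and friends copy the argument and return a pointer to the copy, so the SDK payload no longer aliases the receiver's fields; a tiny demonstration:

package main

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest/to"
)

func main() {
	loc := "eastus"
	p := to.StringPtr(loc) // pointer to a copy of the value
	loc = "westus"         // later mutation does not leak into the payload
	fmt.Println(*p)        // still "eastus"
}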


@@ -7,7 +7,7 @@ import (
"context"
"github.com/Azure/go-autorest/autorest/azure"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
"github.com/sirupsen/logrus"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
@@ -41,10 +41,10 @@ type Reconciler struct {
arocli aroclient.Interface
kubernetescli kubernetes.Interface
maocli machineclient.Interface
maocli maoclient.Interface
}
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli machineclient.Interface) *Reconciler {
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli maoclient.Interface) *Reconciler {
return &Reconciler{
log: log,
arocli: arocli,

File diff suppressed because one or more lines are too long


@@ -7,9 +7,9 @@ import (
"context"
"github.com/Azure/go-autorest/autorest/azure"
machinev1beta1 "github.com/openshift/api/machine/v1beta1"
imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
"github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -42,7 +42,7 @@ type Reconciler struct {
arocli aroclient.Interface
kubernetescli kubernetes.Interface
maocli machineclient.Interface
maocli maoclient.Interface
imageregistrycli imageregistryclient.Interface
}
@@ -59,7 +59,7 @@ type reconcileManager struct {
}
// NewReconciler creates a new Reconciler
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, maocli machineclient.Interface, kubernetescli kubernetes.Interface, imageregistrycli imageregistryclient.Interface) *Reconciler {
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, maocli maoclient.Interface, kubernetescli kubernetes.Interface, imageregistrycli imageregistryclient.Interface) *Reconciler {
return &Reconciler{
log: log,
arocli: arocli,


@@ -101,6 +101,7 @@ func TestReconcileManager(t *testing.T) {
result := getValidAccount([]string{resourceIdMaster, resourceIdWorker})
storage.EXPECT().GetProperties(gomock.Any(), clusterResourceGroupName, clusterStorageAccountName, gomock.Any()).Return(*result, nil)
storage.EXPECT().GetProperties(gomock.Any(), clusterResourceGroupName, registryStorageAccountName, gomock.Any()).Return(*result, nil)
},
imageregistrycli: imageregistryfake.NewSimpleClientset(
&imageregistryv1.Config{
@@ -137,6 +138,7 @@ func TestReconcileManager(t *testing.T) {
result := getValidAccount([]string{resourceIdMaster, resourceIdWorker})
storage.EXPECT().GetProperties(gomock.Any(), clusterResourceGroupName, clusterStorageAccountName, gomock.Any()).Return(*result, nil)
storage.EXPECT().GetProperties(gomock.Any(), clusterResourceGroupName, registryStorageAccountName, gomock.Any()).Return(*result, nil)
},
imageregistrycli: imageregistryfake.NewSimpleClientset(
&imageregistryv1.Config{
@@ -157,6 +159,7 @@ func TestReconcileManager(t *testing.T) {
name: "Operator Flag enabled - all rules to all accounts",
operatorFlag: true,
mocks: func(storage *mock_storage.MockAccountsClient, kubeSubnet *mock_subnet.MockKubeManager) {
// cluster subnets
kubeSubnet.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{


@@ -137,6 +137,7 @@ func TestReconcileManager(t *testing.T) {
operatorFlagNSG: true,
operatorFlagServiceEndpoint: true,
subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{
ResourceID: subnetResourceIdMaster,
@@ -171,6 +172,7 @@ func TestReconcileManager(t *testing.T) {
operatorFlagNSG: true,
operatorFlagServiceEndpoint: true,
subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{
ResourceID: subnetResourceIdMaster,
@@ -200,6 +202,7 @@ func TestReconcileManager(t *testing.T) {
operatorFlagNSG: true,
operatorFlagServiceEndpoint: true,
subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{
ResourceID: subnetResourceIdMaster,
@@ -229,6 +232,7 @@ func TestReconcileManager(t *testing.T) {
operatorFlagNSG: true,
operatorFlagServiceEndpoint: true,
subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{
ResourceID: subnetResourceIdMaster,
@@ -266,6 +270,7 @@ func TestReconcileManager(t *testing.T) {
operatorFlagNSG: true,
operatorFlagServiceEndpoint: true,
subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{
ResourceID: subnetResourceIdWorker,
@@ -311,43 +316,6 @@ func TestReconcileManager(t *testing.T) {
instace.Spec.ArchitectureVersion = int(api.ArchitectureVersionV2)
},
},
{
name: "Architecture V2 - empty NSG",
operatorFlagEnabled: true,
operatorFlagNSG: true,
operatorFlagServiceEndpoint: true,
subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
{
ResourceID: subnetResourceIdMaster,
IsMaster: true,
},
{
ResourceID: subnetResourceIdWorker,
IsMaster: false,
},
}, nil)
subnetObjectMaster := getValidSubnet()
subnetObjectMaster.NetworkSecurityGroup = nil
mock.EXPECT().Get(gomock.Any(), subnetResourceIdMaster).Return(subnetObjectMaster, nil).MaxTimes(2)
subnetObjectMasterUpdate := getValidSubnet()
subnetObjectMasterUpdate.NetworkSecurityGroup.ID = to.StringPtr(nsgv2ResourceId)
mock.EXPECT().CreateOrUpdate(gomock.Any(), subnetResourceIdMaster, subnetObjectMasterUpdate).Return(nil)
subnetObjectWorker := getValidSubnet()
subnetObjectWorker.NetworkSecurityGroup = nil
mock.EXPECT().Get(gomock.Any(), subnetResourceIdWorker).Return(subnetObjectWorker, nil).MaxTimes(2)
subnetObjectWorkerUpdate := getValidSubnet()
subnetObjectWorkerUpdate.NetworkSecurityGroup.ID = to.StringPtr(nsgv2ResourceId)
mock.EXPECT().CreateOrUpdate(gomock.Any(), subnetResourceIdWorker, subnetObjectWorkerUpdate).Return(nil)
},
instance: func(instace *arov1alpha1.Cluster) {
instace.Spec.ArchitectureVersion = int(api.ArchitectureVersionV2)
},
},
} {
t.Run(tt.name, func(t *testing.T) {
controller := gomock.NewController(t)


@@ -55,6 +55,7 @@ func (r *reconcileManager) ensureSubnetServiceEndpoints(ctx context.Context, s s
if err != nil {
return err
}
}
return nil
}


@ -9,8 +9,8 @@ import (
"strings"
"github.com/Azure/go-autorest/autorest/azure"
machinev1beta1 "github.com/openshift/api/machine/v1beta1"
machineclient "github.com/openshift/client-go/machine/clientset/versioned"
machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
"github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -44,7 +44,7 @@ type Reconciler struct {
arocli aroclient.Interface
kubernetescli kubernetes.Interface
maocli machineclient.Interface
maocli maoclient.Interface
}
// reconcileManager is an instance of the manager instantiated per request
@ -59,7 +59,7 @@ type reconcileManager struct {
}
// NewReconciler creates a new Reconciler
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli machineclient.Interface) *Reconciler {
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli maoclient.Interface) *Reconciler {
return &Reconciler{
log: log,
arocli: arocli,
@ -114,6 +114,7 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
}
func (r *reconcileManager) reconcileSubnets(ctx context.Context, instance *arov1alpha1.Cluster) error {
subnets, err := r.kubeSubnets.List(ctx)
if err != nil {
return err
@ -124,6 +125,7 @@ func (r *reconcileManager) reconcileSubnets(ctx context.Context, instance *arov1
// This potentially calls an update twice for the same loop, but this is the price
// to pay for keeping logic split, separate, and simple
for _, s := range subnets {
if instance.Spec.OperatorFlags.GetSimpleBoolean(controllerNSGManaged) {
err = r.ensureSubnetNSG(ctx, s)
if err != nil {

File diffs are hidden because one or more lines are too long


@ -4,16 +4,13 @@ package deploy
// Licensed under the Apache License 2.0.
import (
"bytes"
"context"
"embed"
"errors"
"fmt"
"strings"
"text/template"
"time"
"github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
extensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
@ -24,8 +21,6 @@ import (
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
appsv1client "k8s.io/client-go/kubernetes/typed/apps/v1"
corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
@ -41,16 +36,12 @@ import (
"github.com/Azure/ARO-RP/pkg/util/restconfig"
"github.com/Azure/ARO-RP/pkg/util/subnet"
utiltls "github.com/Azure/ARO-RP/pkg/util/tls"
"github.com/Azure/ARO-RP/pkg/util/version"
)
//go:embed staticresources
var embeddedFiles embed.FS
type Operator interface {
CreateOrUpdate(context.Context) error
IsReady(context.Context) (bool, error)
IsRunningDesiredVersion(context.Context) (bool, error)
RenewMDSDCertificate(context.Context) error
}
type operator struct {
@ -86,90 +77,40 @@ func New(log *logrus.Entry, env env.Interface, oc *api.OpenShiftCluster, arocli
}, nil
}
type deploymentData struct {
Image string
Version string
IsLocalDevelopment bool
}
func templateManifests(data deploymentData) ([][]byte, error) {
templatesRoot, err := template.ParseFS(embeddedFiles, "staticresources/*.yaml")
if err != nil {
return nil, err
}
templatesMaster, err := template.ParseFS(embeddedFiles, "staticresources/master/*")
if err != nil {
return nil, err
}
templatesWorker, err := template.ParseFS(embeddedFiles, "staticresources/worker/*")
if err != nil {
return nil, err
}
templatedFiles := make([][]byte, 0)
templatesArray := []*template.Template{templatesMaster, templatesRoot, templatesWorker}
for _, templates := range templatesArray {
for _, templ := range templates.Templates() {
buff := &bytes.Buffer{}
if err := templ.Execute(buff, data); err != nil {
return nil, err
}
templatedFiles = append(templatedFiles, buff.Bytes())
}
}
return templatedFiles, nil
}
func (o *operator) createDeploymentData() deploymentData {
image := o.env.AROOperatorImage()
// HACK: Override for ARO_IMAGE env variable setup in local-dev mode
version := "latest"
if strings.Contains(image, ":") {
str := strings.Split(image, ":")
version = str[len(str)-1]
}
// Set version correctly if it's overridden
if o.oc.Properties.OperatorVersion != "" {
version = o.oc.Properties.OperatorVersion
image = fmt.Sprintf("%s/aro:%s", o.env.ACRDomain(), version)
}
return deploymentData{
IsLocalDevelopment: o.env.IsLocalDevelopmentMode(),
Image: image,
Version: version,
}
}
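// Illustrative walk-through (hypothetical values): with AROOperatorImage()
// returning "arosvc.azurecr.io/aro:abc123" and no OperatorVersion override,
// createDeploymentData yields {Image: "arosvc.azurecr.io/aro:abc123", Version: "abc123"};
// with OperatorVersion set to "v20220101.00" it instead yields
// Image "<ACRDomain>/aro:v20220101.00" and Version "v20220101.00".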
func (o *operator) createObjects() ([]kruntime.Object, error) {
deploymentData := o.createDeploymentData()
templated, err := templateManifests(deploymentData)
if err != nil {
return nil, err
}
objects := make([]kruntime.Object, 0, len(templated))
for _, v := range templated {
obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(v, nil, nil)
func (o *operator) resources() ([]kruntime.Object, error) {
// first static resources from Assets
results := []kruntime.Object{}
for _, assetName := range AssetNames() {
b, err := Asset(assetName)
if err != nil {
return nil, err
}
objects = append(objects, obj)
obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(b, nil, nil)
if err != nil {
return nil, err
}
// set the image for the deployments
if d, ok := obj.(*appsv1.Deployment); ok {
if d.Labels == nil {
d.Labels = map[string]string{}
}
d.Labels["version"] = version.GitCommit
for i := range d.Spec.Template.Spec.Containers {
d.Spec.Template.Spec.Containers[i].Image = o.env.AROOperatorImage()
if o.env.IsLocalDevelopmentMode() {
d.Spec.Template.Spec.Containers[i].Env = append(d.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{
Name: "RP_MODE",
Value: "development",
})
}
}
}
results = append(results, obj)
}
return objects, nil
}
func (o *operator) resources() ([]kruntime.Object, error) {
// first static resources from Assets
results, err := o.createObjects()
if err != nil {
return nil, err
}
// then dynamic resources
key, cert := o.env.ClusterGenevaLoggingSecret()
gcsKeyBytes, err := utiltls.PrivateKeyAsBytes(key)
@ -197,11 +138,6 @@ func (o *operator) resources() ([]kruntime.Object, error) {
domain += "." + o.env.Domain()
}
ingressIP, err := checkIngressIP(o.oc.Properties.IngressProfiles)
if err != nil {
return nil, err
}
serviceSubnets := []string{
"/subscriptions/" + o.env.SubscriptionID() + "/resourceGroups/" + o.env.ResourceGroup() + "/providers/Microsoft.Network/virtualNetworks/rp-pe-vnet-001/subnets/rp-pe-subnet",
"/subscriptions/" + o.env.SubscriptionID() + "/resourceGroups/" + o.env.ResourceGroup() + "/providers/Microsoft.Network/virtualNetworks/rp-vnet/subnets/rp-subnet",
@ -244,7 +180,7 @@ func (o *operator) resources() ([]kruntime.Object, error) {
},
APIIntIP: o.oc.Properties.APIServerProfile.IntIP,
IngressIP: ingressIP,
IngressIP: o.oc.Properties.IngressProfiles[0].IP,
GatewayPrivateEndpointIP: o.oc.Properties.NetworkProfile.GatewayPrivateEndpointIP,
// Update the OperatorFlags from the version in the RP
OperatorFlags: arov1alpha1.OperatorFlags(o.oc.Properties.OperatorFlags),
@ -365,31 +301,6 @@ func (o *operator) CreateOrUpdate(ctx context.Context) error {
return nil
}
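// RenewMDSDCertificate re-reads the Geneva logging key/cert pair from the RP
// environment and rewrites the gcskey.pem/gcscert.pem entries of the operator
// secret, so the in-cluster MDSD container can pick up the rotated certificate.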
func (o *operator) RenewMDSDCertificate(ctx context.Context) error {
key, cert := o.env.ClusterGenevaLoggingSecret()
gcsKeyBytes, err := utiltls.PrivateKeyAsBytes(key)
if err != nil {
return err
}
gcsCertBytes, err := utiltls.CertAsBytes(cert)
if err != nil {
return err
}
s, err := o.kubernetescli.CoreV1().Secrets(pkgoperator.Namespace).Get(ctx, pkgoperator.SecretName, metav1.GetOptions{})
if err != nil {
return err
}
s.Data["gcscert.pem"] = gcsCertBytes
s.Data["gcskey.pem"] = gcsKeyBytes
_, err = o.kubernetescli.CoreV1().Secrets(pkgoperator.Namespace).Update(ctx, s, metav1.UpdateOptions{})
if err != nil {
return err
}
return nil
}
func (o *operator) IsReady(ctx context.Context) (bool, error) {
ok, err := ready.CheckDeploymentIsReady(ctx, o.kubernetescli.AppsV1().Deployments(pkgoperator.Namespace), "aro-operator-master")()
if !ok || err != nil {
@ -403,89 +314,6 @@ func (o *operator) IsReady(ctx context.Context) (bool, error) {
return true, nil
}
func checkOperatorDeploymentVersion(ctx context.Context, cli appsv1client.DeploymentInterface, name string, desiredVersion string) (bool, error) {
d, err := cli.Get(ctx, name, metav1.GetOptions{})
switch {
case kerrors.IsNotFound(err):
return false, nil
case err != nil:
return false, err
}
if d.Labels["version"] != desiredVersion {
return false, nil
}
return true, nil
}
func checkPodImageVersion(ctx context.Context, cli corev1client.PodInterface, role string, desiredVersion string) (bool, error) {
podList, err := cli.List(ctx, metav1.ListOptions{LabelSelector: "app=" + role})
switch {
case kerrors.IsNotFound(err):
return false, nil
case err != nil:
return false, err
}
imageTag := "latest"
for _, pod := range podList.Items {
if strings.Contains(pod.Spec.Containers[0].Image, ":") {
str := strings.Split(pod.Spec.Containers[0].Image, ":")
imageTag = str[len(str)-1]
}
}
if imageTag != desiredVersion {
return false, nil
}
return true, nil
}
func (o *operator) IsRunningDesiredVersion(ctx context.Context) (bool, error) {
// Get the desired Version
image := o.env.AROOperatorImage()
desiredVersion := "latest"
if strings.Contains(image, ":") {
str := strings.Split(image, ":")
desiredVersion = str[len(str)-1]
}
if o.oc.Properties.OperatorVersion != "" {
desiredVersion = o.oc.Properties.OperatorVersion
}
// Check if aro-operator-master is running desired version
ok, err := checkOperatorDeploymentVersion(ctx, o.kubernetescli.AppsV1().Deployments(pkgoperator.Namespace), "aro-operator-master", desiredVersion)
if !ok || err != nil {
return ok, err
}
ok, err = checkPodImageVersion(ctx, o.kubernetescli.CoreV1().Pods(pkgoperator.Namespace), "aro-operator-master", desiredVersion)
if !ok || err != nil {
return ok, err
}
// Check if aro-operator-worker is running desired version
ok, err = checkOperatorDeploymentVersion(ctx, o.kubernetescli.AppsV1().Deployments(pkgoperator.Namespace), "aro-operator-worker", desiredVersion)
if !ok || err != nil {
return ok, err
}
ok, err = checkPodImageVersion(ctx, o.kubernetescli.CoreV1().Pods(pkgoperator.Namespace), "aro-operator-worker", desiredVersion)
if !ok || err != nil {
return ok, err
}
return true, nil
}
func checkIngressIP(ingressProfiles []api.IngressProfile) (string, error) {
if ingressProfiles == nil || len(ingressProfiles) < 1 {
return "", errors.New("no Ingress Profiles found")
}
ingressIP := ingressProfiles[0].IP
if len(ingressProfiles) > 1 {
for _, p := range ingressProfiles {
if p.Name == "default" {
return p.IP, nil
}
}
}
return ingressIP, nil
}
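// Illustrative: given profiles [{Name: "custom", IP: "10.0.0.5"}, {Name: "default", IP: "10.0.0.7"}],
// checkIngressIP returns "10.0.0.7": a profile named "default" wins whenever more
// than one profile exists; with a single profile its IP is returned regardless of name.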
func isCRDEstablished(crd *extensionsv1.CustomResourceDefinition) bool {
m := make(map[extensionsv1.CustomResourceDefinitionConditionType]extensionsv1.ConditionStatus, len(crd.Status.Conditions))
for _, cond := range crd.Status.Conditions {


@ -4,7 +4,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.0
controller-gen.kubebuilder.io/version: v0.6.3-0.20210916130746-94401651a6c3
creationTimestamp: null
name: clusters.aro.openshift.io
spec:


@ -4,7 +4,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.9.0
controller-gen.kubebuilder.io/version: v0.6.3-0.20210916130746-94401651a6c3
creationTimestamp: null
name: previewfeatures.preview.aro.openshift.io
spec:


@ -0,0 +1,97 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: github.com/Azure/ARO-RP/pkg/operator/controllers/muo (interfaces: Deployer)
// Package mock_muo is a generated GoMock package.
package mock_muo
import (
context "context"
reflect "reflect"
gomock "github.com/golang/mock/gomock"
runtime "k8s.io/apimachinery/pkg/runtime"
v1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
config "github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
)
// MockDeployer is a mock of Deployer interface.
type MockDeployer struct {
ctrl *gomock.Controller
recorder *MockDeployerMockRecorder
}
// MockDeployerMockRecorder is the mock recorder for MockDeployer.
type MockDeployerMockRecorder struct {
mock *MockDeployer
}
// NewMockDeployer creates a new mock instance.
func NewMockDeployer(ctrl *gomock.Controller) *MockDeployer {
mock := &MockDeployer{ctrl: ctrl}
mock.recorder = &MockDeployerMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockDeployer) EXPECT() *MockDeployerMockRecorder {
return m.recorder
}
// CreateOrUpdate mocks base method.
func (m *MockDeployer) CreateOrUpdate(arg0 context.Context, arg1 *v1alpha1.Cluster, arg2 *config.MUODeploymentConfig) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "CreateOrUpdate", arg0, arg1, arg2)
ret0, _ := ret[0].(error)
return ret0
}
// CreateOrUpdate indicates an expected call of CreateOrUpdate.
func (mr *MockDeployerMockRecorder) CreateOrUpdate(arg0, arg1, arg2 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateOrUpdate", reflect.TypeOf((*MockDeployer)(nil).CreateOrUpdate), arg0, arg1, arg2)
}
// IsReady mocks base method.
func (m *MockDeployer) IsReady(arg0 context.Context) (bool, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsReady", arg0)
ret0, _ := ret[0].(bool)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// IsReady indicates an expected call of IsReady.
func (mr *MockDeployerMockRecorder) IsReady(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsReady", reflect.TypeOf((*MockDeployer)(nil).IsReady), arg0)
}
// Remove mocks base method.
func (m *MockDeployer) Remove(arg0 context.Context) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Remove", arg0)
ret0, _ := ret[0].(error)
return ret0
}
// Remove indicates an expected call of Remove.
func (mr *MockDeployerMockRecorder) Remove(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Remove", reflect.TypeOf((*MockDeployer)(nil).Remove), arg0)
}
// Resources mocks base method.
func (m *MockDeployer) Resources(arg0 *config.MUODeploymentConfig) ([]runtime.Object, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "Resources", arg0)
ret0, _ := ret[0].([]runtime.Object)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// Resources indicates an expected call of Resources.
func (mr *MockDeployerMockRecorder) Resources(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Resources", reflect.TypeOf((*MockDeployer)(nil).Resources), arg0)
}
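// Hypothetical usage sketch (not part of the generated file), following the
// standard gomock flow; the test wiring below is illustrative:
//
//	controller := gomock.NewController(t)
//	defer controller.Finish()
//	deployer := mock_muo.NewMockDeployer(controller)
//	deployer.EXPECT().IsReady(gomock.Any()).Return(true, nil)
//	ready, err := deployer.IsReady(ctx) // ready == true, err == nil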


@ -171,6 +171,7 @@ func TestFilterVmSizes(t *testing.T) {
},
} {
t.Run(tt.name, func(t *testing.T) {
sku := []mgmtcompute.ResourceSku{
{
Name: to.StringPtr("Fake_Sku"),

File diffs are hidden because one or more lines are too long


@ -1 +1 @@
4.10.20
4.9.9


@ -4,92 +4,19 @@ package dynamichelper
// Licensed under the Apache License 2.0.
import (
"context"
"net/http"
"reflect"
"testing"
"time"
"github.com/Azure/go-autorest/autorest/to"
mcv1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
extensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
extensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
kruntime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/cli-runtime/pkg/resource"
"k8s.io/client-go/rest/fake"
arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
"github.com/Azure/ARO-RP/pkg/util/cmp"
)
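// mockGVRResolver is a stub GVRResolver: Resolve always returns a fixed
// metal3.io/v1alpha1 "configmap" GVR, which is all the tests below require.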
type mockGVRResolver struct{}
func (gvr mockGVRResolver) Refresh() error {
return nil
}
func (gvr mockGVRResolver) Resolve(groupKind, optionalVersion string) (*schema.GroupVersionResource, error) {
return &schema.GroupVersionResource{Group: "metal3.io", Version: "v1alpha1", Resource: "configmap"}, nil
}
func TestEnsureDeleted(t *testing.T) {
ctx := context.Background()
mockGVRResolver := mockGVRResolver{}
mockRestCLI := &fake.RESTClient{
GroupVersion: schema.GroupVersion{Group: "testgroup", Version: "v1"},
NegotiatedSerializer: resource.UnstructuredPlusDefaultContentConfig().NegotiatedSerializer,
Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) {
switch req.Method {
case "DELETE":
switch req.URL.Path {
case "/apis/metal3.io/v1alpha1/namespaces/test-ns-1/configmap/test-name-1":
return &http.Response{StatusCode: http.StatusNotFound}, nil
case "/apis/metal3.io/v1alpha1/namespaces/test-ns-2/configmap/test-name-2":
return &http.Response{StatusCode: http.StatusInternalServerError}, nil
case "/apis/metal3.io/v1alpha1/namespaces/test-ns-3/configmap/test-name-3":
return &http.Response{StatusCode: http.StatusOK}, nil
default:
t.Fatalf("unexpected path: %#v\n%#v", req.URL, req)
return nil, nil
}
default:
t.Fatalf("unexpected request: %s %#v\n%#v", req.Method, req.URL, req)
return nil, nil
}
}),
}
dh := &dynamicHelper{
GVRResolver: mockGVRResolver,
restcli: mockRestCLI,
}
err := dh.EnsureDeleted(ctx, "configmap", "test-ns-1", "test-name-1")
if err != nil {
t.Errorf("expected no error when delete returns 404 Not Found, but got: %v", err)
}
err = dh.EnsureDeleted(ctx, "configmap", "test-ns-2", "test-name-2")
if err == nil {
t.Errorf("expected an error for a non-404 failure response, but got nil")
}
err = dh.EnsureDeleted(ctx, "configmap", "test-ns-3", "test-name-3")
if err != nil {
t.Errorf("expected no error for a success response, but got: %v", err)
}
}
func TestMerge(t *testing.T) {
serviceInternalTrafficPolicy := corev1.ServiceInternalTrafficPolicyCluster
for _, tt := range []struct {
name string
old kruntime.Object
@ -322,217 +249,6 @@ func TestMerge(t *testing.T) {
},
wantEmptyDiff: true,
},
{
name: "DaemonSet changes",
old: &appsv1.DaemonSet{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
"deprecated.daemonset.template.generation": "1",
},
},
Status: appsv1.DaemonSetStatus{
CurrentNumberScheduled: 5,
NumberReady: 5,
ObservedGeneration: 1,
},
},
new: &appsv1.DaemonSet{},
want: &appsv1.DaemonSet{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
"deprecated.daemonset.template.generation": "1",
},
},
Status: appsv1.DaemonSetStatus{
CurrentNumberScheduled: 5,
NumberReady: 5,
ObservedGeneration: 1,
},
Spec: appsv1.DaemonSetSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: "Always",
TerminationGracePeriodSeconds: to.Int64Ptr(corev1.DefaultTerminationGracePeriodSeconds),
DNSPolicy: "ClusterFirst",
SecurityContext: &corev1.PodSecurityContext{},
SchedulerName: "default-scheduler",
},
},
UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
Type: appsv1.RollingUpdateDaemonSetStrategyType,
RollingUpdate: &appsv1.RollingUpdateDaemonSet{
MaxUnavailable: &intstr.IntOrString{IntVal: 1},
MaxSurge: &intstr.IntOrString{IntVal: 0},
},
},
RevisionHistoryLimit: to.Int32Ptr(10),
},
},
wantChanged: true,
},
{
name: "Deployment changes",
old: &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
"deployment.kubernetes.io/revision": "2",
},
},
Spec: appsv1.DeploymentSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
DeprecatedServiceAccount: "openshift-apiserver-sa",
},
},
},
Status: appsv1.DeploymentStatus{
AvailableReplicas: 3,
ReadyReplicas: 3,
Replicas: 3,
UpdatedReplicas: 3,
},
},
new: &appsv1.Deployment{},
want: &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Annotations: map[string]string{
"deployment.kubernetes.io/revision": "2",
},
},
Status: appsv1.DeploymentStatus{
AvailableReplicas: 3,
ReadyReplicas: 3,
Replicas: 3,
UpdatedReplicas: 3,
},
Spec: appsv1.DeploymentSpec{
Replicas: to.Int32Ptr(1),
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: "Always",
TerminationGracePeriodSeconds: to.Int64Ptr(corev1.DefaultTerminationGracePeriodSeconds),
DNSPolicy: "ClusterFirst",
SecurityContext: &corev1.PodSecurityContext{},
SchedulerName: "default-scheduler",
DeprecatedServiceAccount: "openshift-apiserver-sa",
},
},
Strategy: appsv1.DeploymentStrategy{
Type: appsv1.RollingUpdateDeploymentStrategyType,
RollingUpdate: &appsv1.RollingUpdateDeployment{
MaxUnavailable: &intstr.IntOrString{
Type: 1,
StrVal: "25%",
},
MaxSurge: &intstr.IntOrString{
Type: 1,
StrVal: "25%",
},
},
},
RevisionHistoryLimit: to.Int32Ptr(10),
ProgressDeadlineSeconds: to.Int32Ptr(600),
},
},
wantChanged: true,
},
{
name: "KubeletConfig no changes",
old: &mcv1.KubeletConfig{
Status: mcv1.KubeletConfigStatus{
Conditions: []mcv1.KubeletConfigCondition{
{
Message: "Success",
Status: "True",
Type: "Success",
},
},
},
},
new: &mcv1.KubeletConfig{},
want: &mcv1.KubeletConfig{
Status: mcv1.KubeletConfigStatus{
Conditions: []mcv1.KubeletConfigCondition{
{
Message: "Success",
Status: "True",
Type: "Success",
},
},
},
},
wantEmptyDiff: true,
},
{
name: "Cluster no changes",
old: &arov1alpha1.Cluster{
Status: arov1alpha1.ClusterStatus{
OperatorVersion: "8b66c40",
},
},
new: &arov1alpha1.Cluster{},
want: &arov1alpha1.Cluster{
Status: arov1alpha1.ClusterStatus{
OperatorVersion: "8b66c40",
},
},
wantEmptyDiff: true,
},
{
name: "CustomResourceDefinition Betav1 no changes",
old: &extensionsv1beta1.CustomResourceDefinition{
Status: extensionsv1beta1.CustomResourceDefinitionStatus{
Conditions: []extensionsv1beta1.CustomResourceDefinitionCondition{
{
Message: "no conflicts found",
Reason: "NoConflicts",
},
},
},
},
new: &extensionsv1beta1.CustomResourceDefinition{},
want: &extensionsv1beta1.CustomResourceDefinition{
Status: extensionsv1beta1.CustomResourceDefinitionStatus{
Conditions: []extensionsv1beta1.CustomResourceDefinitionCondition{
{
Message: "no conflicts found",
Reason: "NoConflicts",
},
},
},
},
wantEmptyDiff: true,
},
{
name: "CustomResourceDefinition changes",
old: &extensionsv1.CustomResourceDefinition{
Status: extensionsv1.CustomResourceDefinitionStatus{
Conditions: []extensionsv1.CustomResourceDefinitionCondition{
{
Message: "no conflicts found",
Reason: "NoConflicts",
},
},
},
},
new: &extensionsv1.CustomResourceDefinition{},
want: &extensionsv1.CustomResourceDefinition{
Spec: extensionsv1.CustomResourceDefinitionSpec{
Conversion: &extensionsv1.CustomResourceConversion{
Strategy: "None",
},
},
Status: extensionsv1.CustomResourceDefinitionStatus{
Conditions: []extensionsv1.CustomResourceDefinitionCondition{
{
Message: "no conflicts found",
Reason: "NoConflicts",
},
},
},
},
wantChanged: true,
},
{
name: "Secret changes, not logged",
old: &corev1.Secret{
@ -572,76 +288,3 @@ func TestMerge(t *testing.T) {
})
}
}
func TestMakeURLSegments(t *testing.T) {
for _, tt := range []struct {
gvr *schema.GroupVersionResource
namespace string
uname, name string
url []string
want []string
}{
{
uname: "Group is empty",
gvr: &schema.GroupVersionResource{
Group: "",
Version: "4.10",
Resource: "test-resource",
},
namespace: "openshift",
name: "test-name-1",
want: []string{"api", "4.10", "namespaces", "openshift", "test-resource", "test-name-1"},
},
{
uname: "Group is not empty",
gvr: &schema.GroupVersionResource{
Group: "test-group",
Version: "4.10",
Resource: "test-resource",
},
namespace: "openshift-apiserver",
name: "test-name-2",
want: []string{"apis", "test-group", "4.10", "namespaces", "openshift-apiserver", "test-resource", "test-name-2"},
},
{
uname: "Namespace is empty",
gvr: &schema.GroupVersionResource{
Group: "test-group",
Version: "4.10",
Resource: "test-resource",
},
namespace: "",
name: "test-name-3",
want: []string{"apis", "test-group", "4.10", "test-resource", "test-name-3"},
},
{
uname: "Namespace is not empty",
gvr: &schema.GroupVersionResource{
Group: "test-group",
Version: "4.10",
Resource: "test-resource",
},
namespace: "openshift-sdn",
name: "test-name-3",
want: []string{"apis", "test-group", "4.10", "namespaces", "openshift-sdn", "test-resource", "test-name-3"},
},
{
uname: "Name is empty",
gvr: &schema.GroupVersionResource{
Group: "test-group",
Version: "4.10",
Resource: "test-resource",
},
namespace: "openshift-ns",
name: "",
want: []string{"apis", "test-group", "4.10", "namespaces", "openshift-ns", "test-resource"},
},
} {
t.Run(tt.uname, func(t *testing.T) {
got := makeURLSegments(tt.gvr, tt.namespace, tt.name)
if !reflect.DeepEqual(got, tt.want) {
t.Error(cmp.Diff(got, tt.want))
}
})
}
}
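// Illustrative: joining the returned segments with "/" gives the REST path, e.g.
// the "Namespace is not empty" case above corresponds to
// "apis/test-group/4.10/namespaces/openshift-sdn/test-resource/test-name-3".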


@ -4,6 +4,8 @@ package version
// Licensed under the Apache License 2.0.
import (
"os"
"github.com/Azure/ARO-RP/pkg/api"
)
@ -27,18 +29,14 @@ var GitCommit = "unknown"
// InstallStream describes stream we are defaulting to for all new clusters
var InstallStream = &Stream{
Version: NewVersion(4, 10, 20),
PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:b89ada9261a1b257012469e90d7d4839d0d2f99654f5ce76394fa3f06522b600",
Version: NewVersion(4, 9, 9),
PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:dc6d4d8b2f9264c0037ed0222285f19512f112cc85a355b14a66bd6b910a4940",
}
// UpgradeStreams describes list of streams we support for upgrades
var (
UpgradeStreams = []*Stream{
InstallStream,
{
Version: NewVersion(4, 9, 28),
PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:4084d94969b186e20189649b5affba7da59f7d1943e4e5bc7ef78b981eafb7a8",
},
{
Version: NewVersion(4, 8, 18),
PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:321aae3d3748c589bc2011062cee9fd14e106f258807dc2d84ced3f7461160ea",
@ -68,18 +66,26 @@ func FluentbitImage(acrDomain string) string {
}
// MdmImage contains the location of the MDM container image
// https://eng.ms/docs/products/geneva/collect/references/linuxcontainers
func MdmImage(acrDomain string) string {
return acrDomain + "/genevamdm:master_20220711.1"
// for the latest version see https://genevamondocs.azurewebsites.net/collect/references/linuxcontainers.html?q=container
if os.Getenv("GENEVA_MDM_IMAGE_OVERRIDE") != "" {
return os.Getenv("GENEVA_MDM_IMAGE_OVERRIDE")
}
return acrDomain + "/genevamdm:master_20220111.2"
}
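// Assumed local-dev workflow (not spelled out in this diff): exporting
// GENEVA_MDM_IMAGE_OVERRIDE=<registry>/genevamdm:<tag> before starting the RP makes
// MdmImage return that image instead of the ACR default; MdsdImage below honours
// GENEVA_MDSD_IMAGE_OVERRIDE in the same way.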
// MdsdImage contains the location of the MDSD container image
// see https://eng.ms/docs/products/geneva/collect/references/linuxcontainers
func MdsdImage(acrDomain string) string {
return acrDomain + "/genevamdsd:master_20220713.1"
// for the latest version see https://genevamondocs.azurewebsites.net/collect/references/linuxcontainers.html?q=container
if os.Getenv("GENEVA_MDSD_IMAGE_OVERRIDE") != "" {
return os.Getenv("GENEVA_MDSD_IMAGE_OVERRIDE")
}
return acrDomain + "/genevamdsd:master_20211223.1"
}
// MUOImage contains the location of the Managed Upgrade Operator container image
func MUOImage(acrDomain string) string {
return acrDomain + "/managed-upgrade-operator:aro-b4"
return acrDomain + "/managed-upgrade-operator:aro-b1"
}

portal/package-lock.json (generated file, 6250 lines)

Diff not shown because of its large size.


@ -1,68 +0,0 @@
/*!
* Bootstrap dropdown.js v4.6.1 (https://getbootstrap.com/)
* Copyright 2011-2021 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
/*!
* Bootstrap util.js v4.6.1 (https://getbootstrap.com/)
* Copyright 2011-2021 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
/*!
* Bootstrap-select v1.13.18 (https://developer.snapappointments.com/bootstrap-select)
*
* Copyright 2012-2020 SnapAppointments, LLC
* Licensed under MIT (https://github.com/snapappointments/bootstrap-select/blob/master/LICENSE)
*/
/*!
* Sizzle CSS Selector Engine v2.3.6
* https://sizzlejs.com/
*
* Copyright JS Foundation and other contributors
* Released under the MIT license
* https://js.foundation/
*
* Date: 2021-02-16
*/
/*!
* jQuery JavaScript Library v3.6.0
* https://jquery.com/
*
* Includes Sizzle.js
* https://sizzlejs.com/
*
* Copyright OpenJS Foundation and other contributors
* Released under the MIT license
* https://jquery.org/license
*
* Date: 2021-03-02T17:08Z
*/
/**!
* @fileOverview Kickass library to create and place poppers near their reference elements.
* @version 1.16.1
* @license
* Copyright (c) 2016 Federico Zivolo and contributors
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/


@ -1,103 +0,0 @@
import 'bootstrap/dist/css/bootstrap.min.css';
import 'bootstrap-select/dist/css/bootstrap-select.min.css';
import 'bootstrap/js/dist/util';
import 'bootstrap/js/dist/dropdown';
import 'bootstrap-select'
jQuery.extend({
redirect: function (location, args) {
var form = $("<form method='POST' style='display: none;'></form>");
form.attr("action", location);
$.each(args || {}, function (key, value) {
var input = $("<input name='hidden'></input>");
input.attr("name", key);
input.attr("value", value);
form.append(input);
});
form.append($("input[name='gorilla.csrf.Token']").first());
form.appendTo("body").submit();
}
});
jQuery(function () {
$.ajax({
url: "/api/clusters",
success: function (clusters) {
$.each(clusters, function (i, cluster) {
$("#selResourceId").append($("<option>").text(cluster.resourceId));
});
$("#selResourceId").selectpicker();
},
dataType: "json",
});
$("#btnLogout").click(function () {
$.redirect("/api/logout");
});
$("#btnKubeconfig").click(function () {
$.redirect($("#selResourceId").val() + "/kubeconfig/new");
});
$("#btnPrometheus").click(function () {
window.location = $("#selResourceId").val() + "/prometheus";
});
$("#btnSSH").click(function () {
$.ajax({
method: "POST",
url: $("#selResourceId").val() + "/ssh/new",
headers: {
"X-CSRF-Token": $("input[name='gorilla.csrf.Token']").val(),
},
contentType: "application/json",
data: JSON.stringify({
"master": parseInt($("#selMaster").val()),
}),
success: function (reply) {
if (reply["error"]) {
var template = $("#tmplSSHAlertError").html();
var alert = $(template);
alert.find("span[data-copy='error']").text(reply["error"]);
$("#divAlerts").html(alert);
return;
}
var template = $("#tmplSSHAlert").html();
var alert = $(template);
alert.find("span[data-copy='command'] > code").text(reply["command"]);
alert.find("span[data-copy='command']").attr("data-copy", reply["command"]);
alert.find("span[data-copy='password'] > code").text("********");
alert.find("span[data-copy='password']").attr("data-copy", reply["password"]);
$("#divAlerts").html(alert);
$('.copy-button').click(function () {
var textarea = $("<textarea class='style: hidden;' id='textarea'></textarea>");
textarea.text($(this).next().attr("data-copy"));
textarea.appendTo("body");
textarea = document.getElementById("textarea")
textarea.select();
textarea.setSelectionRange(0, textarea.value.length + 1);
document.execCommand('copy');
document.body.removeChild(textarea)
});
},
dataType: "json",
});
});
$("#btnV2").click(function () {
window.location = "/v2";
});
});


@ -1,61 +0,0 @@
const webpack = require('webpack');
const path = require('path');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
const CssMinimizerPlugin = require('css-minimizer-webpack-plugin');
const CopyPlugin = require("copy-webpack-plugin");
module.exports = {
entry: './src/index.js',
output: {
filename: '[name].js',
path: path.resolve(__dirname, 'build'),
clean: true,
},
plugins: [
new MiniCssExtractPlugin(),
new webpack.ProvidePlugin({
$: 'jquery',
jQuery: 'jquery',
}),
new CopyPlugin({
patterns: [
{ from: "src/index.html", to: "index.html" },
],
}),
],
module: {
rules: [
{
test: /\.s?css$/i,
use: [MiniCssExtractPlugin.loader, 'css-loader'],
},
],
},
optimization: {
minimizer: [
`...`,
new CssMinimizerPlugin(),
],
splitChunks: {
cacheGroups: {
styles: {
name: 'styles',
type: 'css/mini-extract',
chunks: 'all',
enforce: true,
},
defaultVendors: {
test: /[\\/]node_modules[\\/]/,
priority: -10,
reuseExistingChunk: true,
},
default: {
minChunks: 2,
priority: -20,
reuseExistingChunk: true,
}
},
},
},
};


@ -1,8 +0,0 @@
const { merge } = require('webpack-merge');
const common = require('./webpack.common.js');
const fs = require('fs');
module.exports = merge(common, {
mode: 'development',
devtool: 'source-map',
});


@ -1,6 +0,0 @@
const { merge } = require('webpack-merge');
const common = require('./webpack.common.js');
module.exports = merge(common, {
mode: 'production',
});


@ -164,6 +164,7 @@ func getNodeUptime(node string) (time.Time, error) {
for reader.Scan() {
select {
case <-ctx.Done():
break // NOTE: this break only exits the select statement, not the enclosing Scan loop
default:
line := reader.Text()
message += line


@ -7,19 +7,16 @@ import (
"context"
"fmt"
"regexp"
"sort"
"strings"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"
"github.com/Azure/go-autorest/autorest/azure"
"github.com/Azure/go-autorest/autorest/to"
"github.com/ghodss/yaml"
configv1 "github.com/openshift/api/config/v1"
operatorv1 "github.com/openshift/api/operator/v1"
"github.com/ugorji/go/codec"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
@ -28,7 +25,7 @@ import (
"k8s.io/client-go/util/retry"
arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
imageController "github.com/Azure/ARO-RP/pkg/operator/controllers/imageconfig"
"github.com/Azure/ARO-RP/pkg/operator/controllers/machineset"
"github.com/Azure/ARO-RP/pkg/operator/controllers/monitoring"
"github.com/Azure/ARO-RP/pkg/util/conditions"
"github.com/Azure/ARO-RP/pkg/util/ready"
@ -77,7 +74,7 @@ func dumpEvents(ctx context.Context, namespace string) error {
var _ = Describe("ARO Operator - Internet checking", func() {
var originalURLs []string
BeforeEach(func() {
By("saving the original URLs")
// save the originalURLs
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
if kerrors.IsNotFound(err) {
Skip("skipping tests as aro-operator is not deployed")
@ -87,7 +84,7 @@ var _ = Describe("ARO Operator - Internet checking", func() {
originalURLs = co.Spec.InternetChecker.URLs
})
AfterEach(func() {
By("restoring the original URLs")
// set the URLs back again
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
if err != nil {
@ -99,20 +96,20 @@ var _ = Describe("ARO Operator - Internet checking", func() {
})
Expect(err).NotTo(HaveOccurred())
})
It("sets InternetReachableFromMaster to true when the default URL is reachable from master nodes", func() {
Specify("the InternetReachable default list should all be reachable", func() {
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
Expect(conditions.IsTrue(co.Status.Conditions, arov1alpha1.InternetReachableFromMaster)).To(BeTrue())
})
It("sets InternetReachableFromWorker to true when the default URL is reachable from worker nodes", func() {
Specify("the InternetReachable default list should all be reachable from worker", func() {
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
Expect(conditions.IsTrue(co.Status.Conditions, arov1alpha1.InternetReachableFromWorker)).To(BeTrue())
})
It("sets InternetReachableFromMaster and InternetReachableFromWorker to false when URL is not reachable", func() {
By("setting a deliberately unreachable URL")
Specify("custom invalid site shows not InternetReachable", func() {
// set an unreachable URL
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
if err != nil {
@ -124,7 +121,7 @@ var _ = Describe("ARO Operator - Internet checking", func() {
})
Expect(err).NotTo(HaveOccurred())
By("waiting for the expected conditions to be set")
// confirm the conditions are correct
err = wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
if err != nil {
@ -141,7 +138,7 @@ var _ = Describe("ARO Operator - Internet checking", func() {
})
var _ = Describe("ARO Operator - Geneva Logging", func() {
It("must be repaired if DaemonSet deleted", func() {
Specify("genevalogging must be repaired if deployment deleted", func() {
mdsdReady := func() (bool, error) {
done, err := ready.CheckDaemonSetIsReady(context.Background(), clients.Kubernetes.AppsV1().DaemonSets("openshift-azure-logging"), "mdsd")()
if err != nil {
@ -150,7 +147,6 @@ var _ = Describe("ARO Operator - Geneva Logging", func() {
return done, nil // swallow error
}
By("checking that mdsd DaemonSet is ready before the test")
err := wait.PollImmediate(30*time.Second, 15*time.Minute, mdsdReady)
if err != nil {
// TODO: Remove dump once reason for flakes is clear
@ -162,11 +158,11 @@ var _ = Describe("ARO Operator - Geneva Logging", func() {
initial, err := updatedObjects(context.Background(), "openshift-azure-logging")
Expect(err).NotTo(HaveOccurred())
By("deleting mdsd DaemonSet")
// delete the mdsd daemonset
err = clients.Kubernetes.AppsV1().DaemonSets("openshift-azure-logging").Delete(context.Background(), "mdsd", metav1.DeleteOptions{})
Expect(err).NotTo(HaveOccurred())
By("checking that mdsd DaemonSet is ready")
// wait for it to be fixed
err = wait.PollImmediate(30*time.Second, 15*time.Minute, mdsdReady)
if err != nil {
// TODO: Remove dump once reason for flakes is clear
@ -175,7 +171,7 @@ var _ = Describe("ARO Operator - Geneva Logging", func() {
}
Expect(err).NotTo(HaveOccurred())
By("confirming that only one object was updated")
// confirm that only one object was updated
final, err := updatedObjects(context.Background(), "openshift-azure-logging")
Expect(err).NotTo(HaveOccurred())
if len(final)-len(initial) != 1 {
@ -187,7 +183,7 @@ var _ = Describe("ARO Operator - Geneva Logging", func() {
})
var _ = Describe("ARO Operator - Cluster Monitoring ConfigMap", func() {
It("must not have persistent volume set", func() {
Specify("cluster monitoring configmap should not have persistent volume config", func() {
var cm *corev1.ConfigMap
var err error
configMapExists := func() (bool, error) {
@ -198,11 +194,9 @@ var _ = Describe("ARO Operator - Cluster Monitoring ConfigMap", func() {
return true, nil
}
By("waiting for the ConfigMap to make sure it exists")
err = wait.PollImmediate(30*time.Second, 15*time.Minute, configMapExists)
Expect(err).NotTo(HaveOccurred())
By("unmarshalling the config from the ConfigMap data")
var configData monitoring.Config
configDataJSON, err := yaml.YAMLToJSON([]byte(cm.Data["config.yaml"]))
Expect(err).NotTo(HaveOccurred())
@ -212,14 +206,13 @@ var _ = Describe("ARO Operator - Cluster Monitoring ConfigMap", func() {
log.Warn(err)
}
By("checking config correctness")
Expect(configData.PrometheusK8s.Retention).To(BeEmpty())
Expect(configData.PrometheusK8s.VolumeClaimTemplate).To(BeNil())
Expect(configData.AlertManagerMain.VolumeClaimTemplate).To(BeNil())
})
It("must be restored if deleted", func() {
Specify("cluster monitoring configmap should be restored if deleted", func() {
configMapExists := func() (bool, error) {
_, err := clients.Kubernetes.CoreV1().ConfigMaps("openshift-monitoring").Get(context.Background(), "cluster-monitoring-config", metav1.GetOptions{})
if err != nil {
@ -228,22 +221,22 @@ var _ = Describe("ARO Operator - Cluster Monitoring ConfigMap", func() {
return true, nil
}
By("waiting for the ConfigMap to make sure it exists")
err := wait.PollImmediate(30*time.Second, 15*time.Minute, configMapExists)
Expect(err).NotTo(HaveOccurred())
By("deleting for the ConfigMap")
err = clients.Kubernetes.CoreV1().ConfigMaps("openshift-monitoring").Delete(context.Background(), "cluster-monitoring-config", metav1.DeleteOptions{})
Expect(err).NotTo(HaveOccurred())
By("waiting for the ConfigMap to make sure it was restored")
err = wait.PollImmediate(30*time.Second, 15*time.Minute, configMapExists)
Expect(err).NotTo(HaveOccurred())
_, err = clients.Kubernetes.CoreV1().ConfigMaps("openshift-monitoring").Get(context.Background(), "cluster-monitoring-config", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
})
})
var _ = Describe("ARO Operator - RBAC", func() {
It("must restore system:aro-sre ClusterRole if deleted", func() {
Specify("system:aro-sre ClusterRole should be restored if deleted", func() {
clusterRoleExists := func() (bool, error) {
_, err := clients.Kubernetes.RbacV1().ClusterRoles().Get(context.Background(), "system:aro-sre", metav1.GetOptions{})
if err != nil {
@ -252,30 +245,25 @@ var _ = Describe("ARO Operator - RBAC", func() {
return true, nil
}
By("waiting for the ClusterRole to make sure it exists")
err := wait.PollImmediate(30*time.Second, 15*time.Minute, clusterRoleExists)
Expect(err).NotTo(HaveOccurred())
By("deleting for the ClusterRole")
err = clients.Kubernetes.RbacV1().ClusterRoles().Delete(context.Background(), "system:aro-sre", metav1.DeleteOptions{})
Expect(err).NotTo(HaveOccurred())
By("waiting for the ClusterRole to make sure it was restored")
err = wait.PollImmediate(30*time.Second, 15*time.Minute, clusterRoleExists)
Expect(err).NotTo(HaveOccurred())
_, err = clients.Kubernetes.RbacV1().ClusterRoles().Get(context.Background(), "system:aro-sre", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
})
})
var _ = Describe("ARO Operator - Conditions", func() {
It("must have all the conditions set to true", func() {
// Save the most recently fetched conditions so that we can print them if
// the test fails
var lastConditions []operatorv1.OperatorCondition
Specify("Cluster check conditions should not be failing", func() {
clusterOperatorConditionsValid := func() (bool, error) {
co, err := clients.AROClusters.AroV1alpha1().Clusters().Get(context.Background(), "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
lastConditions = co.Status.Conditions
valid := true
for _, condition := range arov1alpha1.ClusterChecksTypes() {
@ -287,7 +275,62 @@ var _ = Describe("ARO Operator - Conditions", func() {
}
err := wait.PollImmediate(30*time.Second, 15*time.Minute, clusterOperatorConditionsValid)
Expect(err).NotTo(HaveOccurred(), "last conditions: %v", lastConditions)
Expect(err).NotTo(HaveOccurred())
})
})
var _ = Describe("ARO Operator - MachineSet Controller", func() {
Specify("operator should maintain at least two worker replicas", func() {
ctx := context.Background()
instance, err := clients.AROClusters.AroV1alpha1().Clusters().Get(ctx, "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
if !instance.Spec.OperatorFlags.GetSimpleBoolean(machineset.ControllerEnabled) {
Skip("MachineSet Controller is not enabled, skipping this test")
}
mss, err := clients.MachineAPI.MachineV1beta1().MachineSets(machineSetsNamespace).List(ctx, metav1.ListOptions{})
Expect(err).NotTo(HaveOccurred())
Expect(mss.Items).NotTo(BeEmpty())
// Zero all machinesets, wait for reconcile
for _, object := range mss.Items {
err = scale(object.Name, 0)
Expect(err).NotTo(HaveOccurred())
}
for _, object := range mss.Items {
err = waitForScale(object.Name)
Expect(err).NotTo(HaveOccurred())
}
// Re-count and assert that operator added back replicas
modifiedMachineSets, err := clients.MachineAPI.MachineV1beta1().MachineSets(machineSetsNamespace).List(ctx, metav1.ListOptions{})
Expect(err).NotTo(HaveOccurred())
replicaCount := 0
for _, machineset := range modifiedMachineSets.Items {
if machineset.Spec.Replicas != nil {
replicaCount += int(*machineset.Spec.Replicas)
}
}
Expect(replicaCount).To(BeEquivalentTo(minSupportedReplicas))
// Scale back to previous state
for _, ms := range mss.Items {
err = scale(ms.Name, *ms.Spec.Replicas)
Expect(err).NotTo(HaveOccurred())
}
for _, ms := range mss.Items {
err = waitForScale(ms.Name)
Expect(err).NotTo(HaveOccurred())
}
// Wait for old machine objects to delete
err = waitForMachines()
Expect(err).NotTo(HaveOccurred())
})
})
@ -299,8 +342,8 @@ var _ = Describe("ARO Operator - Azure Subnet Reconciler", func() {
const nsg = "e2e-nsg"
// Gathers vnet name, resource group, location, and adds master/worker subnets to list to reconcile.
gatherNetworkInfo := func() {
By("gathering vnet name, resource group, location, and adds master/worker subnets to list to reconcile")
oc, err := clients.OpenshiftClustersv20200430.Get(ctx, vnetResourceGroup, clusterName)
Expect(err).NotTo(HaveOccurred())
location = *oc.Location
@ -321,18 +364,16 @@ var _ = Describe("ARO Operator - Azure Subnet Reconciler", func() {
vnetName = r.ResourceName
}
// Creates an empty NSG that gets assigned to master/worker subnets.
createE2ENSG := func() {
By("creating an empty test NSG")
testnsg = mgmtnetwork.SecurityGroup{
Location: &location,
Location: to.StringPtr(location),
Name: to.StringPtr(nsg),
Type: to.StringPtr("Microsoft.Network/networkSecurityGroups"),
SecurityGroupPropertiesFormat: &mgmtnetwork.SecurityGroupPropertiesFormat{},
}
err := clients.NetworkSecurityGroups.CreateOrUpdateAndWait(ctx, resourceGroup, nsg, testnsg)
Expect(err).NotTo(HaveOccurred())
By("getting the freshly created test NSG resource")
testnsg, err = clients.NetworkSecurityGroups.Get(ctx, resourceGroup, nsg, "")
Expect(err).NotTo(HaveOccurred())
}
@ -342,7 +383,6 @@ var _ = Describe("ARO Operator - Azure Subnet Reconciler", func() {
createE2ENSG()
})
AfterEach(func() {
By("deleting test NSG")
err := clients.NetworkSecurityGroups.DeleteAndWait(context.Background(), resourceGroup, nsg)
if err != nil {
log.Warn(err)
@ -350,7 +390,6 @@ var _ = Describe("ARO Operator - Azure Subnet Reconciler", func() {
})
It("must reconcile list of subnets when NSG is changed", func() {
for subnet := range subnetsToReconcile {
By(fmt.Sprintf("assigning test NSG to subnet %q", subnet))
// Gets current subnet NSG and then updates it to testnsg.
subnetObject, err := clients.Subnet.Get(ctx, resourceGroup, vnetName, subnet, "")
Expect(err).NotTo(HaveOccurred())
@ -360,159 +399,21 @@ var _ = Describe("ARO Operator - Azure Subnet Reconciler", func() {
err = clients.Subnet.CreateOrUpdateAndWait(ctx, resourceGroup, vnetName, subnet, subnetObject)
Expect(err).NotTo(HaveOccurred())
}
for subnet, correctNSG := range subnetsToReconcile {
By(fmt.Sprintf("waiting for the subnet %q to be reconciled so it includes the original cluster NSG", subnet))
// Validate subnet reconciles to original NSG.
err := wait.PollImmediate(30*time.Second, 10*time.Minute, func() (bool, error) {
s, err := clients.Subnet.Get(ctx, resourceGroup, vnetName, subnet, "")
if err != nil {
return false, err
}
if *s.NetworkSecurityGroup.ID == *correctNSG {
log.Infof("%s subnet's nsg matched expected value", subnet)
return true, nil
}
log.Errorf("%s nsg: %s did not match expected value: %s", subnet, *s.NetworkSecurityGroup.ID, *correctNSG)
return false, nil
})
Expect(err).NotTo(HaveOccurred())
}
})
})
var _ = Describe("ARO Operator - MUO Deployment", func() {
ctx := context.Background()
It("must be deployed by default with FIPS crypto mandated", func() {
muoIsDeployed := func() (bool, error) {
By("getting MUO pods")
pods, err := clients.Kubernetes.CoreV1().Pods("openshift-managed-upgrade-operator").List(ctx, metav1.ListOptions{
LabelSelector: "name=managed-upgrade-operator",
})
if err != nil {
return false, err
}
if len(pods.Items) != 1 {
return false, fmt.Errorf("%d managed-upgrade-operator pods found", len(pods.Items))
}
By("getting logs from MUO")
b, err := clients.Kubernetes.CoreV1().Pods("openshift-managed-upgrade-operator").GetLogs(pods.Items[0].Name, &corev1.PodLogOptions{}).DoRaw(ctx)
if err != nil {
return false, err
}
By("verifying that MUO has FIPS crypto mandated by reading logs")
return strings.Contains(string(b), `msg="FIPS crypto mandated: true"`), nil
}
err := wait.PollImmediate(30*time.Second, 10*time.Minute, muoIsDeployed)
Expect(err).NotTo(HaveOccurred())
})
})
var _ = Describe("ARO Operator - ImageConfig Reconciler", func() {
const (
imageconfigFlag = "aro.imageconfig.enabled"
optionalRegistry = "quay.io"
timeout = 5 * time.Minute
)
ctx := context.Background()
var requiredRegistries []string
var imageconfig *configv1.Image
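// sliceEqual reports whether the two registry lists contain the same entries
// regardless of order; note that it sorts both inputs in place.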
sliceEqual := func(a, b []string) bool {
if len(a) != len(b) {
return false
}
sort.Strings(a)
sort.Strings(b)
for idx, entry := range b {
if a[idx] != entry {
return false
}
}
return true
}
verifyLists := func(expectedAllowlist, expectedBlocklist []string) (bool, error) {
By("getting the actual Image config state")
// assign with "=" rather than ":=": a short variable declaration here would shadow the pre-declared imageconfig var instead of reusing it
var err error
imageconfig, err = clients.ConfigClient.ConfigV1().Images().Get(ctx, "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
allowList := imageconfig.Spec.RegistrySources.AllowedRegistries
blockList := imageconfig.Spec.RegistrySources.BlockedRegistries
By("comparing the actual allow and block lists with expected lists")
return sliceEqual(allowList, expectedAllowlist) && sliceEqual(blockList, expectedBlocklist), nil
}
BeforeEach(func() {
By("checking whether Image config reconciliation is enabled in ARO operator config")
instance, err := clients.AROClusters.AroV1alpha1().Clusters().Get(ctx, "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
if !instance.Spec.OperatorFlags.GetSimpleBoolean(imageconfigFlag) {
Skip("ImageConfig Controller is not enabled, skipping test")
}
By("getting a list of required registries from the ARO operator config")
requiredRegistries, err = imageController.GetCloudAwareRegistries(instance)
Expect(err).NotTo(HaveOccurred())
By("getting the Image config")
imageconfig, err = clients.ConfigClient.ConfigV1().Images().Get(ctx, "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
})
AfterEach(func() {
By("resetting Image config")
imageconfig.Spec.RegistrySources.AllowedRegistries = nil
imageconfig.Spec.RegistrySources.BlockedRegistries = nil
_, err := clients.ConfigClient.ConfigV1().Images().Update(ctx, imageconfig, metav1.UpdateOptions{})
Expect(err).NotTo(HaveOccurred())
By("waiting for the Image config to be reset")
Eventually(func(g Gomega) {
g.Expect(verifyLists(nil, nil)).To(BeTrue())
}).WithTimeout(timeout).Should(Succeed())
})
It("must set empty allow and block lists in Image config by default", func() {
allowList := imageconfig.Spec.RegistrySources.AllowedRegistries
blockList := imageconfig.Spec.RegistrySources.BlockedRegistries
By("checking that the allow and block lists are empty")
Expect(allowList).To(BeEmpty())
Expect(blockList).To(BeEmpty())
})
It("must add the ARO service registries to the allow list alongside the customer added registries", func() {
By("adding the test registry to the allow list of the Image config")
imageconfig.Spec.RegistrySources.AllowedRegistries = append(imageconfig.Spec.RegistrySources.AllowedRegistries, optionalRegistry)
_, err := clients.ConfigClient.ConfigV1().Images().Update(ctx, imageconfig, metav1.UpdateOptions{})
Expect(err).NotTo(HaveOccurred())
By("checking that Image config eventually has ARO service registries and the test registry in the allow list")
expectedAllowlist := append(requiredRegistries, optionalRegistry)
Eventually(func(g Gomega) {
g.Expect(verifyLists(expectedAllowlist, nil)).To(BeTrue())
}).WithTimeout(timeout).Should(Succeed())
})
It("must remove ARO service registries from the block lists, but keep customer added registries", func() {
By("adding the test registry and one of the ARO service registry to the block list of the Image config")
imageconfig.Spec.RegistrySources.BlockedRegistries = append(imageconfig.Spec.RegistrySources.BlockedRegistries, optionalRegistry, requiredRegistries[0])
_, err := clients.ConfigClient.ConfigV1().Images().Update(ctx, imageconfig, metav1.UpdateOptions{})
Expect(err).NotTo(HaveOccurred())
By("checking that Image config eventually doesn't include ARO service registries")
expectedBlocklist := []string{optionalRegistry}
Eventually(func(g Gomega) {
g.Expect(verifyLists(nil, expectedBlocklist)).To(BeTrue())
}).WithTimeout(timeout).Should(Succeed())
})
})

test/e2e/scalenodes.go (new file, 145 lines)

@ -0,0 +1,145 @@
package e2e
// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.
import (
"context"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/Azure/go-autorest/autorest/to"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/util/retry"
"github.com/Azure/ARO-RP/pkg/operator/controllers/machineset"
"github.com/Azure/ARO-RP/pkg/util/ready"
)
const (
machineSetsNamespace = "openshift-machine-api"
minSupportedReplicas = 2
)
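// minSupportedReplicas mirrors the floor the MachineSet controller enforces:
// the operator maintains at least two worker replicas across all MachineSets.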
var _ = Describe("Scale nodes", func() {
// hack: do this before we scale down, because it takes a while for the
// nodes to settle after scale down
Specify("node count should match the cluster resource and nodes should be ready", func() {
ctx := context.Background()
machinesets, err := clients.MachineAPI.MachineV1beta1().MachineSets(machineSetsNamespace).List(ctx, metav1.ListOptions{})
Expect(err).NotTo(HaveOccurred())
expectedNodeCount := 3 // for masters
for _, machineset := range machinesets.Items {
expectedNodeCount += int(*machineset.Spec.Replicas)
}
// another hack: we don't expect all nodes to be ready immediately; the
// workaround operator may still be rotating them, which we don't currently
// wait for on create
err = wait.PollImmediate(10*time.Second, 30*time.Minute, func() (bool, error) {
nodes, err := clients.Kubernetes.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
if err != nil {
log.Warn(err)
return false, nil // swallow error
}
var nodeCount int
for _, node := range nodes.Items {
if ready.NodeIsReady(&node) {
nodeCount++
} else {
for _, c := range node.Status.Conditions {
log.Warnf("node %s status %s", node.Name, c.String())
}
}
}
return nodeCount == expectedNodeCount, nil
})
Expect(err).NotTo(HaveOccurred())
})
Specify("nodes should scale up and down", func() {
ctx := context.Background()
instance, err := clients.AROClusters.AroV1alpha1().Clusters().Get(ctx, "cluster", metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
if instance.Spec.OperatorFlags.GetSimpleBoolean(machineset.ControllerEnabled) {
Skip("MachineSet Controller is enabled, skipping this test")
}
mss, err := clients.MachineAPI.MachineV1beta1().MachineSets(machineSetsNamespace).List(ctx, metav1.ListOptions{})
Expect(err).NotTo(HaveOccurred())
Expect(mss.Items).NotTo(BeEmpty())
err = scale(mss.Items[0].Name, *mss.Items[0].Spec.Replicas+1)
Expect(err).NotTo(HaveOccurred())
err = waitForScale(mss.Items[0].Name)
Expect(err).NotTo(HaveOccurred())
err = scale(mss.Items[0].Name, *mss.Items[0].Spec.Replicas-1)
Expect(err).NotTo(HaveOccurred())
err = waitForScale(mss.Items[0].Name)
Expect(err).NotTo(HaveOccurred())
})
})
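// scale sets the replica count of the named MachineSet, retrying on update
// conflicts; waitForScale and waitForMachines below poll until the MachineSet
// reports the desired available replicas and every Machine reaches the Running phase.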
func scale(name string, replicas int32) error {
return retry.RetryOnConflict(retry.DefaultRetry, func() error {
ctx := context.Background()
ms, err := clients.MachineAPI.MachineV1beta1().MachineSets(machineSetsNamespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return err
}
if ms.Spec.Replicas == nil {
ms.Spec.Replicas = to.Int32Ptr(0)
}
*ms.Spec.Replicas = replicas
_, err = clients.MachineAPI.MachineV1beta1().MachineSets(ms.Namespace).Update(ctx, ms, metav1.UpdateOptions{})
return err
})
}
func waitForScale(name string) error {
return wait.PollImmediate(10*time.Second, 30*time.Minute, func() (bool, error) {
machineset, err := clients.MachineAPI.MachineV1beta1().MachineSets(machineSetsNamespace).Get(context.Background(), name, metav1.GetOptions{})
if err != nil {
log.Warn(err)
return false, nil // swallow error
}
if machineset.Spec.Replicas == nil {
return false, nil
}
return machineset.Status.ObservedGeneration == machineset.Generation &&
machineset.Status.AvailableReplicas == *machineset.Spec.Replicas, nil
})
}
func waitForMachines() error {
return wait.PollImmediate(1*time.Second, 30*time.Minute, func() (bool, error) {
machines, err := clients.MachineAPI.MachineV1beta1().Machines(machineSetsNamespace).List(context.Background(), metav1.ListOptions{})
if err != nil {
log.Warn(err)
return false, nil
}
// Wait for all machines to be in Running phase before continuing
for _, m := range machines.Items {
if *m.Status.Phase != "Running" {
return false, nil
}
}
return true, nil
})
}
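Two details in the helpers above carry the weight. scale re-reads the MachineSet inside the retry.RetryOnConflict closure, so each attempt works on a fresh resourceVersion; updating a stale copy makes the API server return a Conflict, which RetryOnConflict catches and retries with backoff. waitForScale compares Status.ObservedGeneration with the object's Generation before trusting AvailableReplicas, so a status written before the controller observed the new replica count cannot end the wait early. A minimal sketch of the same conflict-retry pattern against a hypothetical Deployment (client, ctx, and the "demo"/"default" names are assumptions, not code from this repo):

	// Sketch only: the Get happens inside the closure so every retry sees
	// the latest object; mutating a stale copy would conflict forever.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := client.AppsV1().Deployments("default").Get(ctx, "demo", metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = to.Int32Ptr(3)
		_, err = client.AppsV1().Deployments("default").Update(ctx, d, metav1.UpdateOptions{})
		return err
	})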


@@ -4,8 +4,8 @@
   "tag": "package-compute-2017-03",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-compute-2017-03 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/compute/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-compute-2017-03 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/compute/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }


@@ -4,8 +4,8 @@
   "tag": "package-2018-10-01",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2018-10-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/compute/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2018-10-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/compute/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }


@@ -4,8 +4,8 @@
   "tag": "package-2020-06-01",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2020-06-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/compute/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2020-06-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/compute/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }


@@ -4,8 +4,8 @@
   "tag": "package-2021-01",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2021-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/cosmos-db/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2021-01 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/cosmos-db/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }

vendor/github.com/Azure/azure-sdk-for-go/services/dns/mgmt/2016-04-01/dns/_meta.json (generated, vendored)

@@ -4,8 +4,8 @@
   "tag": "package-2016-04",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2016-04 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/dns/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2016-04 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/dns/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }

vendor/github.com/Azure/azure-sdk-for-go/services/dns/mgmt/2018-05-01/dns/_meta.json (generated, vendored)

@@ -4,8 +4,8 @@
   "tag": "package-2018-05",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2018-05 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/dns/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2018-05 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/dns/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }


@@ -4,8 +4,8 @@
   "tag": "1.6",
   "use": "@microsoft.azure/autorest.go@2.1.183",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.183 --tag=1.6 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/graphrbac/data-plane/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.183 --tag=1.6 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/graphrbac/data-plane/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }


@@ -4,8 +4,8 @@
   "tag": "package-2019-09",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2019-09 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/keyvault/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2019-09 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/keyvault/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }


@@ -4,8 +4,8 @@
   "tag": "package-7.0",
   "use": "@microsoft.azure/autorest.go@2.1.183",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.183 --tag=package-7.0 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/keyvault/data-plane/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.183 --tag=package-7.0 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/keyvault/data-plane/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }

vendor/github.com/Azure/azure-sdk-for-go/services/msi/mgmt/2018-11-30/msi/_meta.json (generated, vendored)

@@ -4,8 +4,8 @@
   "tag": "package-2018-11-30",
   "use": "@microsoft.azure/autorest.go@2.1.187",
   "repository_url": "https://github.com/Azure/azure-rest-api-specs.git",
-  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2018-11-30 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/msi/resource-manager/readme.md",
+  "autorest_command": "autorest --use=@microsoft.azure/autorest.go@2.1.187 --tag=package-2018-11-30 --go-sdk-folder=/_/azure-sdk-for-go --go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION /_/azure-rest-api-specs/specification/msi/resource-manager/readme.md",
   "additional_properties": {
-    "additional_options": "--go --verbose --use-onever --version=2.0.4421 --go.license-header=MICROSOFT_MIT_NO_VERSION"
+    "additional_options": "--go --verbose --use-onever --version=V2 --go.license-header=MICROSOFT_MIT_NO_VERSION"
   }
 }
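The _meta.json hunks above all make the same mechanical change: the recorded autorest invocation drops the pinned generator core build (--version=2.0.4421) in favour of --version=V2. These files are bookkeeping written by autorest when the vendored Azure SDK clients are regenerated, so the same edit repeats across packages. A throwaway Go sketch for auditing which core version each vendored package was generated with (the path and field name are taken from the hunks above; everything else is an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Walk the vendored Azure SDK and report each package's recorded
		// --version flag, so generator-version drift is easy to spot.
		root := "vendor/github.com/Azure/azure-sdk-for-go/services"
		filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
			if err != nil || info.IsDir() || info.Name() != "_meta.json" {
				return nil
			}
			b, err := os.ReadFile(path)
			if err != nil {
				return nil
			}
			var meta struct {
				AutorestCommand string `json:"autorest_command"`
			}
			if json.Unmarshal(b, &meta) != nil {
				return nil
			}
			for _, f := range strings.Fields(meta.AutorestCommand) {
				if strings.HasPrefix(f, "--version=") {
					fmt.Printf("%s: %s\n", path, f)
				}
			}
			return nil
		})
	}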

Some files were not shown because too many files changed in this diff.