Mirror of https://github.com/Azure/ARO-RP.git
- Moved cookie generation back out to hack file and refactored test
- Fixed linting of dot imports
- Initial files + dependencies for the react-fluent portal
- Initial POC for portal UI
- Finished front end API for cluster information (Co-authored-by: Brett Embery <bembery@redhat.com>)
- Adding cluster detail pane (Co-authored-by: Ellis Johnson <elljohns@redhat.com>)
- Format tsx source
- Add cluster detail nav + tweaks (Co-authored-by: Ellis Johnson <elljohns@redhat.com>)
- Cluster detail MVP (Co-authored-by: Brett Embery <bembery@redhat.com>)
- bump deps
- fixes
- update deps
- cleanups and style improvements for the portal, as well as a new copy resource ID button
- update package deps
- Added base eslint config
- Fixed linter errors in SRE Portal
- Added linter step for e2e pipeline
- Reverting package-lock.json to appease PR testing
- Another attempt to test admin portal linting in e2e pipeline
- Another fix for e2e admin portal linting
- Yet another attempt
- Reordered e2e jobs
- Added fix to commands
- Modifying linting settings to try to get the e2e pipeline working
- More config changes
- More changes
- Modified eslintrc (×2)
- Perform npm install before running container
- Debugging
- Trying npm install as a separate task
- Moved admin portal lint from e2e pipeline to ci pipeline
- Fixed formatting (×3)
- Fixed image name
- Added dockerfile for SRE Portal linting
- Using new docker image in ADO CI pipeline
- Removed old dockerfile and modified package.json
- Split portal into v1 and v2
- Modified portal backend to allow v1 and v2 portals to run at the same time
- Modified makefile to build both v1 and v2 portals
- Added option to change portal hostname locally, whether running the dev server or compiled build code
- Created initial selenium script
- Fixed linter
- Added documentation for new admin portal
- Added makefile command for linting admin portal
- Remove accidental commit
- Refactored portal backend code
- Renamed temp to template in portal code
- Modified documentation to explain NO_NPM env var
- Renamed portal v1 compilation directory from dist to build and fixed TODOs in typescript
- Fixed SSHModal indexing (×2)
- Commit generated bindata code
- Added vscode folders to gitignore
- Made minor changes based on review feedback
- Added conditional statements for linting
- Fixed booleans
- Added vm image to first stage
- Modified powershell to bash
- Made small changes based on review feedback
- Update Makefile (Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>)
- Update docs/admin-portal.md (×2, Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>)
- Small documentation change
- Small ci fix (×4)
- Still fixing CI (×4)
- Fix CI again (×5)
- Removing conditional linting and moving to future PR
- Remove stage from CI yaml to pass github check
- Fixed off-by-one error with SSH in admin portal
- First 3 e2e test cases complete
- Test image pull
- Rewrote first test in golang on e2e pipeline
- Added second test
- Fixed tests for CT
- Added 2 more tests
- Added 1 more test and fixed others
- Finished initial e2e tests
- Fixed linting errors
- Fixed validation and linting
- Still trying to fix linting issues
- Remove test focus for e2e
- Fixed potential infinite for loop
- Removed test command from makefile
- Removed test pipeline step
- Fixed vendoring removals
- Update az cli extension to use api v2022_04_01 (#2042): bumping az aro extension api version to v2022_04_01; adding new command flags and data structures to az aro create; linting
- Update cluster
- Update pkg/util/cluster/cluster.go (Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>)
- Better err handling to customer
- remove installconfig dependency from deploystorage
- Remove unnecessary to.StringPtr usages
- Fixing exception handling for missing subnet (#2117): use isinstance; another err.message fix
- Added a new function for a hardcoded filter of namespaces (#1994)
- Added unit test for the makeURLSegments function of dynamichelper (#2031)
- add minor version
- Master resize (#1889): master resize GA
- move arm template deploy to util
- use the ARM deploytemplate code directly in pkg/cluster
- Add David Newman to CODEOWNERS
- il5 series support, vm.go improvements and tests (#2086)
- Add improvements to `deploy-full-rp-service-in-dev.md` doc (#2048) (Co-authored-by: Spencer Amann <samann@redhat.com>)
- NSG controller - reconcile nil NSG (#2116): adding test case for NSGs = nil; adding handling of empty NSG
- Fix deleteNic when the nic is in failed provisioning state
- Add documentation outlining our keyvaults, certificates, and secrets
- Provide clearer error for a particular type of PUCM failure: instead of `subnet ID "" has incorrect length`, catch the error earlier and provide a clearer "lastAdminUpdateError" message. This particular PUCM failure occurs when a machineset object fails to decode during cluster document enriching.
- increase the timeout to 10 minutes, since a rebuild can trigger the timeout
- Fixed dodgy e2e test
- Vendor installer release 4.10: switches to Go 1.17, OCP 4.10, and Kubernetes 1.23 modules
- Automated updates from "make generate"
- Set default InstallStream to OCP 4.10.15
- Automated updates from "make discoverycache"
- pipelines: Require agents with go-1.17 capability for CI/E2E
- Update documentation for Go 1.17 and installer 4.10
- Switch from the azureprovider to the new machinev1.AzureMachineProviderSpec machine API: due to the move of AzureMachineProviderSpec into openshift/api, we need to marshal the existing clusters' machine provider spec into the new struct; switches tests to use the new machine API struct (Ref: f9725ddd94)
- Switch to building with golang 1.17
- Switch maoclient -> machineclient and maofake -> machinefake
- gofmt: add "go:build e2e"
- Switch to using the ubi8 go-toolset for building
- Add additional values to CloudError and Cluster Operation Logs (#2094): added additional values to CloudError; add details for cluster logs in terminal state; fixed issue with logging clusterResult; changed to generic name, added String() func; updated logging comments; added prefix to cloudErrorMessage String(); added additional json monikers; fixed bug with resultType output; defined CloudErrorCategory string type; shifted logs and resultType, removed category; changed logs to lowercase; plus empty commits to retrigger tests (Co-authored-by: BCarvalheira <bcarvalheira@microsoft.com>, Weinong Wang <weinong@outlook.com>)
- Improved the unit test coverage for the merge function of dynamichelper
- Fixed the validate golang code errors in the pipeline
- Updated the code based on Mikalai's feedback
- Fixed a go validation error
- added yaml lint (#2132): added yaml lint; updated the doc
- Build the MSFT Go fips-enabled code and tag the CI agent as having Go 1.17
- Bump to the latest Microsoft Golang FIPS release
- Updated bindata
- Switch back to the vanilla ci vmss names
- Revert the address prefix and keyvault name changes necessary to deploy to CI
- Switch back to using the RHEL go-toolset now that 8.6 is available on Azure
- Double the OS disk size; increase the disk size of the CI vmss to 200GB
- Updated bindata and move disk size to the correct vmss spec
- Add an option to send metrics via UDP instead of Unix Domain Sockets (#2074)
- replace allowOCM flag with a forceLocalOnly flag
- upgrade image to b4 when mhc is managed
- create an alert for frequent remediation (#2123)
- allow overriding the operator version in the admin API (#2134)
- Update pipelines to demand go 1.17 and update OB container to go 1.17 (#2146)
- update mdm/mdsd
- Add new ARO regions to pipelines: australiacentral, australiacentral2, swedencentral
- test for infra ID generation: this does not need installconfig, and so can be moved upwards in the install
- replace it with a vendored version, so that we don't need to utilise the installer portion
- validate apimachinery rand as utilrand
- split ensuregraph into applying customisations and then saving it to the storage account: if we use the vanilla installer we will likely still need to save the graph (after fetching it from hive), but we will not change things inside of it as we currently do
- Testing test in isolation
- refactored muo to extract deployer (#2122)
- removed go-bindata from pkg/operator (#2119)
- add: Getpodlogs kubeaction api (#1885)
- Migrate from AD to MS Graph: also changed the AADManager so that it only returns values instead of the data structure. This hides the implementation details so that if MSAL changes the internal representation in the future, any required changes will be contained within the class (vs. right now, custom.py has to be changed accordingly).
- fixed conflict created when moving to the new library (#2150)
- Bump eventsource from 1.1.0 to 1.1.1 in /portal/v2 ([release notes](https://github.com/EventSource/eventsource/releases), [changelog](https://github.com/EventSource/eventsource/blob/master/HISTORY.md), [commits](https://github.com/EventSource/eventsource/compare/v1.1.0...v1.1.1); Signed-off-by: dependabot[bot] <support@github.com>)
- Bump eventsource from 1.1.0 to 1.1.1 in /portal/v1 (Signed-off-by: dependabot[bot] <support@github.com>)
- clean up of validate import, now uses a yaml file for maintainability (#2136)
- Added more checks for cluster panel test to figure out test failure
- enable reconciling azuresubnets/NSGs by default
- refactor e2e to remove dependency
- removed old code
- make test fail on getting an error: Expect(err).NotTo(HaveOccurred())
- Formatting done
- White-spaces removed
- handle the use of the AddressPrefixes field alongside AddressPrefix
- improved ValidateCIDRRanges test
- add vnet names to help with debugging if needed in the future
- comment improvement
- Bump follow-redirects from 1.14.0 to 1.14.7 in /portal ([release notes](https://github.com/follow-redirects/follow-redirects/releases), [commits](https://github.com/follow-redirects/follow-redirects/compare/v1.14.0...v1.14.7); Signed-off-by: dependabot[bot] <support@github.com>)
- Store downloaded cert only when it differs: when the systemd downloader downloads a fresh certificate, check whether it differs from the stored one and replace the old one only when there is a difference (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Restart mdm service on cert change: forces the MDM container to pick up the changed certificate (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- doc: Document fp cert rotation: add doc file with information on how the first party certificate is rotated in the RP and on the host VM (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Replace artifacts with direct code checkout: replaces configuration fetching via build pipeline with direct code checkout (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Update .pipelines/int-release.yml (Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>)
- provide the ability to specify an overridden fluentbit image in operator feature flags
- Download aro deployer from tagged image: pull aro deployer from tagged container instead of pipeline artifact (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Add deploy pipelines using tag: add new pipelines using tagged deployment (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Set XDG_RUNTIME_DIR explicitly on CI VMs
- Add tagged aro image: add annotated tag build and push into makefile; without annotation, the TAG is empty and the action is not performed (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Build and push tagged aro image into ACR: when the annotated TAG is not set the new step fails; otherwise it builds the tagged image and pushes it to the ACR (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Build release on tag: when CI is started from a tag, build the image and push it to the registry; extract the annotation from the tag and use it as the summary for the changelog; the automated summary is extracted from commit titles (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- mdm/mdsd++
- make generate
- Revert "[PIPELINES 4] Create release based on annotated git tag"
- Fix: Broken pull path: the original path is not working as it is blocked for writing; using the pipeline default instead (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Fix: Broken checkout code path: the checkout behaves differently when checking out a single repository; it checks out to /s (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Update prod pipeline params to be consistent
- Enable SBOM on all OneBranch pipelines
- Fixing typo in paths
- Add Documentation and Scripts for ARO Monitor Metric testing
- Fix typo (Co-authored-by: Caden Marchese <56140267+cadenmarchese@users.noreply.github.com>)
- Handle cleanup of spawned processes
- Clarify a few things in the procedure
- Add example script to directly inject test data
- Revert "Revert "[PIPELINES 4] Create release based on annotated git tag""
- Fix: Remove build to run after e2e (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Bump nanoid from 3.1.22 to 3.2.0 in /portal ([release notes](https://github.com/ai/nanoid/releases), [changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md), [commits](https://github.com/ai/nanoid/compare/3.1.22...3.2.0); Signed-off-by: dependabot[bot] <support@github.com>)
- Add uaenorth to non-zonal regions
- imageconfig controller
- Fixing bug where incorrect ACR domain name was being generated
- added doc for cert rotation (Signed-off-by: Karan.Magdani <kmagdani@redhat.com>)
- Vendor installer release 4.9: this also forces the RP from Go 1.14 to Go 1.16. Aside from requiring OCP 4.9 / Kubernetes 1.22 modules, the other go.mod changes are all manual workarounds from failed "make vendor" runs.
- Automated updates from "make vendor"
- Alter client-gen command to stay within repo: the way this is written seems to assume the ARO-RP repo is cloned under the user's $GOPATH tree, which is not where I typically clone git repos for development. Use relative paths in the client-gen command and arguments to stay within the ARO-RP git repo.
- Automated updates from "make generate"
- Set InstallStream to OCP 4.9.8
- Automated updates from "make discoverycache"
- pipelines: Demand agents with go-1.16 capability for CI/E2E
- Update documentation for Go 1.16 and installer 4.9
- Fix: Remove the wrong git pull path: removes the wrong git pull path for ADO RP-config; removes unused parameter (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- fix: Add go1.16 requirement to run pipelines: with the addition of the 4.9 release, the go build has to run with go1.16 (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Add geneva action to reconcile a failed NIC
- Suppress stderr within Makefile command
- Do not overwrite FIPs environment variable in CI VMs
- fix: fix service connection to the github: the existing service connection does not meet the requirements for the github release (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- ADO Pipelines make no sense
- Ensure TAG environment var is consistent case
- Incorrect quoting on variables in pipeline
- Clean up debug print statement in pipelines
- Add INT/Prod variable group requirements
- Update correct directory path for pipeline template files
- Update release tag pipeline parameters
- Vendor updated autorest adal to fix nil pointer exception in MSI
- add fl to owners :-)
- Fix: use the correct variable syntax for updated variables in pipelines
- Bump 4.9.8 to 4.9.9, as it contains a bugfix for an issue preventing cluster creation from succeeding
- Vendor openshift installer carry patch
- Bump golang version to 1.16 in CI VMs
- Fix wrongly updated parameters and variables in prod release
- Feedback follow-up on image config controller
- Use INT E2E creds in Prod pipeline, as we pull from the INT image registry and spin up our resources in our INT sub
- clean temporary gomock folders (#1912) (Signed-off-by: Karan.Magdani <kmagdani@redhat.com>)
- fix 2 cred scan findings by adding suppression settings (#1960)
- add tsaoptions json file, enable tsa in build rp official pipeline (#1959)
- chore: removed logging onebranch pipelines files from aro-rp repo (#1942)
- quick fixes in docs (#1956)
- Removes unneeded field (#1962)
- Updated linux container image for build (#1964)
- Updating go-toolset tag to 1.16.12 (#1965)
- Bump follow-redirects from 1.14.7 to 1.14.8 in /portal ([release notes](https://github.com/follow-redirects/follow-redirects/releases), [commits](https://github.com/follow-redirects/follow-redirects/compare/v1.14.7...v1.14.8); Signed-off-by: dependabot[bot] <support@github.com>)
- add fips validation scripts and ci step
- drop net_raw and make generate
- Adding norwaywest to deploy from tag ALL regions pipeline (#1968)
- Include variable groups for prod single region release (#1957)
- Add Central US EUAP to nonZonalRegions (#1927)
- remove network acceleration due to issues discovered
- reapply the primary tag
- make generate
- Add metric gauge for no host present on request to gateway
- Fix net_raw caps, make generate (#1971)
- Refactors operator requeues: adds a clarifying comment on requeues to the checker controller; removes `Requeue: true` in places where we use `RequeueAfter`, as it has no effect
- add a field to indicate spotInstances in node.conditions metric (#1928)
- Bump url-parse from 1.5.3 to 1.5.7 in /portal ([release notes](https://github.com/unshiftio/url-parse/releases), [commits](https://github.com/unshiftio/url-parse/compare/1.5.3...1.5.7); Signed-off-by: dependabot[bot] <support@github.com>)
- docs: add cleaner info to shared env docs
- add westus3 to pipeline manifests
- add additional logging to redeploy, to help understand state when this job fails in e2e
- Re-enable Egress Lockdown: enable the egress lockdown feature by default on new clusters, while also allowing current clusters to be admin-upgraded with the new feature (Co-authored-by: Ben Vesel <10840174+bennerv@users.noreply.github.com>)
- fix: use the tag/commit as the aro version: ARO uses both tags and commits as its version; commits are used for the development scenario, tags are used when building and deploying to production
- add: copy ARO image to integration (Signed-off-by: Petr Kotas <petr@kotas.tech>)
- add: release pipeline documentation (Signed-off-by: Petr Kotas <petr@kotas.tech>)
- fix: HTTP 500 from "List cluster Azure resource" Geneva Action for unknown resource types (#1978): if we don't have an apiVersion defined for a resource, skip over it instead of returning an error; reword the comment; double-quote the resource type in the log warning message (Co-authored-by: Mikalai Radchuk <509198+m1kola@users.noreply.github.com>)
- add operator storage acc and endpoints reconcilers; operator tests; storageacc handling for install/update; generate; vendor; review feedback
- Add dev env rules exception
- Comply with the Authorizer changes
- Fix tests
- Fix merge conflicts
- Add operator flags; fix tests; change operator flags; addressing feedback; generate; operator flag tests; fix update cluster spec
- Add an Operator controller for Managed Upgrade Operator: add MUO deployment manifests; run go generate; add a mocks directory in the operator; make dynamichelper produce less spurious changes for MUO
- fix: move int mirroring to separate pipelines: integration requires its own set of credentials, which can only be provided in a separate pipeline (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- fix: provide the correct dependent pipeline (#1982) (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Update mirror-aro-to-int.yml for Azure Pipelines: remove unused parameter
- fix: replace parameter with variable (#1984) (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Update mirror-aro-to-int.yml for Azure Pipelines: fix typo
- Cleans up unused args in `muo.NewReconciler`
- Bump url-parse from 1.5.7 to 1.5.10 in /portal ([release notes](https://github.com/unshiftio/url-parse/releases), [commits](https://github.com/unshiftio/url-parse/compare/1.5.7...1.5.10); Signed-off-by: dependabot[bot] <support@github.com>)
- Removes explicit `gomock.Eq()` matcher calls (#1983): `gomock.Eq()` is the default matcher in gomock, so it doesn't have to be explicitly called in these cases
- Docs: Set GOPATH (#1987): a few developers on various OS flavors have seen make generate fail after the upgrade to golang 1.16 due to client-gen updates; this appears to fix it
- Adds extra fields to the PreviewFeature CRD; adds the controller implementation — it currently implements only one feature: the NSG flow logs preview feature controller and NSG flow log feature implementation
- L series support - RP changes (#1751): add L-series SKUs to internal, admin, validate api; make client
- Add SKU availability and restriction checks to dynamic validation (#1790): add sku filtering and restriction checks; add install-time instance validation
- Minor ARO operator refactoring: gets rid of exported constants like `ENABLED` where exported constants are not required; gets rid of constant concatenations like `CONFIG_NAMESPACE + ".enabled"` to make search easier; removes the unnecessary `Copy` method of the `OperatorFlags` struct as well as the package-level `DefaultOperatorFlags` variable, introducing `DefaultOperatorFlags()` instead
- Removing call to listByResourceGroup due to flakiness in the Azure API
- add validate-fips step into onebranch build rp template
- exclude vuln protobuf
- exclude vulnerable containerd versions
- Changed CloudErrorCodes from vars to consts (#1997) (Co-authored-by: Jeremy Facchetti <jfacchet@jfacchet.remote.csb>)
- Add sourcebranchname to build_tag (#1996)
- adding a way to pass additional flags to E2E tests (#1998)
- Fix typo in deploy-development-rp doc (#2005)
- Better documentation support for multiple envs (#1932): now there are two env files, standard and int-like; instructions modified for int envs to create the new file and source it; fixed a small typo in the instructions that was being masked by indentation
- vendor: fake operator client (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- feature: add autosizednodes reconciler: introduces the autosizednodes reconciler, which watches the aro cluster object feature flags for ReconcileAutoSizedNodes; when the feature flag is present, a new KubeletConfig is created enabling the AutoSizingReserver feature, which auto-computes the system reserved for nodes
- feature: add aro cluster to workaround: adds the aro cluster instance to the IsRequires check to allow for feature-flag checking (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- feature: disable systemreserved when autosizednodes enabled (Signed-off-by: Petr Kotas <pkotas@redhat.com>)
- Avoid AdminUpdate panic when Nodes are down (#1972): skip ensureAROOperator and aroDeploymentReady when the IngressProfiles data is missing, esp. after cluster VM restarts as part of the update call; refactor Cluster Manager code to make ensureAROOperator testable; add unit test for ensureAROOperator (Co-authored-by: Ulrich Schlueter <uschlueter@redhat.com>)
- update go-cosmosdb version to incorporate the latest change (#2006)
- Filter out unwanted data from azure list geneva action (#1969): filter out Microsoft.Compute/snapshots from the azure list geneva action; change filter input for test
- Doc to create & push ARO Operator image to ACR/Quay (#1888): a document on how to create & publish the ARO Operator image to ACR/Quay
- Added alternative to go get command (#2015)
- Update Makefile (#2020): the ARO-RP returns special characters in color encoding, which are not decoded as of now; this change removes the color encoding characters by default in e2e tests
- Update node-selector on muo namespace
- Dockerfile for MUO image (#1993)
- Update OB Build Pipeline to Pass Build Tag as Var (#2011): adding release_tag functionality to support releasing by tag or commit
- add managed upgrade operator configuration settings and connected MUO if allowed and a pullsecret exists; add muo config yaml; add openshift-azure-logging to the ignored namespaces; run go generate
- Fix VM Redeploy Test Flake: removing test checking k8s Events for Node readiness; adding test for Azure VM readiness (power state); adding test for Linux kernel uptime to guarantee reboot
- disable ipv6 router advertisements on rp/gateway vmss
- Install python3 on RP and gateway VMs
- make pullspec an optional flag; add enabled and managed by default; add e2e test
- Bump minimist from 1.2.5 to 1.2.6 in /portal ([release notes](https://github.com/substack/minimist/releases), [commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6); Signed-off-by: dependabot[bot] <support@github.com>)
- cleanup: proxy now uses idiomatic waitgroup
- cleanup: removed useless anonymous function definition
- add containers_image_openpgp tag (#2032)
- Change secrets-update to allow subsequent updates (#2038) (Co-authored-by: Nont <nthanonchai@microsoft.com>)
- add containers_image_openpgp everywhere
- add controller into operator for machine health check (#1950): add worker-only controller with operator for machine health check; align mhc node selector pattern with osd
- Create 2022-04-01 API (#1876)
- check for default ingressIP when ingressProfiles > 1 (#2021) (Signed-off-by: Karan.Magdani <kmagdani@redhat.com>)
- Skip Linux AZ Sec Pack policies from running on VMSS creation (#2041)
- Admin Portal v2 (#2019): add in sre portal v2, still defaulting to v1 (Co-authored-by: Amber Brown <ambrown@redhat.com>, Brett Embery <bembery@redhat.com>, Ben Vesel <10840174+bennerv@users.noreply.github.com>)
- Bump minimist from 1.2.5 to 1.2.6 in /portal/v2 (#2043) (Signed-off-by: dependabot[bot] <support@github.com>; Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>)
- added changes to make local e2e test work / update doc (#2036)
- updated operator README to include instructions for running the ARO operator locally for a private cluster (#2045)
- Fix off-by-one error when truncating name: now it truncates to 14 instead of 15; the corresponding arm templates truncate to 15
- Refactors createOrUpdateRouterIPFromCluster: make it reuse isIngressProfileAvailable to check the IngressProfile; adds an extra case to TestAroDeploymentReady
- Updates dev env docs: removes mention of Python virtualenv, as it comes by default with Python 3; updates macOS docs to make sure the steps work for Intel and ARM Macs; Markdown formatting fixes
- give /tmp a bit more room for when the CI VM gets busy
- refactor+test: refactored some functions to test; refactored tests; added license to test file; added err check on validateProxyRequest; made the errors more explicit; fixed typo in function name; removed useless test case
- renamed oddly named metrics.Interface to Emitter
- update codeowners: renamed github username
- updated path to quota file (#2058)
- refactor/add-test: refactored linkid and gateway to add tests (#2013)
- Enable first basic linters in ARO (#2060): enable first basic linters; remove modules-download-mode from the linter run config
- Commit to allow password auth for VMSS jit access (#2027)
- fix: now uses renamed interface metricsEmitter
- fix issues with linting new test files
- added doc.go for imgconfig controller (#2064) (Signed-off-by: Karan.Magdani <kmagdani@redhat.com>)
- Revert 2027: Commit to allow password auth for VMSS jit access
- Add logic to reconcile failed Nic on az aro delete (Co-authored-by: Ben Vesel <bennerv@users.noreply.github.com>)
- Update pull secret references from cloud.redhat.com to cloud.openshift.com (#2084)
- Enables go fmt simplify (#2081)
- update reference to cloud.redhat.com in README file (#2085)
- ensure apiserverready check
- redesigned the quota computation to something understandable (#2059)
- Bump 4.9 install image to latest stable 4.9.28 to address etcd split-brain issue
- Fail MUO test if we expect an error but don't get one
- Bump fluentbit, mdm, and mdsd images to mitigate P0/P1s
- Bump async from 2.6.3 to 2.6.4 in /portal/v2 ([release notes](https://github.com/caolan/async/releases), [changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md), [commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4); Signed-off-by: dependabot[bot] <support@github.com>)
- Update the secret rotate time to 7 days during RP deploy (#2051)
- Remove dead mirror code referencing the 4.3 version, which isn't mirrored (#2092)
- add MTU to the internal OCP Document
- make generate before mock; added unit tests for two new functions; fix import order; remove trailing spaces; make validate-go wants to add trailing lines again; found/fixed trailing new line; add new line at end of test file; added admin update method to adminupdate tests; newline; fixed unit test issue; add helper method; improve comment; gofmt
- Remove ACR Image Override (#2090)
- added stylecheck and moved golangci-lint to a github action (#2083): enabled github action instead of running from ADO; fixed style; fixed failing tests because of case on errs
- Small updates to shared rp docs (#2079): "note" syntax adjustments; small updates to shared rp docs from working sessions; added note related to gwy keyvault not being in dev; update docs/prepare-a-shared-rp-development-environment.md; language adjustment per Caden's suggestion (Co-authored-by: Caden Marchese <56140267+cadenmarchese@users.noreply.github.com>)
- Additional gateway tests (#2062): add coverage for pkg/gateway — gateway creation now fails fast when env properties are missing; refactor large test into multiple test cases
- Move gateway fluentbit to container
- Bump async from 2.6.3 to 2.6.4 in /portal/v1 (Signed-off-by: dependabot[bot] <support@github.com>)
- set MDSD_MSGPACK_SORT_COLUMNS to perform column sorting on the MDSD side and try to avoid hitting the max schema count (#2095)
- Remove mwoodson from codeowners (#2106)
- Updated FIPs e2e test for 2022-04-01 API
- Development subscription migration: prepare for dns migration (Signed-off-by: Karan.Magdani <kmagdani@redhat.com>)
Switch to building with golang 1.17 Switch maoclient -> machineclient and maofake -> machinefake gofmt: add "go:build e2e" Switch to using the ubi8 go-toolset for building. Add additional values to CloudError and Cluster Operation Logs (#2094) * Added additional values to CloudError * Update pkg/api/error.go Co-authored-by: Weinong Wang <weinong@outlook.com> * Add details for cluster logs in terminal state * Fixed issue with logging clusterResult * Changed to generic name, add String() func * Update logging comments Co-authored-by: Weinong Wang <weinong@outlook.com> * Add prefix to cloudErrorMessage String() * Add additional json monikers * Fix bug with resultType output * Defined CloudErrorCategory string type * Empty-Commit to retrigger test * Shift logs, remove code for next PR * Added log fields, removed category * Shift resultType to Logs * Empty-Commit to retrigger test * Remove all error changes * Update openshiftcluster.go change logs to lowercase Co-authored-by: BCarvalheira <bcarvalheira@microsoft.com> Co-authored-by: Weinong Wang <weinong@outlook.com> Improved the unit test coverage for the merge function of dynamichelper Fixed the validate golang code errors in the pipeline Updated the code based on Mikalai's feedback Fixed a go validation error added yaml lint (#2132) * added yaml lint * updated the doc Build the MSFT Go fips enabled code and tag the CI Agent as having Go 1.17. Bump to the latest Microsoft Golang FIPS release. Updated bindata. Switch back to the vanilla ci vmss names. Revert the address prefix and keyvault name changes necessary to deploy to CI. Switch back to using the RHEL go-toolset now that 8.6 is available on Azure. Double the OS Disk size. Increase the disk size of the CI vmss to 200GB. Updated bindata and move disk size to the correct vmss spec. 
Add an option to send metrics via UDP instead of Unix Domain Sockets (#2074) replace allowOCM flag with a forceLocalOnly flag upgrade image to b4 when mhc is managed create an alert for frequent remediation (#2123) allow overriding the operator version in the admin API (#2134) Update pipelines to demand go 1.17 and update OB container to go 1.17 (#2146) update mdm/mdsd Add new ARO regions to pipelines - australiacentral - australiacentral2 - swedencentral test for infra ID generation this does not need installconfig, and so can be moved upwards in the install replace it with a vendored version, so that we don't need to utilise the installer portion validate apimachinery rand as utilrand split ensuregraph into applying customisations and then saving it to the storage account. if we use the vanilla installer, we will likely still need to save the graph (after fetching it from hive) but we will not change things inside of it like currently. refactored muo to extract deployer (#2122) removed go-bindata from pkg/operator (#2119) add: Getpodlogs kubeaction api (#1885) Migrate from AD to MS Graph Also changed the AADManager so that it only returns values instead of the data structure. This hides the implementation details so that in the future if MSAL changes the internal representation, any required changes will be contained within the class (vs. right now custom.py has to be changed accordingly). fixed conflict created when moving to the new library (#2150) Bump eventsource from 1.1.0 to 1.1.1 in /portal/v2 Bumps [eventsource](https://github.com/EventSource/eventsource) from 1.1.0 to 1.1.1. - [Release notes](https://github.com/EventSource/eventsource/releases) - [Changelog](https://github.com/EventSource/eventsource/blob/master/HISTORY.md) - [Commits](https://github.com/EventSource/eventsource/compare/v1.1.0...v1.1.1) --- updated-dependencies: - dependency-name: eventsource dependency-type: indirect ... 
Signed-off-by: dependabot[bot] <support@github.com> Bump eventsource from 1.1.0 to 1.1.1 in /portal/v1 Bumps [eventsource](https://github.com/EventSource/eventsource) from 1.1.0 to 1.1.1. - [Release notes](https://github.com/EventSource/eventsource/releases) - [Changelog](https://github.com/EventSource/eventsource/blob/master/HISTORY.md) - [Commits](https://github.com/EventSource/eventsource/compare/v1.1.0...v1.1.1) --- updated-dependencies: - dependency-name: eventsource dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> clean up of validate import, now uses a yaml file for maintainability (#2136) Updated portal bindata create lint-go script and call it from Makefile (#2118) Co-authored-by: Jeremy Facchetti <facchettos@gmail.com> Add name length validation on ARO clusters for non-zonal regions Truncate cluster names to 19 char in e2e pipelines Typo in pipeline script Added cookie as part of test and added extra error output Seperated image pull and container start for selenium Fixing up docker command
This commit is contained in:
Parent: 075f322eb1
Commit: 71febeb26c
@@ -1 +1 @@
-* @jewzaam @m1kola @bennerv @hawkowl @rogbas @petrkotas @ross-bryan
+* @jewzaam @m1kola @bennerv @hawkowl @rogbas @petrkotas @ross-bryan @darthhexx
@@ -16,8 +16,8 @@ jobs:
     name: lint
     runs-on: ubuntu-latest
     steps:
-    - run: |
-        sudo apt-get update
+    - run: |
+        sudo apt-get update
+        sudo apt-get install libgpgme-dev libgpgme11
     - uses: actions/setup-go@v3
       with:
@@ -28,7 +28,7 @@ jobs:
       with:
         # Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
         version: v1.45.2
         args: -v --timeout 15m
         # Optional: working directory, useful for monorepos
         #working-directory: pkg
@@ -30,6 +30,5 @@ gomock_reflect_*
/portal/v1/node_modules/
/portal/v2/node_modules/
.idea*
/hack/hive-config/crds
/hack/hive-config/hive-deployment.yaml
cmd/aro/__debug_bin
.vscode/
/portal/v2/.vscode
@@ -1,7 +1,10 @@
 run:
-  timeout: 5m
+  timeout: 10m
   skip-dirs:
-    - vendor/portal
+    - vendor
   skip-dirs-use-default: true
   modules-download-mode: vendor
+
+issues:
+  exclude-rules:
@@ -21,7 +24,6 @@ linters-settings:
     - github.com/onsi/ginkgo/v2
     - github.com/onsi/gomega

 linters:
   disable-all: true
   enable:
@@ -9,7 +9,7 @@ jobs:
 - job: Build_and_push_images
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17

   steps:
   - template: ./templates/template-checkout.yml
@@ -36,7 +36,7 @@ jobs:
 - job: Golang_Unit_Tests
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17
   steps:
   - template: ./templates/template-checkout.yml
@@ -96,12 +96,3 @@ jobs:
       set -xe
       make lint-admin-portal
     displayName: 🧹 Lint Admin Portal
-
-- job: Check_Image_Pull
-  pool:
-    name: ARO-CI
-  steps:
-  - script: |
-      set -xe
-      make test-image-pull
-    displayName: Just checking
@@ -15,7 +15,7 @@ jobs:
   timeoutInMinutes: 180
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17
   steps:
   - template: ./templates/template-checkout.yml
   - template: ./templates/template-az-cli-login.yml
@@ -46,14 +46,12 @@ jobs:
       export PRIVATE_CLUSTER=true

       . ./hack/e2e/run-rp-and-e2e.sh
-      trap 'set +e; kill_rp; kill_portal; clean_e2e_db' EXIT
+      trap 'set +e; kill_rp; clean_e2e_db' EXIT

       deploy_e2e_db

       run_rp
-      run_portal
       validate_rp_running
-      validate_portal_running
       register_sub

       export CI=true
@@ -13,7 +13,7 @@ jobs:
   displayName: Build release
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17

   steps:
   - template: ./templates/template-checkout.yml
@@ -20,7 +20,7 @@ jobs:
   condition: startsWith(variables['build.sourceBranch'], 'refs/tags/v2')
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17

   steps:
   - template: ./templates/template-checkout.yml
@@ -13,7 +13,7 @@ pr: none

 variables:
   Cdp_Definition_Build_Count: $[counter('', 0)] # needed for onebranch.pipeline.version task https://aka.ms/obpipelines/versioning
-  LinuxContainerImage: cdpxlinux.azurecr.io/user/aro/ubi8-gotoolset-1.16.12-4:20220202 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
+  LinuxContainerImage: cdpxlinux.azurecr.io/user/aro/ubi8-gotoolset-1.17.7-13:20220526 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
   Debian_Frontend: noninteractive

 resources:
@@ -12,8 +12,8 @@ trigger: none
 pr: none

 variables:
-  Cdp_Definition_Build_Count: $[counter('', 0)] # needed for onebranch.pipeline.version task https://aka.ms/obpipelines/versioning
-  LinuxContainerImage: cdpxlinux.azurecr.io/user/aro/ubi8-gotoolset-1.16.12-4:20220202 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
+  Cdp_Definition_Build_Count: $[counter('', 0)] # needed for onebranch.pipeline.version task https://aka.ms/obpipelines/versioning
+  LinuxContainerImage: cdpxlinux.azurecr.io/user/aro/ubi8-gotoolset-1.17.7-13:20220526 # Docker image which is used to build the project https://aka.ms/obpipelines/containers
   Debian_Frontend: noninteractive

 resources:
@@ -45,6 +45,8 @@ stages:
       rpMode: ''
       aroVersionStorageAccount: $(aro-version-storage-account)
       locations:
+      - australiacentral
+      - australiacentral2
       - australiaeast
       - australiasoutheast
       - centralindia
@@ -107,6 +109,7 @@ stages:
       - northeurope
       - norwayeast
       - norwaywest
+      - swedencentral
       - switzerlandnorth
       - switzerlandwest
       - westeurope
@@ -21,7 +21,7 @@ jobs:
   - template: ../vars.yml
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17
   environment: ${{ parameters.environment }}
   strategy:
     runOnce:
@@ -87,7 +87,17 @@ jobs:
   - script: |
       # Pass variables between tasks: https://medium.com/microsoftazure/how-to-pass-variables-in-azure-pipelines-yaml-tasks-5c81c5d31763
       echo "##vso[task.setvariable variable=REGION]${{ location }}"
-      CLUSTER="v4-e2e-V$BUILD_BUILDID-${{ location }}"
+      # TODO: Remove this hack after AvailabilitySet name too long bug is fixed.
+      LOCATION=${{ location }}
+      NONZONAL_REGIONS="australiacentral australiacentral2 australiasoutheast brazilsoutheast canadaeast japanwest northcentralus norwaywest southindia switzerlandwest uaenorth ukwest westcentralus westus"
+      if echo $NONZONAL_REGIONS | grep -wq $LOCATION
+      then
+          CLUSTER=$(head -c 19 <<< "v4-e2e-V$BUILD_BUILDID-$LOCATION")
+      else
+          CLUSTER="v4-e2e-V$BUILD_BUILDID-$LOCATION"
+      fi
+      # TODO: Uncomment next line after above hack is removed.
+      # CLUSTER="v4-e2e-V$BUILD_BUILDID-${{ location }}"
       echo "##vso[task.setvariable variable=CLUSTER]$CLUSTER"
       CLUSTER_RESOURCEGROUP="v4-e2e-V$BUILD_BUILDID-${{ location }}"
       echo "##vso[task.setvariable variable=CLUSTER_RESOURCEGROUP]$CLUSTER_RESOURCEGROUP"
@@ -22,7 +22,7 @@ jobs:
   - template: ../vars.yml
   pool:
     name: ARO-CI
-    demands: go-1.16
+    demands: go-1.17
   environment: ${{ parameters.environment }}
   strategy:
     runOnce:
@@ -83,7 +83,17 @@ jobs:
   - script: |
       # Pass variables between tasks: https://medium.com/microsoftazure/how-to-pass-variables-in-azure-pipelines-yaml-tasks-5c81c5d31763
       echo "##vso[task.setvariable variable=REGION]${{ location }}"
-      CLUSTER="v4-e2e-V$BUILD_BUILDID-${{ location }}"
+      # TODO: Remove this hack after AvailabilitySet name too long bug is fixed.
+      LOCATION=${{ location }}
+      NONZONAL_REGIONS="australiacentral australiacentral2 australiasoutheast brazilsoutheast canadaeast japanwest northcentralus norwaywest southindia switzerlandwest uaenorth ukwest westcentralus westus"
+      if echo $NONZONAL_REGIONS | grep -wq $LOCATION
+      then
+          CLUSTER=$(head -c 19 <<< "v4-e2e-V$BUILD_BUILDID-$LOCATION")
+      else
+          CLUSTER="v4-e2e-V$BUILD_BUILDID-$LOCATION"
+      fi
+      # TODO: Uncomment next line after above hack is removed.
+      # CLUSTER="v4-e2e-V$BUILD_BUILDID-${{ location }}"
       echo "##vso[task.setvariable variable=CLUSTER]$CLUSTER"
       CLUSTER_RESOURCEGROUP="v4-e2e-V$BUILD_BUILDID-${{ location }}"
       echo "##vso[task.setvariable variable=CLUSTER_RESOURCEGROUP]$CLUSTER_RESOURCEGROUP"
@@ -4,7 +4,7 @@
 # Currently the docker version on our RHEL7 VMSS uses a version which
 # does not support multi-stage builds. This is a temporary stop-gap
 # until we get podman working without issue
-FROM registry.access.redhat.com/ubi7/go-toolset:1.16.12 AS builder
+FROM registry.access.redhat.com/ubi8/go-toolset:1.17.7 AS builder
 ENV GOOS=linux \
     GOPATH=/go/
 WORKDIR ${GOPATH}/src/github.com/Azure/ARO-RP
Makefile (8 changes)
@@ -165,20 +165,16 @@ unit-test-go:
 	go run ./vendor/gotest.tools/gotestsum/main.go --format pkgname --junitfile report.xml -- -tags=aro,containers_image_openpgp -coverprofile=cover.out ./...

 lint-go:
-	go run ./vendor/github.com/golangci/golangci-lint/cmd/golangci-lint run
+	hack/lint-go.sh

 lint-admin-portal:
 	docker build -f Dockerfile.portal_lint . -t linter
 	docker run -it --rm localhost/linter ./src --ext .ts

-test-image-pull:
-	docker run --network="host" -d -p 4444:4444 selenium/standalone-chrome:latest
-
 test-python: pyenv az
 	. pyenv/bin/activate && \
 	azdev linter && \
-	azdev style && \
-	hack/format-yaml/format-yaml.py .pipelines
+	azdev style

 admin.kubeconfig:
 	hack/get-admin-kubeconfig.sh /subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${RESOURCEGROUP}/providers/Microsoft.RedHatOpenShift/openShiftClusters/${CLUSTER} >admin.kubeconfig
@@ -116,11 +116,7 @@ questions or comments.

 * machineset: Ensures that a minimum of two worker replicas are met.

-* machinehealthcheck: Ensures the MachineHealthCheck resource is running as configured so that at most one worker node at a time is automatically
-  reconciled when not ready for at least 5 minutes.
-  * The CR will only be applied when both `aro.machinehealthcheck.managed` and `aro.machinehealthcheck.enabled` are set to `"true"`.
-  * When `aro.machinehealthcheck.enabled` is `"false"` and `aro.machinehealthcheck.managed` is `"false"` the CR will be removed from the cluster.
-  * If `aro.machinehealthcheck.enabled` is `"false"` no actions will be taken to modify the CR.
+* machinehealthcheck: Ensures the MachineHealthCheck resource is running as configured. See [machinehealthcheck/doc.go](pkg/operator/controllers/machinehealthcheck/doc.go)
+  * More information around the MHC CR can be found [in openshift documentation of MHC](https://docs.openshift.com/container-platform/4.9/machine_management/deploying-machine-health-checks.html)

 * monitoring: Ensures that the OpenShift monitoring configuration in the `openshift-monitoring` namespace is consistent and immutable.
@@ -11,8 +11,8 @@ import (
 	configclient "github.com/openshift/client-go/config/clientset/versioned"
 	consoleclient "github.com/openshift/client-go/console/clientset/versioned"
 	imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
 	securityclient "github.com/openshift/client-go/security/clientset/versioned"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
 	mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
 	"github.com/sirupsen/logrus"
 	"k8s.io/client-go/kubernetes"
@@ -92,7 +92,7 @@ func operator(ctx context.Context, log *logrus.Entry) error {
 	if err != nil {
 		return err
 	}
-	maocli, err := maoclient.NewForConfig(restConfig)
+	maocli, err := machineclient.NewForConfig(restConfig)
 	if err != nil {
 		return err
 	}
@@ -70,7 +70,7 @@ func rp(ctx context.Context, log, audit *logrus.Entry) error {
 		return err
 	}

-	m := statsd.New(ctx, log.WithField("component", "metrics"), _env, os.Getenv("MDM_ACCOUNT"), os.Getenv("MDM_NAMESPACE"))
+	m := statsd.New(ctx, log.WithField("component", "metrics"), _env, os.Getenv("MDM_ACCOUNT"), os.Getenv("MDM_NAMESPACE"), os.Getenv("MDM_STATSD_SOCKET"))

 	g, err := golang.NewMetrics(log.WithField("component", "metrics"), m)
 	if err != nil {
@@ -0,0 +1,47 @@
+#!/bin/bash
+
+git rm pkg/util/arm/deploy.go
+git rm pkg/util/arm/deploy_test.go
+git rm vendor/github.com/aliyun/alibaba-cloud-sdk-go/sdk/auth/credential.go
+git rm vendor/github.com/aliyun/alibaba-cloud-sdk-go/sdk/errors/error.go
+git rm vendor/github.com/chai2010/gettext-go/LICENSE
+git rm vendor/github.com/golangci/golangci-lint/pkg/config/linters_settings_gocritic.go
+git rm vendor/github.com/jgautheron/goconst/README.md
+git rm vendor/github.com/metal3-io/baremetal-operator/pkg/hardwareutils/bmc/idrac.go
+git rm vendor/github.com/mgechev/dots/resolve.go
+git rm vendor/github.com/mgechev/revive/rule/cyclomatic.go
+git rm vendor/github.com/openshift/api/machine/v1beta1/types_vsphereprovider.go
+git rm vendor/github.com/openshift/client-go/machine/clientset/versioned/clientset.go
+git rm vendor/github.com/openshift/client-go/machine/clientset/versioned/fake/register.go
+git rm vendor/github.com/openshift/client-go/machine/clientset/versioned/scheme/register.go
+git rm vendor/github.com/openshift/client-go/machine/clientset/versioned/typed/machine/v1beta1/fake/fake_machine_client.go
+git rm vendor/github.com/openshift/client-go/machine/clientset/versioned/typed/machine/v1beta1/machine_client.go
+git rm vendor/github.com/openshift/cluster-api/pkg/apis/machine/v1beta1/common_types.go
+git rm vendor/github.com/openshift/cluster-api/pkg/apis/machine/v1beta1/doc.go
+git rm vendor/github.com/openshift/cluster-api/pkg/apis/machine/v1beta1/zz_generated.deepcopy.go
+git rm vendor/github.com/openshift/installer/pkg/asset/ignition/bootstrap/bootstrap_ignition.go
+git rm vendor/github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/register.go
+git rm vendor/github.com/quasilyte/go-ruleguard/internal/gogrep/parse.go
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/.editorconfig
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/.gitignore
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/analyzer.go
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/checks/argument.go
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/checks/assign.go
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/checks/operation.go
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/checks/return.go
+git rm vendor/github.com/tommy-muehle/go-mnd/v2/config/config.go
+git rm vendor/honnef.co/go/tools/analysis/code/code.go
+git rm vendor/honnef.co/go/tools/analysis/code/visit.go
+git rm vendor/honnef.co/go/tools/analysis/facts/deprecated.go
+git rm vendor/honnef.co/go/tools/analysis/report/report.go
+git rm vendor/honnef.co/go/tools/go/ir/doc.go
+git rm vendor/honnef.co/go/tools/go/ir/exits.go
+git rm vendor/honnef.co/go/tools/go/ir/html.go
+git rm vendor/honnef.co/go/tools/go/ir/irutil/load.go
+git rm vendor/honnef.co/go/tools/go/ir/irutil/visit.go
+git rm vendor/honnef.co/go/tools/go/ir/util.go
+git rm vendor/honnef.co/go/tools/knowledge/arg.go
+git rm vendor/k8s.io/api/flowcontrol/v1beta2/doc.go
+git rm vendor/k8s.io/apimachinery/third_party/forked/golang/LICENSE
+git rm vendor/k8s.io/client-go/applyconfigurations/core/v1/lifecyclehandler.go
+git rm vendor/sigs.k8s.io/kustomize/api/filters/prefix/prefix.go
@@ -69,7 +69,7 @@
 az deployment group create \
   -g "$RESOURCEGROUP" \
   -n "databases-development-$USER" \
-  --template-file pkg/deploy/assets/databases-development.json \
+  --template-file deploy/databases-development.json \
   --parameters \
     "databaseAccountName=$DATABASE_ACCOUNT_NAME" \
     "databaseName=$DATABASE_NAME" \
@@ -99,17 +99,15 @@
 OR use the create utility:

 ```bash
-CLUSTER=<cluster-name> go run ./hack/cluster create
+CLUSTER=cluster go run ./hack/cluster create
 ```

 Later the cluster can be deleted as follows:

 ```bash
-CLUSTER=<cluster-name> go run ./hack/cluster delete
+CLUSTER=cluster go run ./hack/cluster delete
 ```

 By default, a public cluster will be created. In order to create a private cluster, set the `PRIVATE_CLUSTER` environment variable to `true` prior to creation. Internet access from the cluster can also be restricted by setting the `NO_INTERNET` environment variable to `true`.

 [1]: https://docs.microsoft.com/en-us/azure/openshift/tutorial-create-cluster

 1. The following additional RP endpoints are available but not exposed via `az
@@ -163,36 +161,6 @@
 curl -X GET -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/serialconsole?vmName=$VMNAME" --header "Content-Type: application/json" -d "{}"
 ```

-* Redeploy node of a dev cluster
-  ```bash
-  VMNAME="aro-cluster-qplnw-master-0"
-  curl -X POST -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/redeployvm?vmName=$VMNAME" --header "Content-Type: application/json" -d "{}"
-  ```
-
-* Stop node of a dev cluster
-  ```bash
-  VMNAME="aro-cluster-qplnw-master-0"
-  curl -X POST -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/stopvm?vmName=$VMNAME" --header "Content-Type: application/json" -d "{}"
-  ```
-
-* Start node of a dev cluster
-  ```bash
-  VMNAME="aro-cluster-qplnw-master-0"
-  curl -X POST -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/startvm?vmName=$VMNAME" --header "Content-Type: application/json" -d "{}"
-  ```
-
-* List VM Resize Options for a master node of dev cluster
-  ```bash
-  curl -X GET -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/skus" --header "Content-Type: application/json" -d "{}"
-  ```
-
-* Resize master node of a dev cluster
-  ```bash
-  VMNAME="aro-cluster-qplnw-master-0"
-  VMSIZE="Standard_D16s_v3"
-  curl -X POST -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/resize?vmName=$VMNAME&vmSize=$VMSIZE" --header "Content-Type: application/json" -d "{}"
-  ```
-
 * List Clusters of a local-rp
   ```bash
   curl -X GET -k "https://localhost:8443/admin/providers/microsoft.redhatopenshift/openshiftclusters"
@@ -216,35 +184,7 @@
 curl -X GET -k "https://localhost:8443/admin/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/kubernetespodlogs?podname=$POD&namespace=$NAMESPACE&container=$CONTAINER"
 ```

-## OpenShift Version
-
-* We have a cosmos container which contains supported installable OCP versions, more information on the definition in `pkg/api/openshiftversion.go`.
-
-* Admin - List OpenShift installation versions
-  ```bash
-  curl -X GET -k "https://localhost:8443/admin/versions"
-  ```
-
-* Admin - Put a new OpenShift installation version
-  ```bash
-  curl -X PUT -k "https://localhost:8443/admin/versions" --header "Content-Type: application/json" -d '{ "properties": { "version": "4.10.0", "enabled": true, "openShiftPullspec": "test.com/a:b", "installerPullspec": "test.com/a:b" }}'
-  ```
-
-* List the enabled OpenShift installation versions within a region
-  ```bash
-  curl -X GET -k "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID/providers/Microsoft.RedHatOpenShift/locations/$LOCATION/listinstallversions?api-version=2022-09-04"
-  ```
-
-## OpenShift Cluster Manager (OCM) Configuration API Actions
-
-* Create a new OCM configuration
-  * You can find example payloads in the projects `./hack/ocm` folder.
-
-  ```bash
-  curl -X PUT -k "https://localhost:8443/subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$RESOURCEGROUP/providers/Microsoft.RedHatOpenShift/openShiftClusters/$CLUSTER/syncsets/mySyncSet?api-version=2022-09-04" --header "Content-Type: application/json" -d @./hack/ocm/syncset.b64
-  ```
-
-## Debugging OpenShift Cluster
+## Debugging

 * SSH to the bootstrap node:
 > __NOTE:__ If you have a password-based `sudo` command, you must first authenticate before running `sudo` in the background
@@ -284,42 +224,6 @@
 CLUSTER=cluster hack/ssh-agent.sh bootstrap # the bootstrap node used to provision cluster
 ```

-# Debugging AKS Cluster
-
-* Connect to the VPN:
-
-  To access the cluster for oc / kubectl or SSH'ing into the cluster you need to connect to the VPN first.
-  > __NOTE:__ If you have a password-based `sudo` command, you must first authenticate before running `sudo` in the background
-  ```bash
-  sudo openvpn secrets/vpn-aks-$LOCATION.ovpn &
-  ```
-
-* Access the cluster via API (oc / kubectl):
-
-  ```bash
-  make aks.kubeconfig
-  export KUBECONFIG=aks.kubeconfig
-
-  $ oc get nodes
-  NAME                                 STATUS   ROLES   AGE   VERSION
-  aks-systempool-99744725-vmss000000   Ready    agent   9h    v1.23.5
-  aks-systempool-99744725-vmss000001   Ready    agent   9h    v1.23.5
-  aks-systempool-99744725-vmss000002   Ready    agent   9h    v1.23.5
-  ```
-
-* "SSH" into a cluster node:
-
-  * Run the ssh-aks.sh script, specifying the cluster name and the node number of the VM you are trying to ssh to.
-  ```
-  hack/ssk-aks.sh aro-aks-cluster 0 # The first VM node in 'aro-aks-cluster'
-  hack/ssk-aks.sh aro-aks-cluster 1 # The second VM node in 'aro-aks-cluster'
-  hack/ssk-aks.sh aro-aks-cluster 2 # The third VM node in 'aro-aks-cluster'
-  ```
-
-* Access via Azure Portal
-
-  Due to the fact that the AKS cluster is private, you need to be connected to the VPN in order to view certain AKS cluster properties, because the UI interrogates k8s via the VPN.
-
 ### Metrics

 To run fake metrics socket:
@@ -58,10 +58,8 @@

 1. Push the ARO and Fluentbit images to your ACR

-   > If running this step from a VM separate from your workstation, ensure the commit tag used to build the image matches the commit tag where `make deploy` is run.
-
-   > Due to security compliance requirements, `make publish-image-*` targets pull from `arointsvc.azurecr.io`. You can either authenticate to this registry using `az acr login --name arointsvc` to pull the image, or modify the $RP_IMAGE_ACR environment variable locally to point to `registry.access.redhat.com` instead.
+   __NOTE:__ If running this step from a VM separate from your workstation, ensure the commit tag used to build the image matches the commit tag where `make deploy` is run.

 ```bash
 make publish-image-aro-multistage
 make publish-image-fluentbit
Binary data: docs/img/AROMonitor.png (binary file not shown)
Before: Width | Height | Size: 51 KiB. After: Width | Height | Size: 104 KiB.
|
@@ -45,7 +45,7 @@ locations.
 PULL_SECRET=...
 ```
 
-1. Install [Go 1.16](https://golang.org/dl) or later, if you haven't already.
+1. Install [Go 1.17](https://golang.org/dl) or later, if you haven't already.
 
 1. Install the [Azure
 CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli), if you
|
@ -3,7 +3,7 @@
|
|||
This document goes through the development dependencies one requires in order to build the RP code.
|
||||
|
||||
## Software Required
|
||||
1. Install [Go 1.16](https://golang.org/dl) or later, if you haven't already.
|
||||
1. Install [Go 1.17](https://golang.org/dl) or later, if you haven't already.
|
||||
|
||||
1. Configure `GOPATH` as an OS environment variable in your shell (a requirement of some dependencies for `make generate`). If you want to keep the default path, you can add something like `GOPATH=$(go env GOPATH)` to your shell's profile/RC file.
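A minimal sketch of that profile/RC snippet (assuming a POSIX shell; `$HOME/go` is the documented Go default when `go env` is unavailable):

```shell
# Resolve GOPATH: keep an existing value, ask `go env`, or fall back to the
# Go default of $HOME/go; then export it for tools invoked by `make generate`.
GOPATH="${GOPATH:-$(go env GOPATH 2>/dev/null || echo "$HOME/go")}"
export GOPATH
echo "GOPATH=$GOPATH"
```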

@@ -22,6 +22,8 @@ This document goes through the development dependencies one requires in order to
 sudo touch /etc/containers/nodocker
 ```
 
 1. Install [golangci-lint](https://golangci-lint.run/) and [yamllint](https://yamllint.readthedocs.io/en/stable/quickstart.html#installing-yamllint) (optional, but your code must pass these linters in CI)
 
 ### Fedora Packages
+1. Install the `gpgme-devel`, `libassuan-devel`, and `openssl` packages.
+   > `sudo dnf install -y gpgme-devel libassuan-devel openssl`
|
@ -10,43 +10,37 @@ The ARO monitor component (the part of the aro binary you activate when you exec
|
|||
|
||||
![Aro Monitor Architecture](img/AROMonitor.png "Aro Monitor Architecture")
|
||||
|
||||
To send data to Geneva the monitor uses an instance of a Geneva MDM container as a proxy of the Geneva API. The MDM container accepts statsd formatted data (the Azure Geneva version of statsd, that is) over a UNIX (Domain) socket. The MDM container then forwards the metric data over a https link to the Geneva API. Please note that using a Unix socket can only be accessed from the same machine.
|
||||
To send data to Geneva the monitor uses an instance of a Geneva MDM container as a proxy of the Geneva API. The MDM container accepts statsd formatted data (the Azure Geneva version of statsd, that is) over a UNIX (Domain) socket. The MDM container then forwards the metric data over a https link to the Geneva API. Please note that a Unix socket can only be accessed from the same machine.
|
||||
|
||||
The monitor picks the required information about which clusters should actually monitor from its corresponding Cosmos DB. If multiple monitor instances run in parallel (i.e. connect to the same database instance) as is the case in production, they negotiate which instance monitors what cluster (see : [monitoring.md](./monitoring.md)).
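For a quick smoke test of that socket interface you can hand-assemble a metric line; the sketch below only prints the payload. The `Metric` and `Account` fields mirror the sample metric script at the end of this document, while the `Namespace` and `Dims` fields, the gauge suffix, and the socket path are illustrative assumptions:

```shell
# Assemble one Azure-statsd-style metric line (all values are placeholders;
# "Metric" and "Account" appear in the sample script later in this document,
# "Namespace", "Dims", and the ":42|g" gauge suffix are assumed here).
PAYLOAD='{"Metric":"exampleMetric","Account":"exampleAccount","Namespace":"exampleNamespace","Dims":{"resourceName":"example-cluster"}}:42|g'
echo "$PAYLOAD"

# With the MDM container listening, the same line can be piped into the socket:
#   echo "$PAYLOAD" | socat UNIX-CONNECT:./cmd/aro/mdm_statsd.socket -
```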

-## Unit Testing Setup
+# Unit Testing Setup
 
 If you work on monitor metrics in local dev mode (RP_MODE=Development) you most likely want to see your data somewhere in Geneva INT (https://jarvis-west-int.cloudapp.net/) before you ship your code.
 
-There are two ways to set to acchieve this:
-- Run the Geneva MDM container locally (won't work on macOS, see Remote Container section below)
+There are two ways to achieve this:
+- Run the Geneva MDM container locally
 - Spawn a VM, start the Geneva container there and connect/tunnel to it.

-### Local Container Setup
+and two protocols to choose from:
+- Unix Domain Sockets, which is the way production is currently (April 2022) run
+- or UDP, which is much easier to use and is the way it will be used on Kubernetes clusters in the future
+
+## Local Container Setup
 
 Before you start, make sure:
 - to run `source ./env`
 - you ran `SECRET_SA_ACCOUNT_NAME=rharosecretsdev make secrets` before
-- know which "account" and "namespace" value you want to use on Geneva INT for your metric data and
-update your env to set the
+- know which "account" and "namespace" value you want to use on Geneva INT for your metric data and update your env to set the following variables before you start the monitor:
   - CLUSTER_MDM_ACCOUNT
   - CLUSTER_MDM_NAMESPACE
-
-variables before you start the monitor.

-The container needs to be provided with the Geneva key and certificate. For the INT instance that is the rp-metrics-int.pem you find in the secrets folder after running the `make secrets` command above.
 
 An example docker command to start the container locally is here (you may need to adapt some parameters):
 [Example](../hack/local-monitor-testing/sample/dockerStartCommand.sh). The script will configure the mdm container to connect to Geneva INT.
 
+Two things to be aware of:
+* The container needs to be provided with the Geneva key and certificate. For the INT instance that is the rp-metrics-int.pem you find in the secrets folder after running `make secrets`. The sample script tries to copy it to /etc/mdm.pem (to mimic production).
+* When you start the monitor locally in local dev mode, it looks for the Unix socket file mdm_statsd.socket in the current directory. Adapt the path in the start command accordingly if it's not the `./cmd/aro` folder.
 
-### Remote Container Setup
+## Remote Container Setup
 If you can't run the container locally (because you run on macOS and your container tooling does not support Unix sockets, which is true for both Docker Desktop and podman), or you simply don't want to, you can bring up the container on a Linux VM and connect via a socat/ssh chain:
 ![alt text](img/SOCATConnection.png "SOCAT chain")

@@ -84,12 +78,13 @@ socat -v UNIX-LISTEN:$SOCKETFILE,fork TCP-CONNECT:127.0.0.1:12345
 
 For debugging it might be useful to run these commands manually in three different terminals to see where the connection breaks down. The docker log should also show whether data is flowing through.
 
-### Stopping the Network script
+#### Stopping the Network script
 
 Stop the script with Ctrl-C. The script will then do its best to stop the ssh and socat processes it spawned.
 
-### Starting the monitor
+## Starting the monitor
 
 When starting the monitor, make sure to have your
@@ -115,23 +110,22 @@ A VS Code launch config that does the same would look like.
 "monitor",
 ],
 "env": {"CLUSTER_MDM_ACCOUNT": "<PUT YOUR ACCOUNT HERE>",
-"CLUSTER_MDM_NAMESPACE":"<PUT YOUR NAMESPACE HERE>" }
+"CLUSTER_MDM_NAMESPACE":"<PUT YOUR NAMESPACE HERE>"
+}
 },
 ````
-### Finding your data
+## Finding your data
 
 If all goes well, you should see your metric data in the Jarvis metrics list (Geneva INT (https://jarvis-west-int.cloudapp.net/) -> Manage -> Metrics) under the account and namespace you specified in CLUSTER_MDM_ACCOUNT and CLUSTER_MDM_NAMESPACE, and it will also be available in the dashboard settings.
 
-### Injecting Test Data into Geneva INT
+## Injecting Test Data into Geneva INT
 
 Once your monitor code is done you will want to create pre-aggregates, dashboards and alerts on the Geneva side and test with a variety of data.
 Your end-to-end testing with real clusters will generate some data and cover many test scenarios, but if that's not feasible or too time-consuming you can inject data directly into the Geneva mdm container via the socat/ssh network chain.
 
-An example metric script is shown below, you can connect it to
+An example metric script is shown below.
 
 ````
 myscript.sh | socat TCP-CONNECT:127.0.0.1:12345 -
@@ -143,8 +137,7 @@ myscript.sh | socat UNIX-CONNECT:$SOCKETFILE -
 (see above for the $SOCKETFILE)
 
-#### Sample metric script
+### Sample metric script
 
 ````
 #!/bin/bash
 
@@ -166,6 +159,7 @@ DIM_RESOURCENAME=$CLUSTER
 data="10 11 12 13 13 13 13 15 16 19 20 21 25"
+SLEEPTIME=60
 for MET in $data ;do
 DATESTRING=$( date -u +'%Y-%m-%dT%H:%M:%S.%3N' )
 OUT=$( cat << EOF
 {"Metric":"$METRIC",
 "Account":"$ACCOUNT",
@@ -5,7 +5,7 @@ upstream OCP.
 
 ## Installer carry patches
 
-See https://github.com/openshift/installer/compare/release-4.9...jewzaam:release-4.9-azure.
+See https://github.com/openshift/installer/compare/release-4.10...jewzaam:release-4.10-azure.
 
 ## Installation differences
 
@@ -127,3 +127,5 @@ Once installer fork is ready:
 1. After this point, you should be able to create a dev cluster using the RP and it should use the new release.
+1. `make discoverycache`.
+   * This command requires a running cluster with the new version.
 1. The list of the hard-coded namespaces in `pkg/util/namespace/namespace.go` needs to be updated regularly as every
 minor version of upstream OCP introduces a new namespace or two.
go.mod (491 changes)

@@ -1,132 +1,356 @@
module github.com/Azure/ARO-RP

go 1.16
go 1.17

require (
cloud.google.com/go/compute v1.1.0 // indirect
github.com/AlecAivazis/survey/v2 v2.3.2 // indirect
github.com/AlekSi/gocov-xml v0.0.0-20190121064608-3a14fb1c4737
github.com/Azure/azure-sdk-for-go v61.3.0+incompatible
github.com/Azure/go-autorest/autorest v0.11.24
github.com/Azure/azure-sdk-for-go v63.1.0+incompatible
github.com/Azure/go-autorest/autorest v0.11.25
github.com/Azure/go-autorest/autorest/adal v0.9.18
github.com/Azure/go-autorest/autorest/azure/auth v0.5.11
github.com/Azure/go-autorest/autorest/date v0.3.0
github.com/Azure/go-autorest/autorest/to v0.4.0
github.com/Azure/go-autorest/autorest/validation v0.3.1
github.com/Azure/go-autorest/tracing v0.6.0
github.com/IBM-Cloud/bluemix-go v0.0.0-20220119131246-2af2dee48688 // indirect
github.com/IBM/go-sdk-core/v5 v5.9.1 // indirect
github.com/IBM/networking-go-sdk v0.24.0 // indirect
github.com/IBM/platform-services-go-sdk v0.22.7 // indirect
github.com/alvaroloes/enumer v1.1.2
github.com/apparentlymart/go-cidr v1.1.0
github.com/aws/aws-sdk-go v1.42.40 // indirect
github.com/axw/gocov v1.0.0
github.com/clarketm/json v1.17.1 // indirect
github.com/codahale/etm v0.0.0-20141003032925-c00c9e6fb4c9
github.com/containers/image/v5 v5.18.0
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a // indirect
github.com/containers/storage v1.38.1 // indirect
github.com/containers/image/v5 v5.21.0
github.com/coreos/go-oidc v2.2.1+incompatible
github.com/coreos/go-systemd/v22 v22.3.2
github.com/coreos/ignition/v2 v2.13.0
github.com/coreos/stream-metadata-go v0.1.6
github.com/coreos/stream-metadata-go v0.2.0
github.com/davecgh/go-spew v1.1.1
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/form3tech-oss/jwt-go v3.2.5+incompatible
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32
github.com/go-bindata/go-bindata v3.1.2+incompatible
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-logr/logr v1.2.2
github.com/go-openapi/errors v0.20.2 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-playground/validator/v10 v10.10.0 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-logr/logr v1.2.3
github.com/go-test/deep v1.0.8
github.com/gofrs/uuid v4.2.0+incompatible
github.com/golang/mock v1.6.0
github.com/golangci/golangci-lint v1.32.2
github.com/golangci/golangci-lint v1.42.1
github.com/google/go-cmp v0.5.7
github.com/googleapis/gnostic v0.6.6
github.com/gophercloud/gophercloud v0.24.0 // indirect
github.com/gophercloud/utils v0.0.0-20210909165623-d7085207ff6d // indirect
github.com/googleapis/gnostic v0.6.8
github.com/gorilla/csrf v1.7.1
github.com/gorilla/mux v1.8.0
github.com/gorilla/securecookie v1.1.1
github.com/gorilla/sessions v1.2.1
github.com/h2non/filetype v1.1.3 // indirect
github.com/jewzaam/go-cosmosdb v0.0.0-20220315232836-282b67c5b234
github.com/jstemmer/go-junit-report v0.9.1
github.com/klauspost/compress v1.14.2 // indirect
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.19.0
github.com/openshift/api v0.0.0-20220124143425-d74727069f6f
github.com/openshift/client-go v0.0.0-20211209144617-7385dd6338e3
github.com/openshift/console-operator v0.0.0-20220407014945-45d37e70e0c2
github.com/openshift/installer v0.16.1
github.com/openshift/library-go v0.0.0-20220405134141-226b07263a02
github.com/openshift/machine-config-operator v3.11.0+incompatible
github.com/pires/go-proxyproto v0.6.2
github.com/pkg/errors v0.9.1
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.48.1
github.com/prometheus/client_golang v1.12.1
github.com/prometheus/common v0.33.0
github.com/sirupsen/logrus v1.8.1
github.com/stretchr/testify v1.7.1
github.com/ugorji/go/codec v1.2.7
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b
golang.org/x/oauth2 v0.0.0-20220309155454-6242fa91716a
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/tools v0.1.10
gotest.tools/gotestsum v1.6.4
k8s.io/api v0.23.5
k8s.io/apiextensions-apiserver v0.23.5
k8s.io/apimachinery v0.23.5
k8s.io/client-go v12.0.0+incompatible
k8s.io/code-generator v0.23.2
k8s.io/kubectl v0.23.5
k8s.io/kubernetes v1.23.5
sigs.k8s.io/cluster-api-provider-azure v1.2.1
sigs.k8s.io/controller-runtime v0.11.2
sigs.k8s.io/controller-tools v0.7.0
)

require (
4d63.com/gochecknoglobals v0.0.0-20201008074935-acfc0b28355a // indirect
cloud.google.com/go/compute v1.5.0 // indirect
github.com/AlecAivazis/survey/v2 v2.3.4 // indirect
github.com/Antonboom/errname v0.1.4 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest/azure/cli v0.4.5 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/BurntSushi/toml v1.1.0 // indirect
github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect
github.com/IBM-Cloud/bluemix-go v0.0.0-20220407050707-b4cd0d4da813 // indirect
github.com/IBM/go-sdk-core/v5 v5.9.5 // indirect
github.com/IBM/networking-go-sdk v0.28.0 // indirect
github.com/IBM/platform-services-go-sdk v0.24.0 // indirect
github.com/IBM/vpc-go-sdk v1.0.1 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/OpenPeeDeeP/depguard v1.0.1 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/VividCortex/ewma v1.2.0 // indirect
github.com/acarl005/stripansi v0.0.0-20180116102854-5a71ef0e047d // indirect
github.com/alexkohler/prealloc v1.0.0 // indirect
github.com/aliyun/alibaba-cloud-sdk-go v1.61.1550 // indirect
github.com/aliyun/aliyun-oss-go-sdk v2.2.2+incompatible // indirect
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d // indirect
github.com/ashanbrown/forbidigo v1.2.0 // indirect
github.com/ashanbrown/makezero v0.0.0-20210520155254-b6261585ddde // indirect
github.com/aws/aws-sdk-go v1.43.34 // indirect
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bkielbasa/cyclop v1.2.0 // indirect
github.com/bombsimon/wsl/v3 v3.3.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 // indirect
github.com/charithe/durationcheck v0.0.8 // indirect
github.com/chavacava/garif v0.0.0-20210405164556-e8a0a408d6af // indirect
github.com/clarketm/json v1.17.1 // indirect
github.com/containers/image v3.0.2+incompatible // indirect
github.com/containers/libtrust v0.0.0-20200511145503-9c3a6c22cd9a // indirect
github.com/containers/ocicrypt v1.1.3 // indirect
github.com/containers/storage v1.39.0 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect
github.com/coreos/ignition v0.35.0 // indirect
github.com/coreos/vcontext v0.0.0-20220326205524-7fcaf69e7050 // indirect
github.com/daixiang0/gci v0.2.9 // indirect
github.com/denis-tingajkin/go-header v0.4.2 // indirect
github.com/dimchansky/utfbom v1.1.1 // indirect
github.com/dnephin/pflag v1.0.7 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker v20.10.14+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/esimonov/ifshort v1.0.2 // indirect
github.com/ettle/strcase v0.1.1 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.12.0 // indirect
github.com/fatih/structtag v1.2.0 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/fzipp/gocyclo v0.3.1 // indirect
github.com/go-critic/go-critic v0.5.6 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-openapi/errors v0.20.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/strfmt v0.21.2 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/go-playground/locales v0.14.0 // indirect
github.com/go-playground/universal-translator v0.18.0 // indirect
github.com/go-playground/validator/v10 v10.10.1 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/go-toolsmith/astcast v1.0.0 // indirect
github.com/go-toolsmith/astcopy v1.0.0 // indirect
github.com/go-toolsmith/astequal v1.0.0 // indirect
github.com/go-toolsmith/astfmt v1.0.0 // indirect
github.com/go-toolsmith/astp v1.0.0 // indirect
github.com/go-toolsmith/strparse v1.0.0 // indirect
github.com/go-toolsmith/typep v1.0.2 // indirect
github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b // indirect
github.com/gobuffalo/flect v0.2.3 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.4.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 // indirect
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a // indirect
github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613 // indirect
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a // indirect
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 // indirect
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca // indirect
github.com/golangci/misspell v0.3.5 // indirect
github.com/golangci/revgrep v0.0.0-20210208091834-cd28932614b5 // indirect
github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/renameio v1.0.1 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/googleapis/gax-go/v2 v2.2.0 // indirect
github.com/gophercloud/gophercloud v0.24.0 // indirect
github.com/gophercloud/utils v0.0.0-20220307143606-8e7800759d16 // indirect
github.com/gordonklaus/ineffassign v0.0.0-20210225214923-2e10b2664254 // indirect
github.com/gostaticanalysis/analysisutil v0.4.1 // indirect
github.com/gostaticanalysis/comment v1.4.1 // indirect
github.com/gostaticanalysis/forcetypeassert v0.0.0-20200621232751-01d4955beaa5 // indirect
github.com/gostaticanalysis/nilerr v0.1.1 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/h2non/filetype v1.1.3 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v0.16.1 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jgautheron/goconst v1.5.1 // indirect
github.com/jingyugao/rowserrcheck v1.1.0 // indirect
github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/jonboulle/clockwork v0.2.2 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/julz/importas v0.0.0-20210419104244-841f0c0fe66d // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/kisielk/errcheck v1.6.0 // indirect
github.com/kisielk/gotool v1.0.0 // indirect
github.com/klauspost/compress v1.15.1 // indirect
github.com/klauspost/pgzip v1.2.5 // indirect
github.com/kulti/thelper v0.4.0 // indirect
github.com/kunwardeep/paralleltest v1.0.2 // indirect
github.com/kyoh86/exportloopref v0.1.8 // indirect
github.com/ldez/gomoddirectives v0.2.2 // indirect
github.com/ldez/tagliatelle v0.2.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/libvirt/libvirt-go v7.4.0+incompatible // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.5 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/maratori/testpackage v1.0.1 // indirect
github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/metal3-io/baremetal-operator v0.0.0-20220125095243-13add0bfb3be // indirect
github.com/metal3-io/cluster-api-provider-baremetal v0.2.2 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mbilski/exhaustivestruct v1.2.0 // indirect
github.com/metal3-io/baremetal-operator v0.0.0-20220405082045-575f5c90718a // indirect
github.com/metal3-io/baremetal-operator/apis v0.0.0 // indirect
github.com/metal3-io/baremetal-operator/pkg/hardwareutils v0.0.0 // indirect
github.com/mgechev/dots v0.0.0-20190921121421-c36f7dcfbb81 // indirect
github.com/mgechev/revive v1.1.1 // indirect
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/sys/mountinfo v0.6.0 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.18.0
github.com/openshift/api v0.0.0-20210831091943-07e756545ac1
github.com/openshift/client-go v0.0.0-20210831095141-e19a065e79f7
github.com/openshift/cloud-credential-operator v0.0.0-20220121204927-85a406b6d4b1 // indirect
github.com/openshift/console-operator v0.0.0-20220120123728-4789dbf7c1d3
github.com/openshift/installer v0.16.1
github.com/openshift/library-go v0.0.0-20220125143545-df4228ff1215
github.com/openshift/machine-api-operator v0.2.1-0.20210820103535-d50698c302f5
github.com/openshift/machine-config-operator v0.0.1-0.20201009041932-4fe8559913b8
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/moricho/tparallel v0.2.1 // indirect
github.com/nakabonne/nestif v0.3.0 // indirect
github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 // indirect
github.com/nishanths/exhaustive v0.2.3 // indirect
github.com/nishanths/predeclared v0.2.1 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/oklog/ulid v1.3.1 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202193544-a5463b7f9c84 // indirect
github.com/opencontainers/runc v1.1.1 // indirect
github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417 // indirect
github.com/openshift/cloud-credential-operator v0.0.0-20220316185125-ed0612946f4b // indirect
github.com/openshift/cluster-api v0.0.0-20190805113604-f8de78af80fc // indirect
github.com/openshift/cluster-api-provider-baremetal v0.0.0-20220218121658-fc0acaaec338 // indirect
github.com/openshift/cluster-api-provider-ibmcloud v0.0.0-20211008100740-4d7907adbd6b // indirect
github.com/openshift/cluster-api-provider-libvirt v0.2.1-0.20191219173431-2336783d4603 // indirect
github.com/openshift/cluster-api-provider-ovirt v0.1.1-0.20211111151530-06177b773958 // indirect
github.com/ovirt/go-ovirt v0.0.0-20210308100159-ac0bcbc88d7c // indirect
github.com/pascaldekloe/name v0.0.0-20180628100202-0fd16699aae1 // indirect
github.com/pborman/uuid v1.2.1 // indirect
github.com/pires/go-proxyproto v0.6.1
github.com/pkg/errors v0.9.1
github.com/pelletier/go-toml v1.9.3 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/polyfloyd/go-errorlint v0.0.0-20210722154253-910bb7978349 // indirect
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/prometheus/client_golang v1.12.0
github.com/prometheus/common v0.32.1
github.com/serge1peshcoff/selenium-go-conditions v0.0.0-20170824121757-5afbdb74596b
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cobra v1.3.0 // indirect
github.com/stretchr/testify v1.7.0
github.com/tebeka/selenium v0.9.9
github.com/ugorji/go/codec v1.2.6
github.com/vbauerster/mpb/v7 v7.3.2 // indirect
github.com/vmware/govmomi v0.27.2 // indirect
go.mongodb.org/mongo-driver v1.8.2 // indirect
github.com/proglottis/gpgme v0.1.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/quasilyte/go-ruleguard v0.3.4 // indirect
github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 // indirect
github.com/rivo/uniseg v0.2.0 // indirect
github.com/russross/blackfriday v1.6.0 // indirect
github.com/ryancurrah/gomodguard v1.2.3 // indirect
github.com/ryanrolds/sqlclosecheck v0.3.0 // indirect
github.com/sanposhiho/wastedassign/v2 v2.0.6 // indirect
github.com/securego/gosec/v2 v2.8.1 // indirect
github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c // indirect
github.com/sonatard/noctx v0.0.1 // indirect
github.com/sourcegraph/go-diff v0.6.1 // indirect
github.com/spf13/afero v1.6.0 // indirect
github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/cobra v1.4.0 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/pflag v1.0.6-0.20210604193023-d5e0c0615ace // indirect
github.com/spf13/viper v1.10.0 // indirect
github.com/ssgreg/nlreturn/v2 v2.1.0 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/stretchr/objx v0.2.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 // indirect
github.com/tdakkota/asciicheck v0.0.0-20200416200610-e657995f937b // indirect
github.com/tetafro/godot v1.4.9 // indirect
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94 // indirect
github.com/tomarrell/wrapcheck/v2 v2.3.0 // indirect
github.com/tommy-muehle/go-mnd/v2 v2.4.0 // indirect
github.com/ulikunitz/xz v0.5.10 // indirect
github.com/ultraware/funlen v0.0.3 // indirect
github.com/ultraware/whitespace v0.0.4 // indirect
github.com/uudashr/gocognit v1.0.5 // indirect
github.com/vbatts/tar-split v0.11.2 // indirect
github.com/vbauerster/mpb/v7 v7.4.1 // indirect
github.com/vincent-petithory/dataurl v1.0.0 // indirect
github.com/vmware/govmomi v0.27.4 // indirect
github.com/xlab/treeprint v1.1.0 // indirect
github.com/yeya24/promlinter v0.1.0 // indirect
go.etcd.io/bbolt v1.3.6 // indirect
go.mongodb.org/mongo-driver v1.9.0 // indirect
go.mozilla.org/pkcs7 v0.0.0-20210826202110-33d05740a352 // indirect
go.starlark.net v0.0.0-20211203141949-70c0e40ae128 // indirect
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce
golang.org/x/net v0.0.0-20220121210141-e204ce36a2ba
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 // indirect
golang.org/x/tools v0.1.8
google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5 // indirect
google.golang.org/grpc v1.43.0 // indirect
gopkg.in/ini.v1 v1.66.3 // indirect
go.opencensus.io v0.23.0 // indirect
go.starlark.net v0.0.0-20220328144851-d1966c6b9fcd // indirect
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 // indirect
golang.org/x/sys v0.0.0-20220406163625-3f8b81556e12 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220224211638-0e9765cccd65 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/api v0.74.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220405205423-9d709892a2bf // indirect
google.golang.org/grpc v1.45.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/go-playground/validator.v9 v9.31.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.66.4 // indirect
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gotest.tools/gotestsum v1.6.4
k8s.io/api v0.23.2
k8s.io/apiextensions-apiserver v0.23.2
k8s.io/apimachinery v0.23.2
k8s.io/apiserver v0.23.2 // indirect
k8s.io/cli-runtime v0.23.2 // indirect
k8s.io/client-go v12.0.0+incompatible
k8s.io/code-generator v0.22.1
k8s.io/component-base v0.23.2 // indirect
k8s.io/klog/v2 v2.40.1 // indirect
k8s.io/kube-openapi v0.0.0-20220124234850-424119656bbf // indirect
k8s.io/kubectl v0.23.2
k8s.io/kubernetes v1.23.2
k8s.io/utils v0.0.0-20211208161948-7d6a63dca704 // indirect
sigs.k8s.io/cluster-api-provider-aws v1.2.0 // indirect
|
||||
sigs.k8s.io/cluster-api-provider-azure v1.1.0
|
||||
sigs.k8s.io/cluster-api-provider-openstack v0.5.0 // indirect
|
||||
sigs.k8s.io/controller-runtime v0.11.0
|
||||
sigs.k8s.io/controller-tools v0.6.3-0.20210916130746-94401651a6c3
|
||||
sigs.k8s.io/kustomize/api v0.10.1 // indirect
|
||||
sigs.k8s.io/kustomize/kyaml v0.13.1 // indirect
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
|
||||
gopkg.in/yaml.v2 v2.4.0 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
|
||||
honnef.co/go/tools v0.2.1 // indirect
|
||||
k8s.io/apiserver v0.23.5 // indirect
|
||||
k8s.io/cli-runtime v0.23.5 // indirect
|
||||
k8s.io/component-base v0.23.5 // indirect
|
||||
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c // indirect
|
||||
k8s.io/klog v1.0.0 // indirect
|
||||
k8s.io/klog/v2 v2.60.1 // indirect
|
||||
k8s.io/kube-openapi v0.0.0-20220401212409-b28bf2818661 // indirect
|
||||
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
|
||||
mvdan.cc/gofumpt v0.1.1 // indirect
|
||||
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed // indirect
|
||||
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b // indirect
|
||||
mvdan.cc/unparam v0.0.0-20210104141923-aac4ce9116a7 // indirect
|
||||
sigs.k8s.io/cluster-api-provider-aws v1.4.0 // indirect
|
||||
sigs.k8s.io/cluster-api-provider-openstack v0.5.3 // indirect
|
||||
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
|
||||
sigs.k8s.io/kustomize/api v0.11.4 // indirect
|
||||
sigs.k8s.io/kustomize/kyaml v0.13.6 // indirect
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
|
||||
sigs.k8s.io/yaml v1.3.0 // indirect
|
||||
)
|
||||
|
@ -240,6 +464,7 @@ replace (
	github.com/Unknwon/com => github.com/unknwon/com v1.0.1
	github.com/clarketm/json => github.com/clarketm/json v1.15.7 // Later versions not compatible with Go 1.16
	github.com/cockroachdb/sentry-go => github.com/getsentry/sentry-go v0.11.0
	github.com/docker/spdystream => github.com/docker/spdystream v0.1.0
	github.com/go-openapi/spec => github.com/go-openapi/spec v0.19.8
	// Replace old GoGo Protobuf versions https://nvd.nist.gov/vuln/detail/CVE-2021-3121
	github.com/gogo/protobuf => github.com/gogo/protobuf v1.3.2
@ -248,36 +473,37 @@ replace (
	// https://www.whitesourcesoftware.com/vulnerability-database/WS-2018-0594
	github.com/satori/go.uuid => github.com/satori/go.uuid v1.2.1-0.20181028125025-b2ce2384e17b
	github.com/satori/uuid => github.com/satori/uuid v1.2.1-0.20181028125025-b2ce2384e17b
	github.com/spf13/pflag => github.com/spf13/pflag v1.0.6-0.20210604193023-d5e0c0615ace
	github.com/spf13/viper => github.com/spf13/viper v1.7.1
	github.com/terraform-providers/terraform-provider-aws => github.com/openshift/terraform-provider-aws v1.60.1-0.20200630224953-76d1fb4e5699
	github.com/terraform-providers/terraform-provider-azurerm => github.com/openshift/terraform-provider-azurerm v1.40.1-0.20200707062554-97ea089cc12a
	github.com/terraform-providers/terraform-provider-ignition/v2 => github.com/community-terraform-providers/terraform-provider-ignition/v2 v2.1.0
	k8s.io/api => k8s.io/api v0.22.0
	k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.22.0
	k8s.io/apimachinery => k8s.io/apimachinery v0.22.0
	k8s.io/apiserver => k8s.io/apiserver v0.22.0
	k8s.io/cli-runtime => k8s.io/cli-runtime v0.22.0
	k8s.io/client-go => k8s.io/client-go v0.22.0
	k8s.io/cloud-provider => k8s.io/cloud-provider v0.22.0
	k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.22.0
	k8s.io/code-generator => k8s.io/code-generator v0.22.0
	k8s.io/component-base => k8s.io/component-base v0.22.0
	k8s.io/component-helpers => k8s.io/component-helpers v0.22.0
	k8s.io/controller-manager => k8s.io/controller-manager v0.22.0
	k8s.io/cri-api => k8s.io/cri-api v0.22.0
	k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.22.0
	k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.22.0
	k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.22.0
	k8s.io/kube-proxy => k8s.io/kube-proxy v0.22.0
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.22.0
	k8s.io/kubectl => k8s.io/kubectl v0.22.0
	k8s.io/kubelet => k8s.io/kubelet v0.22.0
	k8s.io/kubernetes => k8s.io/kubernetes v1.22.0
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.22.0
	k8s.io/metrics => k8s.io/metrics v0.22.0
	k8s.io/mount-utils => k8s.io/mount-utils v0.22.0
	k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.22.0
	k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.22.0
	k8s.io/api => k8s.io/api v0.23.0
	k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.23.0
	k8s.io/apimachinery => k8s.io/apimachinery v0.23.0
	k8s.io/apiserver => k8s.io/apiserver v0.23.0
	k8s.io/cli-runtime => k8s.io/cli-runtime v0.23.0
	k8s.io/client-go => k8s.io/client-go v0.23.0
	k8s.io/cloud-provider => k8s.io/cloud-provider v0.23.0
	k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.23.0
	k8s.io/code-generator => k8s.io/code-generator v0.23.0
	k8s.io/component-base => k8s.io/component-base v0.23.0
	k8s.io/component-helpers => k8s.io/component-helpers v0.23.0
	k8s.io/controller-manager => k8s.io/controller-manager v0.23.0
	k8s.io/cri-api => k8s.io/cri-api v0.23.0
	k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.23.0
	k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.23.0
	k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.23.0
	k8s.io/kube-proxy => k8s.io/kube-proxy v0.23.0
	k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.23.0
	k8s.io/kubectl => k8s.io/kubectl v0.23.0
	k8s.io/kubelet => k8s.io/kubelet v0.23.0
	k8s.io/kubernetes => k8s.io/kubernetes v1.23.0
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.23.0
	k8s.io/metrics => k8s.io/metrics v0.23.0
	k8s.io/mount-utils => k8s.io/mount-utils v0.23.0
	k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.23.0
	k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.23.0
	sigs.k8s.io/controller-runtime => sigs.k8s.io/controller-runtime v0.9.1
	sigs.k8s.io/controller-tools => sigs.k8s.io/controller-tools v0.5.0
)
@ -296,7 +522,7 @@ replace (
	github.com/coreos/bbolt => go.etcd.io/bbolt v1.3.6
	github.com/coreos/fcct => github.com/coreos/butane v0.13.1
	github.com/coreos/prometheus-operator => github.com/prometheus-operator/prometheus-operator v0.48.1
	github.com/coreos/stream-metadata-go => github.com/coreos/stream-metadata-go v0.0.0-20210225230131-70edb9eb47b3
	github.com/coreos/stream-metadata-go => github.com/coreos/stream-metadata-go v0.1.3
	github.com/cortexproject/cortex => github.com/cortexproject/cortex v1.10.0
	github.com/deislabs/oras => github.com/oras-project/oras v0.12.0
	github.com/etcd-io/bbolt => go.etcd.io/bbolt v1.3.6
@ -310,23 +536,24 @@ replace (
	github.com/influxdata/flux => github.com/influxdata/flux v0.132.0
	github.com/knq/sysutil => github.com/chromedp/sysutil v1.0.0
	github.com/kshvakov/clickhouse => github.com/ClickHouse/clickhouse-go v1.4.9
	github.com/metal3-io/baremetal-operator => github.com/openshift/baremetal-operator v0.0.0-20210706141527-5240e42f012a // Use OpenShift fork
	github.com/metal3-io/baremetal-operator/apis => github.com/openshift/baremetal-operator/apis v0.0.0-20210706141527-5240e42f012a // Use OpenShift fork
	github.com/metal3-io/baremetal-operator => github.com/openshift/baremetal-operator v0.0.0-20211201170610-92ffa60c683d // Use OpenShift fork
	github.com/metal3-io/baremetal-operator/apis => github.com/openshift/baremetal-operator/apis v0.0.0-20211201170610-92ffa60c683d // Use OpenShift fork
	github.com/metal3-io/baremetal-operator/pkg/hardwareutils => github.com/openshift/baremetal-operator/pkg/hardwareutils v0.0.0-20211201170610-92ffa60c683d // Use OpenShift fork
	github.com/metal3-io/cluster-api-provider-baremetal => github.com/openshift/cluster-api-provider-baremetal v0.0.0-20190821174549-a2a477909c1d // Pin OpenShift fork
	github.com/mholt/certmagic => github.com/caddyserver/certmagic v0.15.0
	github.com/openshift/api => github.com/openshift/api v0.0.0-20211028023115-7224b732cc14
	github.com/openshift/client-go => github.com/openshift/client-go v0.0.0-20210831095141-e19a065e79f7
	github.com/openshift/api => github.com/openshift/api v0.0.0-20220124143425-d74727069f6f
	github.com/openshift/client-go => github.com/openshift/client-go v0.0.0-20211209144617-7385dd6338e3
	github.com/openshift/cloud-credential-operator => github.com/openshift/cloud-credential-operator v0.0.0-20200316201045-d10080b52c9e
	github.com/openshift/cluster-api-provider-gcp => github.com/openshift/cluster-api-provider-gcp v0.0.1-0.20211001174514-d92b08844a2b
	github.com/openshift/cluster-api-provider-ibmcloud => github.com/openshift/cluster-api-provider-ibmcloud v0.0.1-0.20210806145144-04491027caa8
	github.com/openshift/cluster-api-provider-gcp => github.com/openshift/cluster-api-provider-gcp v0.0.1-0.20211123160814-0d569513f9fa
	github.com/openshift/cluster-api-provider-ibmcloud => github.com/openshift/cluster-api-provider-ibmcloud v0.0.0-20211008100740-4d7907adbd6b
	github.com/openshift/cluster-api-provider-kubevirt => github.com/openshift/cluster-api-provider-kubevirt v0.0.0-20210719100556-9b8bc3666720
	github.com/openshift/cluster-api-provider-libvirt => github.com/openshift/cluster-api-provider-libvirt v0.2.1-0.20210623230745-59ae2edf8875
	github.com/openshift/cluster-api-provider-ovirt => github.com/openshift/cluster-api-provider-ovirt v0.1.1-0.20220120123528-15a6add2ff5b
	github.com/openshift/console-operator => github.com/openshift/console-operator v0.0.0-20220124105820-fdcb82f487fb
	github.com/openshift/installer => github.com/jewzaam/installer-aro v0.9.0-master.0.20220208140934-766bcf74e25c
	github.com/openshift/library-go => github.com/openshift/library-go v0.0.0-20220125122342-ff51c8a74c7b
	github.com/openshift/machine-api-operator => github.com/openshift/machine-api-operator v0.2.1-0.20211203013047-383c9b959b69
	github.com/openshift/machine-config-operator => github.com/openshift/machine-config-operator v0.0.1-0.20211215135312-23d93af42378
	github.com/openshift/cluster-api-provider-libvirt => github.com/openshift/cluster-api-provider-libvirt v0.2.1-0.20191219173431-2336783d4603
	github.com/openshift/cluster-api-provider-ovirt => github.com/openshift/cluster-api-provider-ovirt v0.1.1-0.20211215231458-35ce9aafee1f
	github.com/openshift/console-operator => github.com/openshift/console-operator v0.0.0-20220318130441-e44516b9c315
	github.com/openshift/installer => github.com/jewzaam/installer-aro v0.9.0-master.0.20220524230743-7e2aa7a0cc1a
	github.com/openshift/library-go => github.com/openshift/library-go v0.0.0-20220303081124-fb4e7a2872f0
	github.com/openshift/machine-api-operator => github.com/openshift/machine-api-operator v0.2.1-0.20220124104622-668c5b52b104
	github.com/openshift/machine-config-operator => github.com/openshift/machine-config-operator v0.0.1-0.20220319215057-e6ba00b88555
	github.com/oras-project/oras-go => oras.land/oras-go v0.4.0
	github.com/ovirt/go-ovirt => github.com/ovirt/go-ovirt v0.0.0-20210112072624-e4d3b104de71
	github.com/prometheus/prometheus => github.com/prometheus/prometheus v1.8.2-0.20210421143221-52df5ef7a3be
@ -338,12 +565,16 @@ replace (
	google.golang.org/cloud => cloud.google.com/go v0.97.0
	google.golang.org/grpc => google.golang.org/grpc v1.40.0
	k8s.io/klog/v2 => k8s.io/klog/v2 v2.8.0
	k8s.io/kube-openapi => k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65
	k8s.io/kube-state-metrics => k8s.io/kube-state-metrics v1.9.7
	mvdan.cc/unparam => mvdan.cc/unparam v0.0.0-20211002133954-f839ab2b2b11
	sigs.k8s.io/cluster-api-provider-aws => github.com/openshift/cluster-api-provider-aws v0.2.1-0.20211213011328-8226e86fa06e
	sigs.k8s.io/cluster-api-provider-azure => github.com/openshift/cluster-api-provider-azure v0.1.0-alpha.3.0.20211202014309-184ccedc799e
	sigs.k8s.io/cluster-api-provider-openstack => github.com/openshift/cluster-api-provider-openstack v0.0.0-20210820223719-a7442bb18bce
	sigs.k8s.io/kustomize/kyaml => sigs.k8s.io/kustomize/kyaml v0.13.0
	sigs.k8s.io/cluster-api-provider-aws => github.com/openshift/cluster-api-provider-aws v0.2.1-0.20210121023454-5ffc5f422a80
	sigs.k8s.io/cluster-api-provider-azure => github.com/openshift/cluster-api-provider-azure v0.1.0-alpha.3.0.20210626224711-5d94c794092f
	sigs.k8s.io/cluster-api-provider-openstack => github.com/openshift/cluster-api-provider-openstack v0.0.0-20211111204942-611d320170af
	//sigs.k8s.io/controller-tools => sigs.k8s.io/controller-tools v0.3.1-0.20200617211605-651903477185
	sigs.k8s.io/kustomize/api => sigs.k8s.io/kustomize/api v0.11.2
	sigs.k8s.io/kustomize/kyaml => sigs.k8s.io/kustomize/kyaml v0.13.3
	sigs.k8s.io/structured-merge-diff => sigs.k8s.io/structured-merge-diff v1.0.1-0.20191108220359-b1b620dd3f06
	sourcegraph.com/sourcegraph/go-diff => github.com/sourcegraph/go-diff v0.5.1
	vbom.ml/util => github.com/fvbommel/util v0.0.3
)
go.sum: 1,132 changed lines (diff not shown because of its size)
@ -118,18 +118,28 @@ kill_vpn() {

# if LOCAL_E2E is set, set the value with the local test names
# If it is not set, it defaults to the build ID
if [ -z "${LOCAL_E2E}" ] ; then
export CLUSTER="v4-e2e-V$BUILD_BUILDID-$LOCATION"
# TODO: Remove this hack after AvailabilitySet name too long bug is fixed.
NONZONAL_REGIONS="australiacentral australiacentral2 australiasoutheast brazilsoutheast canadaeast japanwest northcentralus norwaywest southindia switzerlandwest uaenorth ukwest westcentralus westus"

if echo $NONZONAL_REGIONS | grep -wq $LOCATION
then
export CLUSTER=$(head -c 19 <<< "v4-e2e-V$BUILD_BUILDID-$LOCATION")
else
export CLUSTER="v4-e2e-V$BUILD_BUILDID-$LOCATION"
fi
# TODO: uncomment after above hack is removed.
# export CLUSTER="v4-e2e-V$BUILD_BUILDID-$LOCATION"
export DATABASE_NAME="v4-e2e-V$BUILD_BUILDID-$LOCATION"
fi

if [ -z "${CLUSTER}" ] ; then
echo "CLUSTER is not set, aborting"
exit 1
fi

if [ -z "${DATABASE_NAME}" ] ; then
echo "DATABASE_NAME is not set, aborting"
exit 1
fi
@ -67,7 +67,7 @@ docker run \
-CertFile /etc/mdm.pem \
-FrontEndUrl $MDMFRONTENDURL \
-Logger Console \
-LogLevel Warning \
-LogLevel Debug \
-PrivateKeyFile /etc/mdm.pem \
-SourceEnvironment $MDMSOURCEENVIRONMENT \
-SourceRole $MDMSOURCEROLE \
@ -86,7 +86,6 @@ ssh $CLOUDUSER@$PUBLICIP "sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/g'
ssh $CLOUDUSER@$PUBLICIP "sudo firewall-cmd --zone=public --add-port=12345/tcp --permanent"
ssh $CLOUDUSER@$PUBLICIP "sudo firewall-cmd --reload"

scp $BASE/dockerStartCommand.sh $CLOUDUSER@$PUBLICIP:
ssh $CLOUDUSER@$PUBLICIP "chmod +x dockerStartCommand.sh"
ssh $CLOUDUSER@$PUBLICIP "sudo ./dockerStartCommand.sh &"
@ -1,52 +0,0 @@

BASE=$( git rev-parse --show-toplevel)

SOCKETPATH="$BASE/cmd/aro"

HOSTNAME=$( hostname )
NAME="mdm"
MDMIMAGE=linuxgeneva-microsoft.azurecr.io/genevamdm:master_20211120.1
MDMFRONTENDURL=https://int2.int.microsoftmetrics.com/
MDMSOURCEENVIRONMENT=$LOCATION
MDMSOURCEROLE=rp
MDMSOURCEROLEINSTANCE=$HOSTNAME

echo "Using:"

echo "Resourcegroup = $RESOURCEGROUP"
echo "User = $USER"
echo "HOSTNAME = $HOSTNAME"
echo "Containername = $NAME"
echo "Location = $LOCATION"
echo "MDM image = $MDMIMAGE"
echo " (version hardcoded. Check against pkg/util/version/const.go if things don't work)"
echo "Geneva API URL= $MDMFRONTENDURL"
echo "MDMSOURCEENV = $MDMSOURCEENVIRONMENT"
echo "MDMSOURCEROLE = $MDMSOURCEROLE"
echo "MDMSOURCEROLEINSTANCE = $MDMSOURCEROLEINSTANCE"

cp $BASE/secrets/rp-metrics-int.pem /etc/mdm.pem

podman run \
--entrypoint /usr/sbin/MetricsExtension \
--hostname $HOSTNAME \
--name $NAME \
-d \
--restart=always \
-m 2g \
-v /etc/mdm.pem:/etc/mdm.pem \
-v $SOCKETPATH:/var/etw:z \
$MDMIMAGE \
-CertFile /etc/mdm.pem \
-FrontEndUrl $MDMFRONTENDURL \
-Logger Console \
-LogLevel Debug \
-PrivateKeyFile /etc/mdm.pem \
-SourceEnvironment $MDMSOURCEENVIRONMENT \
-SourceRole $MDMSOURCEROLE \
-SourceRoleInstance $MDMSOURCEROLEINSTANCE
@ -80,7 +80,7 @@ func run(ctx context.Context, log *logrus.Entry) error {
	}

	// encoded
	fmt.Printf("session=%s", encoded)
	fmt.Printf("%s", encoded)

	return nil
}
@ -55,16 +55,16 @@ for x in vendor/github.com/openshift/*; do
		;;

	*)
		go mod edit -replace ${x##vendor/}=$(go list -mod=mod -m ${x##vendor/}@release-4.9 | sed -e 's/ /@/')
		go mod edit -replace ${x##vendor/}=$(go list -mod=mod -m ${x##vendor/}@release-4.10 | sed -e 's/ /@/')
		;;
	esac
done

for x in aws azure openstack; do
	go mod edit -replace sigs.k8s.io/cluster-api-provider-$x=$(go list -mod=mod -m github.com/openshift/cluster-api-provider-$x@release-4.9 | sed -e 's/ /@/')
	go mod edit -replace sigs.k8s.io/cluster-api-provider-$x=$(go list -mod=mod -m github.com/openshift/cluster-api-provider-$x@release-4.10 | sed -e 's/ /@/')
done

go mod edit -replace github.com/openshift/installer=$(go list -mod=mod -m github.com/jewzaam/installer-aro@release-4.9-azure | sed -e 's/ /@/')
go mod edit -replace github.com/openshift/installer=$(go list -mod=mod -m github.com/jewzaam/installer-aro@release-4.10-azure | sed -e 's/ /@/')

go get -u ./...
@ -19,9 +19,6 @@ allowedImportNames:
    - utildiscovery
  github.com/Azure/ARO-RP/pkg/util/embed:
    - utilembed
  github.com/Azure/ARO-RP/pkg/util/machine:
    - utilmachine
    - ""
  github.com/Azure/ARO-RP/pkg/util/namespace:
    - utilnamespace
    - ""
@ -4,13 +4,20 @@ package main
// Licensed under the Apache License 2.0.

import (
	_ "embed"
	"fmt"
	"go/ast"
	"go/token"
	"log"
	"regexp"
	"strings"

	"github.com/ghodss/yaml"
)

//go:embed allowed-import-names.yaml
var allowedNamesYaml []byte

func isStandardLibrary(path string) bool {
	return !strings.ContainsRune(strings.SplitN(path, "/", 2)[0], '.')
}
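The stdlib check above relies on a simple convention: a module path's first segment is a domain name (and so contains a dot) only for external packages. A standalone sketch of the same heuristic:

```go
package main

import (
	"fmt"
	"strings"
)

// isStandardLibrary reports whether an import path looks like Go standard
// library: its first path segment contains no dot (i.e. is not a domain).
func isStandardLibrary(path string) bool {
	return !strings.ContainsRune(strings.SplitN(path, "/", 2)[0], '.')
}

func main() {
	fmt.Println(isStandardLibrary("net/http"))               // true
	fmt.Println(isStandardLibrary("github.com/ghodss/yaml")) // false
}
```

The validator uses this to skip name checks for stdlib imports while still rejecting any local-name override on them.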
@ -22,16 +29,68 @@ func validateUnderscoreImport(path string) error {

	switch path {
	case "net/http/pprof",
		"github.com/Azure/ARO-RP/pkg/util/scheme":
		"github.com/Azure/ARO-RP/pkg/util/scheme",
		"embed":
		return nil
	}

	return fmt.Errorf("invalid _ import %s", path)
}

// acceptableNames returns a list of acceptable names for an import; empty
type importValidator struct {
	AllowedNames map[string][]string `json:"allowedImportNames"`
}

func initValidator() importValidator {
	allowed := importValidator{}
	err := yaml.Unmarshal(allowedNamesYaml, &allowed)
	if err != nil {
		log.Fatalf("error while unmarshalling allowed import names. err: %s", err)
	}
	return allowed
}

func (validator importValidator) isOkFromYaml(name, importedAs string) (bool, []string) {
	for _, v := range validator.AllowedNames[name] {
		if importedAs == v {
			return true, nil
		}
	}
	return false, validator.AllowedNames[name]
}

func (validator importValidator) validateImportName(name, importedAs string) error {
	isAllowed, names := validator.isOkFromYaml(name, importedAs)

	if isAllowed {
		return nil
	}

	isAllowedFromRegex, namesRegex := isOkFromRegex(name, importedAs)

	if isAllowedFromRegex {
		return nil
	}

	names = append(names, namesRegex...)

	return fmt.Errorf("%s is imported as %q, should be %q", name, importedAs, names)
}

func isOkFromRegex(name, importedAs string) (bool, []string) {
	acceptableNames := acceptableNamesRegex(name)

	for _, v := range acceptableNames {
		if v == importedAs {
			return true, nil
		}
	}
	return false, acceptableNames
}

// acceptableNamesRegex returns a list of acceptable names for an import; empty
// string = no import override; nil list = don't care
func acceptableNames(path string) []string {
func acceptableNamesRegex(path string) []string {
	m := regexp.MustCompile(`^github\.com/Azure/ARO-RP/pkg/api/(v[^/]*[0-9])$`).FindStringSubmatch(path)
	if m != nil {
		return []string{m[1]}
@ -102,103 +161,22 @@ func acceptableNames(path string) []string {
		return []string{m[1] + m[2] + "client"}
	}

	switch path {
	case "github.com/Azure/ARO-RP/pkg/frontend/middleware":
		return []string{"", "frontendmiddleware"}
	case "github.com/Azure/ARO-RP/pkg/metrics/statsd/cosmosdb":
		return []string{"dbmetrics"}
	case "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1":
		return []string{"arov1alpha1"}
	case "github.com/Azure/ARO-RP/pkg/operator/apis/preview.aro.openshift.io/v1alpha1":
		return []string{"aropreviewv1alpha1"}
	case "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned":
		return []string{"aroclient"}
	case "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned/fake":
		return []string{"arofake"}
	case "github.com/Azure/ARO-RP/pkg/util/dynamichelper/discovery":
		return []string{"utildiscovery"}
	case "github.com/Azure/ARO-RP/pkg/util/namespace":
		return []string{"", "utilnamespace"}
	case "github.com/Azure/ARO-RP/pkg/util/recover":
		return []string{"", "utilrecover"}
	case "github.com/Azure/ARO-RP/pkg/util/azureclient/mgmt/keyvault":
		return []string{"", "keyvaultclient"}
	case "github.com/Azure/ARO-RP/test/database":
		return []string{"testdatabase"}
	case "github.com/Azure/ARO-RP/test/util/dynamichelper":
		return []string{"testdynamichelper"}
	case "github.com/Azure/ARO-RP/test/util/log":
		return []string{"testlog"}
	case "github.com/Azure/azure-sdk-for-go/services/graphrbac/1.6/graphrbac":
		return []string{"azgraphrbac"}
	case "github.com/Azure/azure-sdk-for-go/services/keyvault/v7.0/keyvault":
		return []string{"azkeyvault"}
	case "github.com/Azure/azure-sdk-for-go/storage":
		return []string{"azstorage"}
	case "github.com/googleapis/gnostic/openapiv2":
		return []string{"openapi_v2"}
	case "github.com/openshift/console-operator/pkg/api":
		return []string{"consoleapi"}
	case "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1":
		return []string{"machinev1beta1"}
	case "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned":
		return []string{"maoclient"}
	case "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned/fake":
		return []string{"maofake"}
	case "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1":
		return []string{"mcv1"}
	case "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned":
		return []string{"mcoclient"}
	case "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned/fake":
		return []string{"mcofake"}
	case "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned/typed/machineconfiguration.openshift.io/v1":
		return []string{"mcoclientv1"}
	case "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/fake":
		return []string{"extensionsfake"}
	case "github.com/openshift/installer/pkg/asset/installconfig/azure":
		return []string{"icazure"}
	case "github.com/openshift/installer/pkg/types/azure":
		return []string{"azuretypes"}
	case "github.com/coreos/stream-metadata-go/arch":
		return []string{"coreosarch"}
	case "github.com/openshift/installer/pkg/rhcos":
		return []string{"rhcospkg"}
	case "golang.org/x/crypto/ssh":
		return []string{"", "cryptossh"}
	case "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1":
		return []string{"extensionsv1beta1"}
	case "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1":
		return []string{"extensionsv1"}
	case "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset":
		return []string{"extensionsclient"}
	case "k8s.io/apimachinery/pkg/api/errors":
		return []string{"kerrors"}
	case "k8s.io/apimachinery/pkg/runtime":
		return []string{"kruntime"}
	case "k8s.io/apimachinery/pkg/apis/meta/v1":
		return []string{"metav1"}
	case "k8s.io/apimachinery/pkg/runtime/serializer/json":
		return []string{"kjson"}
	case "k8s.io/apimachinery/pkg/util/runtime":
		return []string{"utilruntime"}
	case "k8s.io/apimachinery/pkg/version":
		return []string{"kversion"}
	case "k8s.io/client-go/testing":
		return []string{"ktesting"}
	case "k8s.io/client-go/tools/clientcmd/api/v1":
		return []string{"clientcmdv1"}
	case "k8s.io/client-go/tools/metrics":
		return []string{"kmetrics"}
	case "sigs.k8s.io/cluster-api-provider-azure/pkg/apis/azureprovider/v1beta1":
		return []string{"azureproviderv1beta1"}
	case "sigs.k8s.io/controller-runtime":
		return []string{"ctrl"}
	}

	return []string{""}
}

func validateImports(path string, fset *token.FileSet, f *ast.File) (errs []error) {
func importedAs(spec *ast.ImportSpec) string {
	if spec == nil {
		return ""
	}

	if spec.Name == nil {
		return ""
	}

	return spec.Name.Name
}

func validateImports(path string, fset *token.FileSet, f *ast.File) []error {
	for _, prefix := range []string{
		"pkg/client/",
		"pkg/database/cosmosdb/zz_generated_",
@ -212,62 +190,67 @@ func validateImports(path string, fset *token.FileSet, f *ast.File) (errs []erro
		}
	}

nextImport:
	errs := make([]error, 0)
	validator := initValidator()
	for _, imp := range f.Imports {
		value := strings.Trim(imp.Path.Value, `"`)

		if imp.Name != nil && imp.Name.Name == "." {
			// accept dot imports because we check them with golangci-lint
			continue
		if err := validator.validateImport(imp); err != nil {
			errs = append(errs, err)
		}

		if imp.Name != nil && imp.Name.Name == "_" {
			err := validateUnderscoreImport(value)
			if err != nil {
				errs = append(errs, err)
			}
			continue
		}

		switch value {
		case "sigs.k8s.io/yaml", "gopkg.in/yaml.v2":
			errs = append(errs, fmt.Errorf("%s is imported; use github.com/ghodss/yaml", value))
			continue nextImport
		case "github.com/google/uuid", "github.com/satori/go.uuid":
			errs = append(errs, fmt.Errorf("%s is imported; use github.com/gofrs/uuid", value))
			continue nextImport
		}

		if strings.HasPrefix(value, "github.com/Azure/azure-sdk-for-go/profiles") {
			errs = append(errs, fmt.Errorf("%s is imported; use github.com/Azure/azure-sdk-for-go/services/*", value))
			continue
		}

		if strings.HasSuffix(value, "/scheme") &&
			value != "k8s.io/client-go/kubernetes/scheme" {
			errs = append(errs, fmt.Errorf("%s is imported; should probably use k8s.io/client-go/kubernetes/scheme", value))
			continue
		}

		if isStandardLibrary(value) {
			if imp.Name != nil {
				errs = append(errs, fmt.Errorf("overridden import %s", value))
			}
			continue
		}

		names := acceptableNames(value)
		if names == nil {
			continue
		}
		for _, name := range names {
			if name == "" && imp.Name == nil ||
				name != "" && imp.Name != nil && imp.Name.Name == name {
				continue nextImport
			}
		}
		errs = append(errs, fmt.Errorf("%s is imported as %q, should be %q", value, imp.Name, names))
	}

	return
	return errs
}

func (validator importValidator) validateImport(imp *ast.ImportSpec) error {
	packageName := strings.Trim(imp.Path.Value, `"`)

	if imp.Name != nil && imp.Name.Name == "." {
		// accept dot imports because we check them with golangci-lint
		return nil
	}

	if imp.Name != nil && imp.Name.Name == "_" {
		err := validateUnderscoreImport(packageName)
		if err != nil {
			return err
		}
		return nil
	}

	switch packageName {
	case "sigs.k8s.io/yaml", "gopkg.in/yaml.v2":
		return fmt.Errorf("%s is imported; use github.com/ghodss/yaml", packageName)
	case "github.com/google/uuid", "github.com/satori/go.uuid":
		return fmt.Errorf("%s is imported; use github.com/gofrs/uuid", packageName)
	}

	if strings.HasPrefix(packageName, "github.com/Azure/azure-sdk-for-go/profiles") {
		return fmt.Errorf("%s is imported; use github.com/Azure/azure-sdk-for-go/services/*", packageName)
	}

	if strings.HasSuffix(packageName, "/scheme") &&
		packageName != "k8s.io/client-go/kubernetes/scheme" {
		return fmt.Errorf("%s is imported; should probably use k8s.io/client-go/kubernetes/scheme", packageName)
	}

	if isStandardLibrary(packageName) {
		if imp.Name != nil {
			return fmt.Errorf("overridden import %s", packageName)
		}
		return nil
	}

	names := acceptableNamesRegex(packageName)

	if names == nil {
		return nil
	}

	importedAs := importedAs(imp)
	err := validator.validateImportName(packageName, importedAs)
	if err != nil {
		return err
	}

	return nil
}
|
||||
|
|
|
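The per-import rules moved into validateImport all follow one shape: either reject a path with a suggested replacement, or let it through. A trimmed, standalone sketch of that shape (checkImportPath is a hypothetical name and only a subset of the real rule set is shown):

```go
package main

import (
	"fmt"
	"strings"
)

// checkImportPath mirrors the shape of the validateImport refactor above:
// each rule either rejects an import path with a suggested replacement or
// lets it through. This is an illustrative subset, not the full ARO rule set.
func checkImportPath(path string) error {
	switch path {
	case "sigs.k8s.io/yaml", "gopkg.in/yaml.v2":
		return fmt.Errorf("%s is imported; use github.com/ghodss/yaml", path)
	case "github.com/google/uuid", "github.com/satori/go.uuid":
		return fmt.Errorf("%s is imported; use github.com/gofrs/uuid", path)
	}
	if strings.HasPrefix(path, "github.com/Azure/azure-sdk-for-go/profiles") {
		return fmt.Errorf("%s is imported; use github.com/Azure/azure-sdk-for-go/services/*", path)
	}
	return nil
}

func main() {
	fmt.Println(checkImportPath("gopkg.in/yaml.v2"))
	fmt.Println(checkImportPath("github.com/gofrs/uuid"))
}
```

Returning an error per import (rather than appending inside one large loop) is what lets the caller collect all violations with a single `errs = append(errs, err)`.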
@@ -75,6 +75,13 @@ func (sv openShiftClusterStaticValidator) validate(oc *OpenShiftCluster, isCreat
 		return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "location", "The provided location '%s' is invalid.", oc.Location)
 	}
 
+	// TODO: remove the VM name validation after https://bugzilla.redhat.com/show_bug.cgi?id=2093044 is resolved
+	if isCreate {
+		if !validate.OpenShiftClusterNameLength(oc.Name, oc.Location) {
+			return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "name", "The provided cluster name '%s' exceeds the maximum cluster name length of '%d'.", oc.Name, validate.MaxClusterNameLength)
+		}
+	}
+
 	return sv.validateProperties("properties", &oc.Properties, isCreate)
 }
@@ -11,15 +11,19 @@ import (
 	"time"
 
 	"github.com/Azure/go-autorest/autorest/azure"
+	"github.com/Azure/go-autorest/autorest/to"
+	"github.com/gofrs/uuid"
 
 	"github.com/Azure/ARO-RP/pkg/api"
-	"github.com/Azure/ARO-RP/pkg/util/uuid"
+	apiValidate "github.com/Azure/ARO-RP/pkg/api/validate"
 	"github.com/Azure/ARO-RP/pkg/util/version"
 	"github.com/Azure/ARO-RP/test/validate"
 )
 
 type validateTest struct {
 	name                string
+	clusterName         *string
+	location            *string
 	current             func(oc *OpenShiftCluster)
 	modify              func(oc *OpenShiftCluster)
 	requireD2sV3Workers bool
@@ -35,20 +39,23 @@ const (
 var (
 	subscriptionID = "00000000-0000-0000-0000-000000000000"
-	id             = fmt.Sprintf("/subscriptions/%s/resourcegroups/resourceGroup/providers/microsoft.redhatopenshift/openshiftclusters/resourceName", subscriptionID)
 )
 
-func validOpenShiftCluster() *OpenShiftCluster {
+func getResourceID(clusterName string) string {
+	return fmt.Sprintf("/subscriptions/%s/resourcegroups/resourceGroup/providers/microsoft.redhatopenshift/openshiftclusters/%s", subscriptionID, clusterName)
+}
+
+func validOpenShiftCluster(name, location string) *OpenShiftCluster {
 	timestamp, err := time.Parse(time.RFC3339, "2021-01-23T12:34:54.0000000Z")
 	if err != nil {
 		panic(err)
 	}
 
 	oc := &OpenShiftCluster{
-		ID:       id,
-		Name:     "resourceName",
+		ID:       getResourceID(name),
+		Name:     name,
 		Type:     "Microsoft.RedHatOpenShift/OpenShiftClusters",
-		Location: "location",
+		Location: location,
 		Tags: Tags{
 			"key": "value",
 		},
@@ -117,22 +124,31 @@ func runTests(t *testing.T, mode testMode, tests []*validateTest) {
 	t.Run(string(mode), func(t *testing.T) {
 		for _, tt := range tests {
 			t.Run(tt.name, func(t *testing.T) {
+				// default values if not set
+				if tt.location == nil {
+					tt.location = to.StringPtr("location")
+				}
+
+				if tt.clusterName == nil {
+					tt.clusterName = to.StringPtr("resourceName")
+				}
+
 				v := &openShiftClusterStaticValidator{
-					location:            "location",
+					location:            *tt.location,
 					domain:              "location.aroapp.io",
 					requireD2sV3Workers: tt.requireD2sV3Workers,
-					resourceID:          id,
+					resourceID:          getResourceID(*tt.clusterName),
 					r: azure.Resource{
 						SubscriptionID: subscriptionID,
 						ResourceGroup:  "resourceGroup",
 						Provider:       "Microsoft.RedHatOpenShift",
 						ResourceType:   "openshiftClusters",
-						ResourceName:   "resourceName",
+						ResourceName:   *tt.clusterName,
 					},
 				}
 
 				validOCForTest := func() *OpenShiftCluster {
-					oc := validOpenShiftCluster()
+					oc := validOpenShiftCluster(*tt.clusterName, *tt.location)
 					if tt.current != nil {
 						tt.current(oc)
 					}
@@ -177,7 +193,11 @@ func runTests(t *testing.T, mode testMode, tests []*validateTest) {
 }
 
 func TestOpenShiftClusterStaticValidate(t *testing.T) {
-	tests := []*validateTest{
+	clusterName19 := "19characters-aaaaaa"
+	clusterName30 := "thisis30characterslong-aaaaaa"
+	nonZonalRegion := "australiasoutheast"
+
+	commonTests := []*validateTest{
 		{
 			name: "valid",
 		},
@@ -209,10 +229,38 @@ func TestOpenShiftClusterStaticValidate(t *testing.T) {
 			},
 			wantErr: "400: InvalidParameter: location: The provided location 'invalid' is invalid.",
 		},
+		{
+			name:        "valid - zonal regions can exceed max cluster name length",
+			clusterName: &clusterName30,
+		},
 	}
 
-	runTests(t, testModeCreate, tests)
-	runTests(t, testModeUpdate, tests)
+	createTests := []*validateTest{
+		{
+			name:        "invalid - non-zonal regions cannot exceed max cluster name length on cluster create",
+			clusterName: &clusterName30,
+			location:    &nonZonalRegion,
+			wantErr:     fmt.Sprintf("400: InvalidParameter: name: The provided cluster name '%s' exceeds the maximum cluster name length of '%d'.", clusterName30, apiValidate.MaxClusterNameLength),
+		},
+		{
+			name:        "valid - non-zonal region less than max cluster name length",
+			clusterName: &clusterName19,
+			location:    &nonZonalRegion,
+		},
+	}
+
+	updateTests := []*validateTest{
+		{
+			name:        "valid - existing cluster names > max cluster name length still work on cluster update",
+			clusterName: &clusterName30,
+			location:    &nonZonalRegion,
+		},
+	}
+
+	runTests(t, testModeCreate, commonTests)
+	runTests(t, testModeUpdate, commonTests)
+	runTests(t, testModeCreate, createTests)
+	runTests(t, testModeUpdate, updateTests)
 }
 
 func TestOpenShiftClusterStaticValidateProperties(t *testing.T) {
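The create-only name-length rule these tests exercise can be sketched standalone; the length cap and the zonal-region set below are illustrative assumptions, not the real validate.OpenShiftClusterNameLength logic:

```go
package main

import "fmt"

// maxClusterNameLength is a hypothetical cap used only for illustration.
const maxClusterNameLength = 19

// zonalRegions is an assumed set: zonal regions are exempt from the cap,
// matching the "zonal regions can exceed max cluster name length" test above.
var zonalRegions = map[string]bool{
	"eastus": true, // assumption for illustration
}

// clusterNameLengthOK sketches the rule: non-zonal regions enforce the cap.
func clusterNameLengthOK(name, location string) bool {
	return zonalRegions[location] || len(name) <= maxClusterNameLength
}

func main() {
	fmt.Println(clusterNameLengthOK("thisis30characterslong-aaaaaa", "australiasoutheast"))
}
```

Note the rule is only applied on create, so pre-existing clusters with long names keep working on update, as the updateTests case asserts.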
@@ -846,7 +894,7 @@ func TestOpenShiftClusterStaticValidateDelta(t *testing.T) {
 		{
 			name: "clientId change",
 			modify: func(oc *OpenShiftCluster) {
-				oc.Properties.ServicePrincipalProfile.ClientID = uuid.DefaultGenerator.Generate()
+				oc.Properties.ServicePrincipalProfile.ClientID = uuid.Must(uuid.NewV4()).String()
 			},
 		},
 		{
@@ -28,12 +28,18 @@ func addRequiredResources(requiredResources map[string]int, vmSize api.VMSize, c
 		api.VMSizeStandardD16sV3: {CoreCount: 16, Family: "standardDSv3Family"},
 		api.VMSizeStandardD32sV3: {CoreCount: 32, Family: "standardDSv3Family"},
 
-		api.VMSizeStandardE4sV3:   {CoreCount: 4, Family: "standardESv3Family"},
-		api.VMSizeStandardE8sV3:   {CoreCount: 8, Family: "standardESv3Family"},
-		api.VMSizeStandardE16sV3:  {CoreCount: 16, Family: "standardESv3Family"},
-		api.VMSizeStandardE32sV3:  {CoreCount: 32, Family: "standardESv3Family"},
-		api.VMSizeStandardE64isV3: {CoreCount: 64, Family: "standardESv3Family"},
-		api.VMSizeStandardE64iV3:  {CoreCount: 64, Family: "standardESv3Family"},
+		api.VMSizeStandardE4sV3:     {CoreCount: 4, Family: "standardESv3Family"},
+		api.VMSizeStandardE8sV3:     {CoreCount: 8, Family: "standardESv3Family"},
+		api.VMSizeStandardE16sV3:    {CoreCount: 16, Family: "standardESv3Family"},
+		api.VMSizeStandardE32sV3:    {CoreCount: 32, Family: "standardESv3Family"},
+		api.VMSizeStandardE64isV3:   {CoreCount: 64, Family: "standardESv3Family"},
+		api.VMSizeStandardE64iV3:    {CoreCount: 64, Family: "standardESv3Family"},
+		api.VMSizeStandardE80isV4:   {CoreCount: 80, Family: "standardEISv4Family"},
+		api.VMSizeStandardE80idsV4:  {CoreCount: 80, Family: "standardEIDSv4Family"},
+		api.VMSizeStandardE104iV5:   {CoreCount: 104, Family: "standardEIv5Family"},
+		api.VMSizeStandardE104isV5:  {CoreCount: 104, Family: "standardEISv5Family"},
+		api.VMSizeStandardE104idV5:  {CoreCount: 104, Family: "standardEIDv5Family"},
+		api.VMSizeStandardE104idsV5: {CoreCount: 104, Family: "standardEIDSv5Family"},
 
 		api.VMSizeStandardF4sV2: {CoreCount: 4, Family: "standardFSv2Family"},
 		api.VMSizeStandardF8sV2: {CoreCount: 8, Family: "standardFSv2Family"},
@@ -23,22 +23,50 @@ func TestValidateVMSku(t *testing.T) {
 		name                  string
 		restrictions          mgmtcompute.ResourceSkuRestrictionsReasonCode
 		restrictionLocation   *[]string
+		restrictedZones       []string
+		targetLocation        string
 		workerProfile1Sku     string
 		workerProfile2Sku     string
 		masterProfileSku      string
 		availableSku          string
+		availableSku2         string
 		restrictedSku         string
 		resourceSkusClientErr error
 		wantErr               string
 	}{
 		{
-			name:              "worker and master sku are valid",
+			name:              "worker and master skus are valid",
 			workerProfile1Sku: "Standard_D4s_v2",
 			workerProfile2Sku: "Standard_D4s_v2",
 			masterProfileSku:  "Standard_D4s_v2",
 			availableSku:      "Standard_D4s_v2",
 		},
+		{
+			name:              "worker and master skus are distinct, both valid",
+			workerProfile1Sku: "Standard_E104i_v5",
+			workerProfile2Sku: "Standard_E104i_v5",
+			masterProfileSku:  "Standard_D4s_v2",
+			availableSku:      "Standard_E104i_v5",
+			availableSku2:     "Standard_D4s_v2",
+		},
+		{
+			name:              "worker and master skus are distinct, one invalid",
+			workerProfile1Sku: "Standard_E104i_v5",
+			workerProfile2Sku: "Standard_E104i_v5",
+			masterProfileSku:  "Standard_D4s_v2",
+			availableSku:      "Standard_E104i_v5",
+			availableSku2:     "Standard_E104i_v5",
+			wantErr:           "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_D4s_v2' is unavailable in region 'eastus'",
+		},
+		{
+			name:              "worker and master skus are distinct, both invalid",
+			workerProfile1Sku: "Standard_E104i_v5",
+			workerProfile2Sku: "Standard_E104i_v5",
+			masterProfileSku:  "Standard_D4s_v2",
+			availableSku:      "Standard_L8s_v2",
+			availableSku2:     "Standard_L16s_v2",
+			wantErr:           "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_D4s_v2' is unavailable in region 'eastus'",
+		},
 		{
 			name:              "unable to retrieve skus information",
 			workerProfile1Sku: "Standard_D4s_v2",
@@ -96,12 +124,30 @@ func TestValidateVMSku(t *testing.T) {
 			restrictedSku: "Standard_L80",
 			wantErr:       "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_L80' is restricted in region 'eastus' for selected subscription",
 		},
+		{
+			name:         "sku is restricted in a single zone",
+			restrictions: mgmtcompute.NotAvailableForSubscription,
+			restrictionLocation: &[]string{
+				"eastus",
+			},
+			restrictedZones:   []string{"3"},
+			workerProfile1Sku: "Standard_D4s_v2",
+			workerProfile2Sku: "Standard_D4s_v2",
+			masterProfileSku:  "Standard_L80",
+			availableSku:      "Standard_D4s_v2",
+			restrictedSku:     "Standard_L80",
+			wantErr:           "400: InvalidParameter: properties.masterProfile.VMSize: The selected SKU 'Standard_L80' is restricted in region 'eastus' for selected subscription",
+		},
 	} {
 		t.Run(tt.name, func(t *testing.T) {
+			if tt.targetLocation == "" {
+				tt.targetLocation = "eastus"
+			}
+
+			if tt.restrictedZones == nil {
+				tt.restrictedZones = []string{"1", "2", "3"}
+			}
+
 			controller := gomock.NewController(t)
 			defer controller.Finish()
@@ -132,11 +178,21 @@ func TestValidateVMSku(t *testing.T) {
 					Capabilities: &[]mgmtcompute.ResourceSkuCapabilities{},
 					ResourceType: to.StringPtr("virtualMachines"),
 				},
+				{
+					Name:      &tt.availableSku2,
+					Locations: &[]string{"eastus"},
+					LocationInfo: &[]mgmtcompute.ResourceSkuLocationInfo{
+						{Zones: &[]string{"1, 2, 3"}},
+					},
+					Restrictions: &[]mgmtcompute.ResourceSkuRestrictions{},
+					Capabilities: &[]mgmtcompute.ResourceSkuCapabilities{},
+					ResourceType: to.StringPtr("virtualMachines"),
+				},
 				{
 					Name:      &tt.restrictedSku,
 					Locations: &[]string{tt.targetLocation},
 					LocationInfo: &[]mgmtcompute.ResourceSkuLocationInfo{
-						{Zones: &[]string{"1, 2, 3"}},
+						{Zones: &tt.restrictedZones},
 					},
 					Restrictions: &[]mgmtcompute.ResourceSkuRestrictions{
 						{
@@ -7,66 +7,88 @@ import (
 	"github.com/Azure/ARO-RP/pkg/api"
 )
 
+var supportedMasterVMSizes = map[api.VMSize]bool{
+	// General purpose
+	api.VMSizeStandardD8sV3:  true,
+	api.VMSizeStandardD16sV3: true,
+	api.VMSizeStandardD32sV3: true,
+	// Memory optimized
+	api.VMSizeStandardE64iV3:    true,
+	api.VMSizeStandardE64isV3:   true,
+	api.VMSizeStandardE80isV4:   true,
+	api.VMSizeStandardE80idsV4:  true,
+	api.VMSizeStandardE104iV5:   true,
+	api.VMSizeStandardE104isV5:  true,
+	api.VMSizeStandardE104idV5:  true,
+	api.VMSizeStandardE104idsV5: true,
+	// Compute optimized
+	api.VMSizeStandardF72sV2: true,
+	// Memory and storage optimized
+	api.VMSizeStandardGS5: true,
+	api.VMSizeStandardG5:  true,
+	// Memory and compute optimized
+	api.VMSizeStandardM128ms: true,
+}
+
+var supportedWorkerVMSizes = map[api.VMSize]bool{
+	// General purpose
+	api.VMSizeStandardD4asV4:  true,
+	api.VMSizeStandardD8asV4:  true,
+	api.VMSizeStandardD16asV4: true,
+	api.VMSizeStandardD32asV4: true,
+	api.VMSizeStandardD4sV3:   true,
+	api.VMSizeStandardD8sV3:   true,
+	api.VMSizeStandardD16sV3:  true,
+	api.VMSizeStandardD32sV3:  true,
+	// Memory optimized
+	api.VMSizeStandardE4sV3:     true,
+	api.VMSizeStandardE8sV3:     true,
+	api.VMSizeStandardE16sV3:    true,
+	api.VMSizeStandardE32sV3:    true,
+	api.VMSizeStandardE64isV3:   true,
+	api.VMSizeStandardE64iV3:    true,
+	api.VMSizeStandardE80isV4:   true,
+	api.VMSizeStandardE80idsV4:  true,
+	api.VMSizeStandardE104iV5:   true,
+	api.VMSizeStandardE104isV5:  true,
+	api.VMSizeStandardE104idV5:  true,
+	api.VMSizeStandardE104idsV5: true,
+	// Compute optimized
+	api.VMSizeStandardF4sV2:  true,
+	api.VMSizeStandardF8sV2:  true,
+	api.VMSizeStandardF16sV2: true,
+	api.VMSizeStandardF32sV2: true,
+	api.VMSizeStandardF72sV2: true,
+	// Memory and storage optimized
+	api.VMSizeStandardG5:  true,
+	api.VMSizeStandardGS5: true,
+	// Memory and compute optimized
+	api.VMSizeStandardM128ms: true,
+	// Storage optimized
+	api.VMSizeStandardL4s:    true,
+	api.VMSizeStandardL8s:    true,
+	api.VMSizeStandardL16s:   true,
+	api.VMSizeStandardL32s:   true,
+	api.VMSizeStandardL8sV2:  true,
+	api.VMSizeStandardL16sV2: true,
+	api.VMSizeStandardL32sV2: true,
+	api.VMSizeStandardL48sV2: true,
+	api.VMSizeStandardL64sV2: true,
+}
+
 func DiskSizeIsValid(sizeGB int) bool {
 	return sizeGB >= 128
 }
 
 func VMSizeIsValid(vmSize api.VMSize, requiredD2sV3Workers, isMaster bool) bool {
 	if isMaster {
-		switch vmSize {
-		case api.VMSizeStandardD8sV3,
-			api.VMSizeStandardD16sV3,
-			api.VMSizeStandardD32sV3,
-			api.VMSizeStandardE64iV3,
-			api.VMSizeStandardE64isV3,
-			api.VMSizeStandardF72sV2,
-			api.VMSizeStandardGS5,
-			api.VMSizeStandardG5,
-			api.VMSizeStandardM128ms:
-			return true
-		}
-	} else {
-		if requiredD2sV3Workers {
-			switch vmSize {
-			case api.VMSizeStandardD2sV3:
-				return true
-			}
-		} else {
-			switch vmSize {
-			case api.VMSizeStandardD4asV4,
-				api.VMSizeStandardD8asV4,
-				api.VMSizeStandardD16asV4,
-				api.VMSizeStandardD32asV4,
-				api.VMSizeStandardD4sV3,
-				api.VMSizeStandardD8sV3,
-				api.VMSizeStandardD16sV3,
-				api.VMSizeStandardD32sV3,
-				api.VMSizeStandardE4sV3,
-				api.VMSizeStandardE8sV3,
-				api.VMSizeStandardE16sV3,
-				api.VMSizeStandardE32sV3,
-				api.VMSizeStandardE64iV3,
-				api.VMSizeStandardE64isV3,
-				api.VMSizeStandardF4sV2,
-				api.VMSizeStandardF8sV2,
-				api.VMSizeStandardF16sV2,
-				api.VMSizeStandardF32sV2,
-				api.VMSizeStandardF72sV2,
-				api.VMSizeStandardG5,
-				api.VMSizeStandardGS5,
-				api.VMSizeStandardM128ms,
-				api.VMSizeStandardL4s,
-				api.VMSizeStandardL8s,
-				api.VMSizeStandardL16s,
-				api.VMSizeStandardL32s,
-				api.VMSizeStandardL8sV2,
-				api.VMSizeStandardL16sV2,
-				api.VMSizeStandardL32sV2,
-				api.VMSizeStandardL48sV2,
-				api.VMSizeStandardL64sV2:
-				return true
-			}
-		}
+		return supportedMasterVMSizes[vmSize]
 	}
 
+	if (supportedWorkerVMSizes[vmSize] && !requiredD2sV3Workers) ||
+		(requiredD2sV3Workers && vmSize == api.VMSizeStandardD2sV3) {
+		return true
+	}
+
 	return false
 }
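The refactor above replaces nested switch statements with map lookups. The same logic, restated with illustrative size subsets rather than the full maps:

```go
package main

import "fmt"

type VMSize string

// Trimmed, illustrative subsets of the supported-size maps in the diff above.
var supportedMasterVMSizes = map[VMSize]bool{"Standard_D8s_v3": true}
var supportedWorkerVMSizes = map[VMSize]bool{"Standard_D4s_v3": true}

// vmSizeIsValid restates the refactored control flow: masters are a pure map
// lookup; workers are either pinned to D2s_v3 or looked up in the worker map.
func vmSizeIsValid(vmSize VMSize, requireD2sV3Workers, isMaster bool) bool {
	if isMaster {
		return supportedMasterVMSizes[vmSize]
	}
	if requireD2sV3Workers {
		return vmSize == "Standard_D2s_v3"
	}
	return supportedWorkerVMSizes[vmSize]
}

func main() {
	fmt.Println(vmSizeIsValid("Standard_D8s_v3", false, true))
}
```

A missing key in a Go `map[K]bool` yields the zero value `false`, which is why the lookup alone can replace the long `case` lists.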
@@ -10,10 +10,10 @@ import (
 	"github.com/Azure/go-autorest/autorest/azure"
 	configclient "github.com/openshift/client-go/config/clientset/versioned"
 	imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
 	operatorclient "github.com/openshift/client-go/operator/clientset/versioned"
 	samplesclient "github.com/openshift/client-go/samples/clientset/versioned"
 	securityclient "github.com/openshift/client-go/security/clientset/versioned"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
 	mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
 	"github.com/sirupsen/logrus"
 	extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"

@@ -89,7 +89,7 @@ type manager struct {
 	kubernetescli kubernetes.Interface
 	extensionscli extensionsclient.Interface
-	maocli        maoclient.Interface
+	maocli        machineclient.Interface
 	mcocli        mcoclient.Interface
 	operatorcli   operatorclient.Interface
 	configcli     configclient.Interface
@@ -29,17 +29,36 @@ import (
 	"github.com/Azure/ARO-RP/pkg/util/stringutils"
 )
 
-func (m *manager) deleteNic(ctx context.Context, resource mgmtfeatures.GenericResourceExpanded) error {
+// deleteNic deletes the network interface resource by first fetching the resource using the interface
+// client, checking the provisioning state to ensure it is 'succeeded', and then deletes it
+// If the nic is in a failed provisioning state, it will perform an empty CreateOrUpdate on it to put it back into
+// a succeeded provisioning state.
+//
+// The resources client incorrectly reports provisioningState hence we must use the interface client to fetch
+// this resource again so we get the correct provisioningState instead of always just "Succeeded"
+func (m *manager) deleteNic(ctx context.Context, nicName string) error {
 	resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
 
-	if resource.ProvisioningState != nil && !strings.EqualFold(*resource.ProvisioningState, "succeeded") {
-		m.log.Printf("NIC '%s' is not in a succeeded provisioning state, attempting to reconcile prior to deletion.", *resource.ID)
-		err := m.interfaces.CreateOrUpdateAndWait(ctx, resourceGroup, *resource.Name, mgmtnetwork.Interface{})
+	nic, err := m.interfaces.Get(ctx, resourceGroup, nicName, "")
+
+	// nic is already gone which typically happens on PLS / PE nics
+	// as they are deleted in a different step
+	if detailedErr, ok := err.(autorest.DetailedError); ok &&
+		detailedErr.StatusCode == http.StatusNotFound {
+		return nil
+	}
+	if err != nil {
+		return err
+	}
+
+	if nic.ProvisioningState == mgmtnetwork.Failed {
+		m.log.Printf("NIC '%s' is in a Failed provisioning state, attempting to reconcile prior to deletion.", *nic.ID)
+		err := m.interfaces.CreateOrUpdateAndWait(ctx, resourceGroup, *nic.Name, mgmtnetwork.Interface{})
 		if err != nil {
 			return err
 		}
 	}
-	return m.interfaces.DeleteAndWait(ctx, resourceGroup, *resource.Name)
+	return m.interfaces.DeleteAndWait(ctx, resourceGroup, *nic.Name)
 }
 
 func (m *manager) deletePrivateDNSVirtualNetworkLinks(ctx context.Context, resourceID string) error {

@@ -147,7 +166,7 @@ var deleteOrder = map[string]int{
 func (m *manager) deleteResources(ctx context.Context) error {
 	resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
 
-	resources, err := m.resources.ListByResourceGroup(ctx, resourceGroup, "", "provisioningState", nil)
+	resources, err := m.resources.ListByResourceGroup(ctx, resourceGroup, "", "", nil)
 	if detailedErr, ok := err.(autorest.DetailedError); ok &&
 		(detailedErr.StatusCode == http.StatusNotFound ||
 			detailedErr.StatusCode == http.StatusForbidden) {

@@ -203,7 +222,7 @@ func (m *manager) deleteResources(ctx context.Context) error {
 		}
 
 		case "microsoft.network/networkinterfaces":
-			err = m.deleteNic(ctx, *resource)
+			err = m.deleteNic(ctx, *resource.Name)
 			if err != nil {
 				return err
 			}
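The deleteNic flow above (fetch, treat a 404 as success, reconcile a Failed NIC with an empty update, then delete) can be sketched against a hypothetical client interface; the types below are illustrative stand-ins, not the Azure SDK:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the SDK's 404 DetailedError.
var errNotFound = errors.New("not found")

// nicClient is a hypothetical stand-in for the Azure interfaces client.
type nicClient interface {
	Get(name string) (provisioningState string, err error)
	CreateOrUpdate(name string) error
	Delete(name string) error
}

// deleteNic mirrors the flow in the diff above: a missing NIC is success,
// and a Failed NIC is reconciled with an empty update before deletion.
func deleteNic(c nicClient, name string) error {
	state, err := c.Get(name)
	if errors.Is(err, errNotFound) {
		return nil // already gone, e.g. PLS/PE NICs deleted in a different step
	}
	if err != nil {
		return err
	}
	if state == "Failed" {
		// an empty update moves the NIC back to a Succeeded provisioning state
		if err := c.CreateOrUpdate(name); err != nil {
			return err
		}
	}
	return c.Delete(name)
}

// fakeClient records calls for the usage example below.
type fakeClient struct {
	state      string
	reconciled bool
	deleted    bool
}

func (f *fakeClient) Get(string) (string, error)  { return f.state, nil }
func (f *fakeClient) CreateOrUpdate(string) error { f.reconciled = true; return nil }
func (f *fakeClient) Delete(string) error         { f.deleted = true; return nil }

func main() {
	f := &fakeClient{state: "Failed"}
	err := deleteNic(f, "nic-name")
	fmt.Println(err, f.reconciled, f.deleted)
}
```

Fetching the state with the interfaces client, rather than trusting the resources client's listing, is the point of the change: the listing reported "Succeeded" unconditionally.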
@@ -6,10 +6,11 @@ package cluster
 import (
 	"context"
 	"fmt"
+	"net/http"
 	"testing"
 
-	mgmtfeatures "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2019-07-01/features"
-	"github.com/Azure/go-autorest/autorest/to"
+	mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"
+	"github.com/Azure/go-autorest/autorest"
 	"github.com/golang/mock/gomock"
 	"github.com/sirupsen/logrus"

@@ -24,50 +25,63 @@ func TestDeleteNic(t *testing.T) {
 	clusterRG := "cluster-rg"
 	nicName := "nic-name"
 	location := "eastus"
+	resourceId := fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/networkInterfaces/%s", subscription, clusterRG, nicName)
+
+	nic := mgmtnetwork.Interface{
+		Name:                      &nicName,
+		Location:                  &location,
+		ID:                        &resourceId,
+		InterfacePropertiesFormat: &mgmtnetwork.InterfacePropertiesFormat{},
+	}
 
 	tests := []struct {
-		name              string
-		mocks             func(*mock_network.MockInterfacesClient)
-		provisioningState *string
-		wantErr           string
+		name    string
+		mocks   func(*mock_network.MockInterfacesClient)
+		wantErr string
 	}{
 		{
 			name: "nic is in succeeded provisioning state",
 			mocks: func(networkInterfaces *mock_network.MockInterfacesClient) {
+				nic.InterfacePropertiesFormat.ProvisioningState = mgmtnetwork.Succeeded
+				networkInterfaces.EXPECT().Get(gomock.Any(), clusterRG, nicName, "").Return(nic, nil)
 				networkInterfaces.EXPECT().DeleteAndWait(gomock.Any(), clusterRG, nicName).Return(nil)
 			},
-			provisioningState: to.StringPtr("SUCCEEDED"),
 		},
 		{
 			name: "nic is in failed provisioning state",
 			mocks: func(networkInterfaces *mock_network.MockInterfacesClient) {
+				nic.InterfacePropertiesFormat.ProvisioningState = mgmtnetwork.Failed
+				networkInterfaces.EXPECT().Get(gomock.Any(), clusterRG, nicName, "").Return(nic, nil)
 				networkInterfaces.EXPECT().CreateOrUpdateAndWait(gomock.Any(), clusterRG, nicName, gomock.Any()).Return(nil)
 				networkInterfaces.EXPECT().DeleteAndWait(gomock.Any(), clusterRG, nicName).Return(nil)
			},
-			provisioningState: to.StringPtr("FAILED"),
 		},
 		{
 			name: "provisioning state is failed and CreateOrUpdateAndWait returns error",
 			mocks: func(networkInterfaces *mock_network.MockInterfacesClient) {
+				nic.InterfacePropertiesFormat.ProvisioningState = mgmtnetwork.Failed
+				networkInterfaces.EXPECT().Get(gomock.Any(), clusterRG, nicName, "").Return(nic, nil)
 				networkInterfaces.EXPECT().CreateOrUpdateAndWait(gomock.Any(), clusterRG, nicName, gomock.Any()).Return(fmt.Errorf("Failed to update"))
 			},
-			provisioningState: to.StringPtr("FAILED"),
-			wantErr:           "Failed to update",
+			wantErr: "Failed to update",
 		},
+		{
+			name: "nic no longer exists - do nothing",
+			mocks: func(networkInterfaces *mock_network.MockInterfacesClient) {
+				notFound := autorest.DetailedError{
+					StatusCode: http.StatusNotFound,
+				}
+				networkInterfaces.EXPECT().Get(gomock.Any(), clusterRG, nicName, "").Return(nic, notFound)
+			},
+		},
 		{
 			name: "DeleteAndWait returns error",
 			mocks: func(networkInterfaces *mock_network.MockInterfacesClient) {
+				nic.InterfacePropertiesFormat.ProvisioningState = mgmtnetwork.Succeeded
+				networkInterfaces.EXPECT().Get(gomock.Any(), clusterRG, nicName, "").Return(nic, nil)
 				networkInterfaces.EXPECT().DeleteAndWait(gomock.Any(), clusterRG, nicName).Return(fmt.Errorf("Failed to delete"))
 			},
-			provisioningState: to.StringPtr("SUCCEEDED"),
-			wantErr:           "Failed to delete",
+			wantErr: "Failed to delete",
 		},
-		{
-			name: "provisioningState is nil",
-			mocks: func(networkInterfaces *mock_network.MockInterfacesClient) {
-				networkInterfaces.EXPECT().DeleteAndWait(gomock.Any(), clusterRG, nicName).Return(nil)
-			},
-			provisioningState: nil,
-			wantErr:           "Failed to delete",
-		},
 	}

@@ -97,13 +111,7 @@ func TestDeleteNic(t *testing.T) {
 				interfaces: networkInterfaces,
 			}
 
-			resource := mgmtfeatures.GenericResourceExpanded{
-				Name:              to.StringPtr(nicName),
-				ID:                to.StringPtr(fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/networkInterfaces/%s", subscription, clusterRG, nicName)),
-				ProvisioningState: tt.provisioningState,
-			}
-
-			err := m.deleteNic(ctx, resource)
+			err := m.deleteNic(ctx, nicName)
 			if err != nil && err.Error() != tt.wantErr {
 				t.Errorf("got error: '%s'", err.Error())
 			}
@@ -8,7 +8,9 @@ import (
 	"crypto/rand"
 	"encoding/hex"
 	"encoding/json"
+	"fmt"
+	"net/http"
 	"regexp"
 	"strings"
 
 	mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"

@@ -20,8 +22,7 @@ import (
 	"github.com/openshift/installer/pkg/asset/releaseimage"
 	"github.com/openshift/installer/pkg/asset/targets"
 	"github.com/openshift/installer/pkg/asset/templates/content/bootkube"
-	"github.com/openshift/installer/pkg/types"
-	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	utilrand "k8s.io/apimachinery/pkg/util/rand"
 
 	"github.com/Azure/ARO-RP/pkg/api"
 	"github.com/Azure/ARO-RP/pkg/bootstraplogging"
@@ -36,29 +37,14 @@ func (m *manager) createDNS(ctx context.Context) error {
 	return m.dns.Create(ctx, m.doc.OpenShiftCluster)
 }
 
-func (m *manager) ensureInfraID(ctx context.Context, installConfig *installconfig.InstallConfig) error {
+func (m *manager) ensureInfraID(ctx context.Context) (err error) {
 	if m.doc.OpenShiftCluster.Properties.InfraID != "" {
 		return nil
 	}
 
-	g := graph.Graph{}
-	g.Set(&installconfig.InstallConfig{
-		Config: &types.InstallConfig{
-			ObjectMeta: metav1.ObjectMeta{
-				Name: strings.ToLower(m.doc.OpenShiftCluster.Name),
-			},
-		},
-	})
-
-	err := g.Resolve(&installconfig.ClusterID{})
-	if err != nil {
-		return err
-	}
-
-	clusterID := g.Get(&installconfig.ClusterID{}).(*installconfig.ClusterID)
-
+	// generate an infra ID that is 27 characters long with 5 bytes of them random
+	infraID := generateInfraID(strings.ToLower(m.doc.OpenShiftCluster.Name), 27, 5)
 	m.doc, err = m.db.PatchWithLease(ctx, m.doc.Key, func(doc *api.OpenShiftClusterDocument) error {
-		doc.OpenShiftCluster.Properties.InfraID = clusterID.InfraID
+		doc.OpenShiftCluster.Properties.InfraID = infraID
 		return nil
 	})
 	return err
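The generateInfraID helper called above isn't shown in this diff; the following is a sketch of the behaviour its comment describes (truncate the lowercased name, then append a random suffix so the whole ID fits maxLen). The real helper uses the utilrand import added above; this stand-in's details are assumptions for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
)

// generateInfraID sketches the behaviour described in the diff: truncate the
// base name so that name + "-" + suffix fits within maxLen, then append
// randomLen random lowercase characters. Implementation details here are
// assumptions, not the real ARO helper.
func generateInfraID(base string, maxLen, randomLen int) string {
	maxBase := maxLen - randomLen - 1 // reserve room for "-" and the suffix
	if len(base) > maxBase {
		base = base[:maxBase]
	}
	suffix := make([]byte, randomLen)
	for i := range suffix {
		suffix[i] = byte('a' + rand.Intn(26))
	}
	return base + "-" + string(suffix)
}

func main() {
	fmt.Println(len(generateInfraID("mycluster", 27, 5)))
}
```

Generating the ID locally (instead of resolving the installer's graph assets) is what removes the installconfig parameter from ensureInfraID.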
@@ -69,7 +55,7 @@ func (m *manager) ensureResourceGroup(ctx context.Context) error {
 	group := mgmtfeatures.ResourceGroup{
 		Location:  &m.doc.OpenShiftCluster.Location,
-		ManagedBy: to.StringPtr(m.doc.OpenShiftCluster.ID),
+		ManagedBy: &m.doc.OpenShiftCluster.ID,
 	}
 	if m.env.IsLocalDevelopmentMode() {
 		// grab tags so we do not accidently remove them on createOrUpdate, set purge tag to true for dev clusters
@@ -124,29 +110,30 @@ func (m *manager) ensureResourceGroup(ctx context.Context) error {
 	return m.env.EnsureARMResourceGroupRoleAssignment(ctx, m.fpAuthorizer, resourceGroup)
 }
 
-func (m *manager) deployStorageTemplate(ctx context.Context, installConfig *installconfig.InstallConfig) error {
+func (m *manager) deployStorageTemplate(ctx context.Context) error {
 	resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
 	infraID := m.doc.OpenShiftCluster.Properties.InfraID
 
 	clusterStorageAccountName := "cluster" + m.doc.OpenShiftCluster.Properties.StorageSuffix
+	azureRegion := strings.ToLower(m.doc.OpenShiftCluster.Location) // Used in k8s object names, so must pass DNS-1123 validation
 
 	resources := []*arm.Resource{
-		m.storageAccount(clusterStorageAccountName, installConfig.Config.Azure.Region, true),
+		m.storageAccount(clusterStorageAccountName, azureRegion, true),
 		m.storageAccountBlobContainer(clusterStorageAccountName, "ignition"),
 		m.storageAccountBlobContainer(clusterStorageAccountName, "aro"),
-		m.storageAccount(m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName, installConfig.Config.Azure.Region, true),
+		m.storageAccount(m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName, azureRegion, true),
 		m.storageAccountBlobContainer(m.doc.OpenShiftCluster.Properties.ImageRegistryStorageAccountName, "image-registry"),
-		m.clusterNSG(infraID, installConfig.Config.Azure.Region),
+		m.clusterNSG(infraID, azureRegion),
 		m.clusterServicePrincipalRBAC(),
-		m.networkPrivateLinkService(installConfig),
-		m.networkPublicIPAddress(installConfig, infraID+"-pip-v4"),
-		m.networkInternalLoadBalancer(installConfig),
-		m.networkPublicLoadBalancer(installConfig),
+		m.networkPrivateLinkService(azureRegion),
+		m.networkPublicIPAddress(azureRegion, infraID+"-pip-v4"),
+		m.networkInternalLoadBalancer(azureRegion),
+		m.networkPublicLoadBalancer(azureRegion),
 	}
 
 	if m.doc.OpenShiftCluster.Properties.IngressProfiles[0].Visibility == api.VisibilityPublic {
 		resources = append(resources,
-			m.networkPublicIPAddress(installConfig, infraID+"-default-v4"),
+			m.networkPublicIPAddress(azureRegion, infraID+"-default-v4"),
 		)
 	}
@ -166,33 +153,27 @@ func (m *manager) deployStorageTemplate(ctx context.Context, installConfig *inst
|
|||
t.Resources = append(t.Resources, m.denyAssignment())
|
||||
}
|
||||
|
||||
return m.deployARMTemplate(ctx, resourceGroup, "storage", t, nil)
|
||||
return arm.DeployTemplate(ctx, m.log, m.deployments, resourceGroup, "storage", t, nil)
|
||||
}
|
||||
|
||||
func (m *manager) ensureGraph(ctx context.Context, installConfig *installconfig.InstallConfig, image *releaseimage.Image) error {
|
||||
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
|
||||
clusterStorageAccountName := "cluster" + m.doc.OpenShiftCluster.Properties.StorageSuffix
|
||||
infraID := m.doc.OpenShiftCluster.Properties.InfraID
|
||||
|
||||
exists, err := m.graph.Exists(ctx, resourceGroup, clusterStorageAccountName)
|
||||
if err != nil || exists {
|
||||
return err
|
||||
}
|
||||
|
||||
// applyInstallConfigCustomisations modifies the InstallConfig and creates
|
||||
// parent assets, then regenerates the InstallConfig for use for Ignition
|
||||
// generation, etc.
|
||||
func (m *manager) applyInstallConfigCustomisations(ctx context.Context, installConfig *installconfig.InstallConfig, image *releaseimage.Image) (graph.Graph, error) {
|
||||
clusterID := &installconfig.ClusterID{
|
||||
UUID: m.doc.ID,
|
||||
InfraID: infraID,
|
||||
InfraID: m.doc.OpenShiftCluster.Properties.InfraID,
|
||||
}
|
||||
|
||||
bootstrapLoggingConfig, err := bootstraplogging.GetConfig(m.env, m.doc)
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
|
||||
httpSecret := make([]byte, 64)
|
||||
_, err = rand.Read(httpSecret)
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
|
||||
imageRegistryConfig := &bootkube.AROImageRegistryConfig{
|
||||
|
@ -218,7 +199,7 @@ func (m *manager) ensureGraph(ctx context.Context, installConfig *installconfig.
|
|||
for _, a := range targets.Cluster {
|
||||
err = g.Resolve(a)
|
||||
if err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -226,10 +207,22 @@ func (m *manager) ensureGraph(ctx context.Context, installConfig *installconfig.
|
|||
if m.doc.OpenShiftCluster.Properties.NetworkProfile.MTUSize == api.MTU3900 {
|
||||
m.log.Printf("applying feature flag %s", api.FeatureFlagMTU3900)
|
||||
if err = m.overrideEthernetMTU(g); err != nil {
|
||||
return err
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return g, nil
|
||||
}
|
||||
|
||||
func (m *manager) persistGraph(ctx context.Context, g graph.Graph) error {
|
||||
resourceGroup := stringutils.LastTokenByte(m.doc.OpenShiftCluster.Properties.ClusterProfile.ResourceGroupID, '/')
|
||||
clusterStorageAccountName := "cluster" + m.doc.OpenShiftCluster.Properties.StorageSuffix
|
||||
|
||||
exists, err := m.graph.Exists(ctx, resourceGroup, clusterStorageAccountName)
|
||||
if err != nil || exists {
|
||||
return err
|
||||
}
|
||||
|
||||
// the graph is quite big, so we store it in a storage account instead of in cosmosdb
|
||||
return m.graph.Save(ctx, resourceGroup, clusterStorageAccountName, g)
|
||||
}
|
||||
|
@ -300,3 +293,29 @@ func (m *manager) setMasterSubnetPolicies(ctx context.Context) error {
|
|||
|
||||
return m.subnet.CreateOrUpdate(ctx, m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID, s)
|
||||
}
|
||||
|
||||
// generateInfraID take base and returns a ID that
|
||||
// - is of length maxLen
|
||||
// - contains randomLen random bytes
|
||||
// - only contains `alphanum` or `-`
|
||||
// see openshift/installer/pkg/asset/installconfig/clusterid.go for original implementation
|
||||
func generateInfraID(base string, maxLen int, randomLen int) string {
|
||||
maxBaseLen := maxLen - (randomLen + 1)
|
||||
|
||||
// replace all characters that are not `alphanum` or `-` with `-`
|
||||
re := regexp.MustCompile("[^A-Za-z0-9-]")
|
||||
base = re.ReplaceAllString(base, "-")
|
||||
|
||||
// replace all multiple dashes in a sequence with single one.
|
||||
re = regexp.MustCompile(`-{2,}`)
|
||||
base = re.ReplaceAllString(base, "-")
|
||||
|
||||
// truncate to maxBaseLen
|
||||
if len(base) > maxBaseLen {
|
||||
base = base[:maxBaseLen]
|
||||
}
|
||||
base = strings.TrimRight(base, "-")
|
||||
|
||||
// add random chars to the end to randomize
|
||||
return fmt.Sprintf("%s-%s", base, utilrand.String(randomLen))
|
||||
}
|
||||
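The added generateInfraID helper sanitises a base name, collapses dash runs, truncates, and appends a random suffix. A minimal standalone sketch of the same steps, with the random suffix passed in as a parameter (a stand-in for the real utilrand.String call) so the result is deterministic:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// generateInfraIDSketch reproduces the sanitisation steps from the diff.
// The suffix parameter stands in for utilrand.String(randomLen).
func generateInfraIDSketch(base string, maxLen, randomLen int, suffix string) string {
	maxBaseLen := maxLen - (randomLen + 1)

	// replace all characters that are not alphanumeric or `-` with `-`
	base = regexp.MustCompile(`[^A-Za-z0-9-]`).ReplaceAllString(base, "-")

	// collapse runs of dashes into a single dash
	base = regexp.MustCompile(`-{2,}`).ReplaceAllString(base, "-")

	// truncate to maxBaseLen and drop any trailing dash
	if len(base) > maxBaseLen {
		base = base[:maxBaseLen]
	}
	base = strings.TrimRight(base, "-")

	return fmt.Sprintf("%s-%s", base, suffix)
}

func main() {
	// "my_cluster.name" -> "my-cluster-name" plus the suffix
	fmt.Println(generateInfraIDSketch("my_cluster.name", 27, 5, "ab1cd"))
}
```

The truncate-then-TrimRight order matters: truncation can leave a trailing `-`, which would otherwise produce a double dash before the random suffix.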

@@ -10,7 +10,6 @@ import (
 	mgmtauthorization "github.com/Azure/azure-sdk-for-go/services/preview/authorization/mgmt/2018-09-01-preview/authorization"
 	mgmtstorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2019-06-01/storage"
 	"github.com/Azure/go-autorest/autorest/to"
-	"github.com/openshift/installer/pkg/asset/installconfig"
 
 	"github.com/Azure/ARO-RP/pkg/api"
 	"github.com/Azure/ARO-RP/pkg/util/arm"
@@ -80,11 +79,11 @@ func (m *manager) clusterServicePrincipalRBAC() *arm.Resource {
 func (m *manager) storageAccount(name, region string, encrypted bool) *arm.Resource {
 	virtualNetworkRules := []mgmtstorage.VirtualNetworkRule{
 		{
-			VirtualNetworkResourceID: to.StringPtr(m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID),
+			VirtualNetworkResourceID: &m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID,
			Action:                   mgmtstorage.Allow,
 		},
 		{
-			VirtualNetworkResourceID: to.StringPtr(m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].SubnetID),
+			VirtualNetworkResourceID: &m.doc.OpenShiftCluster.Properties.WorkerProfiles[0].SubnetID,
			Action:                   mgmtstorage.Allow,
 		},
 		{
@@ -177,7 +176,7 @@ func (m *manager) storageAccountBlobContainer(storageAccountName, name string) *
 	}
 }
 
-func (m *manager) networkPrivateLinkService(installConfig *installconfig.InstallConfig) *arm.Resource {
+func (m *manager) networkPrivateLinkService(azureRegion string) *arm.Resource {
 	return &arm.Resource{
 		Resource: &mgmtnetwork.PrivateLinkService{
 			PrivateLinkServiceProperties: &mgmtnetwork.PrivateLinkServiceProperties{
@@ -190,7 +189,7 @@ func (m *manager) networkPrivateLinkService(installConfig *installconfig.Install
 					{
 						PrivateLinkServiceIPConfigurationProperties: &mgmtnetwork.PrivateLinkServiceIPConfigurationProperties{
 							Subnet: &mgmtnetwork.Subnet{
-								ID: to.StringPtr(m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID),
+								ID: &m.doc.OpenShiftCluster.Properties.MasterProfile.SubnetID,
 							},
 						},
 						Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID + "-pls-nic"),
@@ -209,7 +208,7 @@ func (m *manager) networkPrivateLinkService(installConfig *installconfig.Install
 			},
 			Name:     to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID + "-pls"),
 			Type:     to.StringPtr("Microsoft.Network/privateLinkServices"),
-			Location: &installConfig.Config.Azure.Region,
+			Location: &azureRegion,
 		},
 		APIVersion: azureclient.APIVersion("Microsoft.Network"),
 		DependsOn: []string{
@@ -245,7 +244,7 @@ func (m *manager) networkPrivateEndpoint() *arm.Resource {
 	}
 }
 
-func (m *manager) networkPublicIPAddress(installConfig *installconfig.InstallConfig, name string) *arm.Resource {
+func (m *manager) networkPublicIPAddress(azureRegion string, name string) *arm.Resource {
 	return &arm.Resource{
 		Resource: &mgmtnetwork.PublicIPAddress{
 			Sku: &mgmtnetwork.PublicIPAddressSku{
@@ -256,13 +255,13 @@ func (m *manager) networkPublicIPAddress(installConfig *installconfig.InstallCon
 			},
 			Name:     &name,
 			Type:     to.StringPtr("Microsoft.Network/publicIPAddresses"),
-			Location: &installConfig.Config.Azure.Region,
+			Location: &azureRegion,
 		},
 		APIVersion: azureclient.APIVersion("Microsoft.Network"),
 	}
 }
 
-func (m *manager) networkInternalLoadBalancer(installConfig *installconfig.InstallConfig) *arm.Resource {
+func (m *manager) networkInternalLoadBalancer(azureRegion string) *arm.Resource {
 	return &arm.Resource{
 		Resource: &mgmtnetwork.LoadBalancer{
 			Sku: &mgmtnetwork.LoadBalancerSku{
@@ -282,7 +281,7 @@ func (m *manager) networkInternalLoadBalancer(installConfig *installconfig.Insta
 				},
 				BackendAddressPools: &[]mgmtnetwork.BackendAddressPool{
 					{
-						Name: to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID),
+						Name: &m.doc.OpenShiftCluster.Properties.InfraID,
 					},
 					{
 						Name: to.StringPtr("ssh-0"),
@@ -429,13 +428,13 @@ func (m *manager) networkInternalLoadBalancer(installConfig *installconfig.Insta
 			},
 			Name:     to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID + "-internal"),
 			Type:     to.StringPtr("Microsoft.Network/loadBalancers"),
-			Location: &installConfig.Config.Azure.Region,
+			Location: &azureRegion,
 		},
 		APIVersion: azureclient.APIVersion("Microsoft.Network"),
 	}
 }
 
-func (m *manager) networkPublicLoadBalancer(installConfig *installconfig.InstallConfig) *arm.Resource {
+func (m *manager) networkPublicLoadBalancer(azureRegion string) *arm.Resource {
 	lb := &mgmtnetwork.LoadBalancer{
 		Sku: &mgmtnetwork.LoadBalancerSku{
 			Name: mgmtnetwork.LoadBalancerSkuNameStandard,
@@ -476,9 +475,9 @@ func (m *manager) networkPublicLoadBalancer(installConfig *installconfig.Install
 				},
 			},
 		},
-		Name:     to.StringPtr(m.doc.OpenShiftCluster.Properties.InfraID),
+		Name:     &m.doc.OpenShiftCluster.Properties.InfraID,
		Type:     to.StringPtr("Microsoft.Network/loadBalancers"),
-		Location: &installConfig.Config.Azure.Region,
+		Location: &azureRegion,
 	}
 
 	if m.doc.OpenShiftCluster.Properties.APIServerProfile.Visibility == api.VisibilityPublic {
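A recurring change in these hunks replaces `to.StringPtr(x)` with `&x`. Because the arguments are addressable struct fields, taking their address directly is legal Go and drops the helper call. A small sketch of the equivalence (`stringPtr` here is a hypothetical stand-in for the autorest `to.StringPtr` helper; `profile` is an invented type):

```go
package main

import "fmt"

// stringPtr mimics the autorest to.StringPtr helper: it returns a
// pointer to a copy of its argument.
func stringPtr(s string) *string { return &s }

type profile struct{ SubnetID string }

func main() {
	p := profile{SubnetID: "/subscriptions/x/subnet"}

	a := stringPtr(p.SubnetID) // pointer to a copy of the field's value
	b := &p.SubnetID           // pointer to the field itself

	fmt.Println(*a == *b) // both dereference to the same value
}
```

The one behavioural difference worth noting: `&p.SubnetID` aliases the field, so a later write to the field is visible through the pointer, whereas `to.StringPtr` captures a snapshot copy. That difference is harmless here because the documents are not mutated after the ARM resources are built.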

@@ -78,17 +78,18 @@ func (m *manager) enumerateUserDataSecrets(ctx context.Context) map[corev1.Secre
 }
 
 func getUserDataSecretReference(objMeta *metav1.ObjectMeta, spec *machinev1beta1.MachineSpec) (*corev1.SecretReference, error) {
-	if spec.ProviderSpec.Value == nil || objMeta == nil {
+	if spec.ProviderSpec.Value == nil {
 		return nil, nil
 	}
 
-	obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(spec.ProviderSpec.Value.Raw, nil, nil)
+	o, _, err := scheme.Codecs.UniversalDeserializer().Decode(spec.ProviderSpec.Value.Raw, nil, nil)
 	if err != nil {
 		return nil, err
 	}
-	machineProviderSpec, ok := obj.(*machinev1beta1.AzureMachineProviderSpec)
+
+	machineProviderSpec, ok := o.(*machinev1beta1.AzureMachineProviderSpec)
 	if !ok {
-		return nil, fmt.Errorf("machine %s: failed to read provider spec: %T", spec.Name, obj)
+		return nil, fmt.Errorf("failed to read provider spec: %T", o)
 	}
 
 	if machineProviderSpec.UserDataSecret == nil {

@@ -33,7 +33,7 @@ func (m *manager) ensureGatewayUpgrade(ctx context.Context) error {
 		ContentVersion: "1.0.0.0",
 		Resources:      []*arm.Resource{m.networkPrivateEndpoint()},
 	}
-	err = arm.DeployTemplate(ctx, m.log, m.deployments, resourceGroup, "gatewayprivateendpoint", t, nil)
+	err = arm.DeployTemplate(ctx, m.log, m.deployments, resourceGroup, "storage", t, nil)
 	if err != nil {
 		m.log.Print(err)
 		return nil

@@ -10,17 +10,18 @@ import (
 
 	configclient "github.com/openshift/client-go/config/clientset/versioned"
 	imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
 	operatorclient "github.com/openshift/client-go/operator/clientset/versioned"
 	samplesclient "github.com/openshift/client-go/samples/clientset/versioned"
 	securityclient "github.com/openshift/client-go/security/clientset/versioned"
 	"github.com/openshift/installer/pkg/asset/installconfig"
 	"github.com/openshift/installer/pkg/asset/releaseimage"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
 	mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
 	extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
 	"k8s.io/client-go/kubernetes"
 
 	"github.com/Azure/ARO-RP/pkg/api"
+	"github.com/Azure/ARO-RP/pkg/cluster/graph"
 	aroclient "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned"
 	"github.com/Azure/ARO-RP/pkg/operator/deploy"
 	"github.com/Azure/ARO-RP/pkg/util/restconfig"
@@ -161,12 +162,14 @@ func (m *manager) Install(ctx context.Context) error {
 	var (
 		installConfig *installconfig.InstallConfig
 		image         *releaseimage.Image
+		g             graph.Graph
 	)
 
 	steps := map[api.InstallPhase][]steps.Step{
 		api.InstallPhaseBootstrap: {
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.validateResources)),
 			steps.Action(m.ensureACRToken),
+			steps.Action(m.ensureInfraID),
 			steps.Action(m.generateSSHKey),
 			steps.Action(m.populateMTUSize),
 			steps.Action(func(ctx context.Context) error {
@@ -177,20 +180,22 @@ func (m *manager) Install(ctx context.Context) error {
 			steps.Action(m.createDNS),
 			steps.Action(m.initializeClusterSPClients), // must run before clusterSPObjectID
 			steps.Action(m.clusterSPObjectID),
-			steps.Action(func(ctx context.Context) error {
-				return m.ensureInfraID(ctx, installConfig)
-			}),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.ensureResourceGroup)),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.enableServiceEndpoints)),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.setMasterSubnetPolicies)),
-			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(func(ctx context.Context) error {
-				return m.deployStorageTemplate(ctx, installConfig)
-			})),
+			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.deployStorageTemplate)),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.updateAPIIPEarly)),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.createOrUpdateRouterIPEarly)),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.ensureGatewayCreate)),
 			steps.Action(func(ctx context.Context) error {
-				return m.ensureGraph(ctx, installConfig, image)
+				var err error
+				// Applies ARO-specific customisations to the InstallConfig
+				g, err = m.applyInstallConfigCustomisations(ctx, installConfig, image)
+				return err
+			}),
+			steps.Action(func(ctx context.Context) error {
+				// saves the graph to storage account
+				return m.persistGraph(ctx, g)
 			}),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.attachNSGs)),
 			steps.AuthorizationRefreshingAction(m.fpAuthorizer, steps.Action(m.generateKubeconfigs)),
@@ -296,7 +301,7 @@ func (m *manager) initializeKubernetesClients(ctx context.Context) error {
 		return err
 	}
 
-	m.maocli, err = maoclient.NewForConfig(restConfig)
+	m.maocli, err = machineclient.NewForConfig(restConfig)
 	if err != nil {
 		return err
 	}
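The install step list threads state between steps by declaring shared variables (installConfig, image, g) up front and letting step closures capture them: one step assigns g, a later step reads it. A reduced sketch of that pattern, assuming a plain `[]func() error` in place of the real steps.Step machinery:

```go
package main

import "fmt"

// runSteps executes installer-style steps in order, stopping at the
// first error, like the steps runner in the real code.
func runSteps(steps []func() error) error {
	for _, step := range steps {
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// shared state, populated by one step and read by a later one,
	// mirroring how installConfig, image and g are threaded through
	var g map[string]string

	err := runSteps([]func() error{
		func() error { // stands in for applyInstallConfigCustomisations
			g = map[string]string{"clusterID": "abc12"}
			return nil
		},
		func() error { // stands in for persistGraph
			fmt.Printf("persisting graph with %d assets\n", len(g))
			return nil
		},
	})
	if err != nil {
		panic(err)
	}
}
```

This is also why the diff assigns `g, err = ...` rather than `g, err := ...` inside the closure: `:=` would shadow the outer g and the persist step would see a nil graph.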

@@ -5,6 +5,7 @@ package cluster
 
 import (
 	"context"
+	"fmt"
 	"strings"
 
 	mgmtnetwork "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2020-08-01/network"
@@ -24,7 +25,11 @@ func (m *manager) enableServiceEndpoints(ctx context.Context) error {
 	}
 
 	for _, wp := range m.doc.OpenShiftCluster.Properties.WorkerProfiles {
-		subnets = append(subnets, wp.SubnetID)
+		if len(wp.SubnetID) > 0 {
+			subnets = append(subnets, wp.SubnetID)
+		} else {
+			return fmt.Errorf("WorkerProfile '%s' has no SubnetID; check that the corresponding MachineSet is valid", wp.Name)
+		}
 	}
 
 	for _, subnetId := range subnets {
@@ -81,7 +86,7 @@ func (m *manager) migrateStorageAccounts(ctx context.Context) error {
 		},
 	}
 
-	return m.deployARMTemplate(ctx, resourceGroup, "storage", t, nil)
+	return arm.DeployTemplate(ctx, m.log, m.deployments, resourceGroup, "storage", t, nil)
 }
 
 func (m *manager) populateRegistryStorageAccountName(ctx context.Context) error {
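The new guard fails fast on a worker profile with an empty SubnetID instead of silently appending it to the service-endpoint list. The shape of that check, reduced to a standalone sketch (`workerProfile` and `collectSubnets` are invented names for illustration):

```go
package main

import "fmt"

type workerProfile struct{ Name, SubnetID string }

// collectSubnets mirrors the patched loop: gather worker subnet IDs,
// returning an error for any profile whose SubnetID is empty.
func collectSubnets(wps []workerProfile) ([]string, error) {
	var subnets []string
	for _, wp := range wps {
		if len(wp.SubnetID) == 0 {
			return nil, fmt.Errorf("WorkerProfile '%s' has no SubnetID; check that the corresponding MachineSet is valid", wp.Name)
		}
		subnets = append(subnets, wp.SubnetID)
	}
	return subnets, nil
}

func main() {
	_, err := collectSubnets([]workerProfile{{Name: "worker", SubnetID: ""}})
	fmt.Println(err != nil) // empty SubnetID is rejected
}
```

Failing early here surfaces a broken MachineSet as a clear error rather than as a confusing downstream failure when the empty subnet ID reaches the Azure API.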

@@ -190,6 +190,7 @@
 						},
 						"osDisk": {
 							"createOption": "FromImage",
+							"diskSizeGB": 200,
 							"managedDisk": {
 								"storageAccountType": "Premium_LRS"
 							}
@@ -235,7 +236,7 @@
 					"autoUpgradeMinorVersion": true,
 					"settings": {},
 					"protectedSettings": {
-						"script": "[base64(concat(base64ToString('c2V0IC1lCgo='),'CIAZPTOKEN=''',parameters('ciAzpToken'),'''\n','CIPOOLNAME=''',parameters('ciPoolName'),'''\n','\n',base64ToString('CiMgSGFjayAtIHdhaXQgb24gY3JlYXRlIGJlY2F1c2UgdGhlIFdBTGludXhBZ2VudCBzb21ldGltZXMgY29uZmxpY3RzIHdpdGggdGhlIHl1bSB1cGRhdGUgLXkgYmVsb3cKc2xlZXAgNjAKCmZvciBhdHRlbXB0IGluIHsxLi41fTsgZG8KICB5dW0gLXkgdXBkYXRlIC14IFdBTGludXhBZ2VudCAmJiBicmVhawogIGlmIFtbICR7YXR0ZW1wdH0gLWx0IDUgXV07IHRoZW4gc2xlZXAgMTA7IGVsc2UgZXhpdCAxOyBmaQpkb25lCgpsdmV4dGVuZCAtbCArNTAlRlJFRSAvZGV2L3Jvb3R2Zy9ob21lbHYKeGZzX2dyb3dmcyAvaG9tZQoKbHZleHRlbmQgLWwgKzUwJUZSRUUgL2Rldi9yb290dmcvdG1wbHYKeGZzX2dyb3dmcyAvdG1wCgpsdmV4dGVuZCAtbCArMTAwJUZSRUUgL2Rldi9yb290dmcvdmFybHYKeGZzX2dyb3dmcyAvdmFyCgpycG0gLS1pbXBvcnQgaHR0cHM6Ly9kbC5mZWRvcmFwcm9qZWN0Lm9yZy9wdWIvZXBlbC9SUE0tR1BHLUtFWS1FUEVMLTgKcnBtIC0taW1wb3J0IGh0dHBzOi8vcGFja2FnZXMubWljcm9zb2Z0LmNvbS9rZXlzL21pY3Jvc29mdC5hc2MKCnl1bSAteSBpbnN0YWxsIGh0dHBzOi8vZGwuZmVkb3JhcHJvamVjdC5vcmcvcHViL2VwZWwvZXBlbC1yZWxlYXNlLWxhdGVzdC04Lm5vYXJjaC5ycG0KCmNhdCA+L2V0Yy95dW0ucmVwb3MuZC9henVyZS5yZXBvIDw8J0VPRicKW2F6dXJlLWNsaV0KbmFtZT1henVyZS1jbGkKYmFzZXVybD1odHRwczovL3BhY2thZ2VzLm1pY3Jvc29mdC5jb20veXVtcmVwb3MvYXp1cmUtY2xpCmVuYWJsZWQ9eWVzCmdwZ2NoZWNrPXllcwpFT0YKCnl1bSAteSBpbnN0YWxsIGF6dXJlLWNsaSBwb2RtYW4gcG9kbWFuLWRvY2tlciBqcSBnY2MgZ3BnbWUtZGV2ZWwgbGliYXNzdWFuLWRldmVsIGdpdCBtYWtlIHRtcHdhdGNoIHB5dGhvbjMtZGV2ZWwgZ28tdG9vbHNldC0xLjE2LjEyLTEubW9kdWxlK2VsOC41LjArMTM2MzcrOTYwYzc3NzEKCiMgU3VwcHJlc3MgZW11bGF0aW9uIG91dHB1dCBmb3IgcG9kbWFuIGluc3RlYWQgb2YgZG9ja2VyIGZvciBheiBhY3IgY29tcGF0YWJpbGl0eQpta2RpciAtcCAvZXRjL2NvbnRhaW5lcnMvCnRvdWNoIC9ldGMvY29udGFpbmVycy9ub2RvY2tlcgoKVlNUU19BR0VOVF9WRVJTSU9OPTIuMTkzLjEKbWtkaXIgL2hvbWUvY2xvdWQtdXNlci9hZ2VudApwdXNoZCAvaG9tZS9jbG91ZC11c2VyL2FnZW50CmN1cmwgaHR0cHM6Ly92c3RzYWdlbnRwYWNrYWdlLmF6dXJlZWRnZS5uZXQvYWdlbnQvJHtWU1RTX0FHRU5UX1ZFUlNJT059L3ZzdHMtYWdlbnQtbGludXgteDY0LSR7VlNUU19BR0VOVF9WRVJTSU9OfS50YXIuZ3ogfCB0YXIgLXh6CmNob3duIC1SIGNsb3VkLXVzZXI6Y2xvdWQtdXNlciAuCgouL2Jpbi9pbnN0YWxsZGVwZW5kZW5jaWVzLnNoCnN1ZG8gLXUgY2xvdWQtdXNlciAuL2NvbmZpZy5zaCAtLXVuYXR0ZW5kZWQgLS11cmwgaHR0cHM6Ly9kZXYuYXp1cmUuY29tL21zYXp1cmUgLS1hdXRoIHBhdCAtLXRva2VuICIkQ0lBWlBUT0tFTiIgLS1wb29sICIkQ0lQT09MTkFNRSIgLS1hZ2VudCAiQVJPLVJIRUwtJEhPU1ROQU1FIiAtLXJlcGxhY2UKLi9zdmMuc2ggaW5zdGFsbCBjbG91ZC11c2VyCnBvcGQKCmNhdCA+L2hvbWUvY2xvdWQtdXNlci9hZ2VudC8ucGF0aCA8PCdFT0YnCi91c3IvbG9jYWwvYmluOi91c3IvYmluOi91c3IvbG9jYWwvc2JpbjovdXNyL3NiaW46L2hvbWUvY2xvdWQtdXNlci8ubG9jYWwvYmluOi9ob21lL2Nsb3VkLXVzZXIvYmluCkVPRgoKIyBIQUNLIGZvciBYREdfUlVOVElNRV9ESVI6IGh0dHBzOi8vZ2l0aHViLmNvbS9jb250YWluZXJzL3BvZG1hbi9pc3N1ZXMvNDI3CmNhdCA+L2hvbWUvY2xvdWQtdXNlci9hZ2VudC8uZW52IDw8J0VPRicKZ28tMS4xNj10cnVlCkdPTEFOR19GSVBTPTEKWERHX1JVTlRJTUVfRElSPS9ydW4vdXNlci8xMDAwCkVPRgoKY2F0ID4vZXRjL2Nyb24uaG91cmx5L3RtcHdhdGNoIDw8J0VPRicKIyEvYmluL2Jhc2gKCmV4ZWMgL3NiaW4vdG1wd2F0Y2ggMjRoIC90bXAKRU9GCmNobW9kICt4IC9ldGMvY3Jvbi5ob3VybHkvdG1wd2F0Y2gKCiMgSEFDSyAtIHBvZG1hbiBkb2Vzbid0IGFsd2F5cyB0ZXJtaW5hdGUgb3IgY2xlYW4gdXAgaXQncyBwYXVzZS5waWQgZmlsZSBjYXVzaW5nCiMgJ2Nhbm5vdCByZWV4ZWMgZXJyb3JzJyBzbyBhdHRlbXB0IHRvIGNsZWFuIGl0IHVwIGV2ZXJ5IG1pbnV0ZSB0byBrZWVwIHBpcGVsaW5lcyBydW5uaW5nCiMgc21vb3RobHkKY2F0ID4vdXNyL2xvY2FsL2Jpbi9maXgtcG9kbWFuLXBhdXNlLnNoIDw8J0VPRicKIyEvYmluL2Jhc2gKClBBVVNFX0ZJTEU9Jy90bXAvcG9kbWFuLXJ1bi0xMDAwL2xpYnBvZC90bXAvcGF1c2UucGlkJwoKaWYgWyAtZiAiJHtQQVVTRV9GSUxFfSIgXTsgdGhlbgoJUElEPSQoY2F0ICR7UEFVU0VfRklMRX0pCglpZiAhIHBzIC1wICRQSUQgPiAvZGV2L251bGw7IHRoZW4KCQlybSAkUEFVU0VfRklMRQoJZmkKZmkKRU9GCmNobW9kICt4IC91c3IvbG9jYWwvYmluL2ZpeC1wb2RtYW4tcGF1c2Uuc2gKCiMgSEFDSyAtIC90bXAgd2lsbCBmaWxsIHVwIGNhdXNpbmcgYnVpbGQgZmFpbHVyZXMKIyBkZWxldGUgYW55dGhpbmcgbm90IGFjY2Vzc2VkIHdpdGhpbiAyIGRheXMKY2F0ID4vdXNyL2xvY2FsL2Jpbi9jbGVhbi10bXAuc2ggPDwnRU9GJwojIS9iaW4vYmFzaAoKZmluZCAvdG1wIC10eXBlIGYgXCggISAtdXNlciByb290IFwpIC1hdGltZSArMiAtZGVsZXRlCgpFT0YKY2htb2QgK3ggL3Vzci9sb2NhbC9iaW4vY2xlYW4tdG1wLnNoCgplY2hvICIwIDAgKi8xICogKiAvdXNyL2xvY2FsL2Jpbi9jbGVhbi10bXAuc2giID4+IGNyb24KZWNobyAiKiAqICogKiAqIC91c3IvbG9jYWwvYmluL2ZpeC1wb2RtYW4tcGF1c2Uuc2giID4+IGNyb24KCiMgSEFDSyAtIGh0dHBzOi8vZ2l0aHViLmNvbS9jb250YWluZXJzL3BvZG1hbi9pc3N1ZXMvOTAwMgplY2hvICJAcmVib290IGxvZ2luY3RsIGVuYWJsZS1saW5nZXIgY2xvdWQtdXNlciIgPj4gY3JvbgoKY3JvbnRhYiBjcm9uCnJtIGNyb24KCihzbGVlcCAzMDsgcmVib290KSAmCg==')))]"
+						"script": "[base64(concat(base64ToString('c2V0IC1lCgo='),'CIAZPTOKEN=''',parameters('ciAzpToken'),'''\n','CIPOOLNAME=''',parameters('ciPoolName'),'''\n','\n',base64ToString('CiMgSGFjayAtIHdhaXQgb24gY3JlYXRlIGJlY2F1c2UgdGhlIFdBTGludXhBZ2VudCBzb21ldGltZXMgY29uZmxpY3RzIHdpdGggdGhlIHl1bSB1cGRhdGUgLXkgYmVsb3cKc2xlZXAgNjAKCmZvciBhdHRlbXB0IGluIHsxLi41fTsgZG8KICB5dW0gLXkgdXBkYXRlIC14IFdBTGludXhBZ2VudCAmJiBicmVhawogIGlmIFtbICR7YXR0ZW1wdH0gLWx0IDUgXV07IHRoZW4gc2xlZXAgMTA7IGVsc2UgZXhpdCAxOyBmaQpkb25lCgpsdmV4dGVuZCAtbCArNTAlRlJFRSAvZGV2L3Jvb3R2Zy9ob21lbHYKeGZzX2dyb3dmcyAvaG9tZQoKbHZleHRlbmQgLWwgKzUwJUZSRUUgL2Rldi9yb290dmcvdG1wbHYKeGZzX2dyb3dmcyAvdG1wCgpsdmV4dGVuZCAtbCArMTAwJUZSRUUgL2Rldi9yb290dmcvdmFybHYKeGZzX2dyb3dmcyAvdmFyCgpycG0gLS1pbXBvcnQgaHR0cHM6Ly9kbC5mZWRvcmFwcm9qZWN0Lm9yZy9wdWIvZXBlbC9SUE0tR1BHLUtFWS1FUEVMLTgKcnBtIC0taW1wb3J0IGh0dHBzOi8vcGFja2FnZXMubWljcm9zb2Z0LmNvbS9rZXlzL21pY3Jvc29mdC5hc2MKCnl1bSAteSBpbnN0YWxsIGh0dHBzOi8vZGwuZmVkb3JhcHJvamVjdC5vcmcvcHViL2VwZWwvZXBlbC1yZWxlYXNlLWxhdGVzdC04Lm5vYXJjaC5ycG0KCmNhdCA+L2V0Yy95dW0ucmVwb3MuZC9henVyZS5yZXBvIDw8J0VPRicKW2F6dXJlLWNsaV0KbmFtZT1henVyZS1jbGkKYmFzZXVybD1odHRwczovL3BhY2thZ2VzLm1pY3Jvc29mdC5jb20veXVtcmVwb3MvYXp1cmUtY2xpCmVuYWJsZWQ9eWVzCmdwZ2NoZWNrPXllcwpFT0YKCnl1bSAteSBpbnN0YWxsIGF6dXJlLWNsaSBwb2RtYW4gcG9kbWFuLWRvY2tlciBqcSBnY2MgZ3BnbWUtZGV2ZWwgbGliYXNzdWFuLWRldmVsIGdpdCBtYWtlIHRtcHdhdGNoIHB5dGhvbjMtZGV2ZWwgaHRvcCBnby10b29sc2V0LTEuMTcuNy0xLm1vZHVsZStlbDguNi4wKzE0Mjk3KzMyYTE1ZTE5CgojIFN1cHByZXNzIGVtdWxhdGlvbiBvdXRwdXQgZm9yIHBvZG1hbiBpbnN0ZWFkIG9mIGRvY2tlciBmb3IgYXogYWNyIGNvbXBhdGFiaWxpdHkKbWtkaXIgLXAgL2V0Yy9jb250YWluZXJzLwp0b3VjaCAvZXRjL2NvbnRhaW5lcnMvbm9kb2NrZXIKClZTVFNfQUdFTlRfVkVSU0lPTj0yLjE5My4xCm1rZGlyIC9ob21lL2Nsb3VkLXVzZXIvYWdlbnQKcHVzaGQgL2hvbWUvY2xvdWQtdXNlci9hZ2VudApjdXJsIC1zIGh0dHBzOi8vdnN0c2FnZW50cGFja2FnZS5henVyZWVkZ2UubmV0L2FnZW50LyR7VlNUU19BR0VOVF9WRVJTSU9OfS92c3RzLWFnZW50LWxpbnV4LXg2NC0ke1ZTVFNfQUdFTlRfVkVSU0lPTn0udGFyLmd6IHwgdGFyIC14egpjaG93biAtUiBjbG91ZC11c2VyOmNsb3VkLXVzZXIgLgoKLi9iaW4vaW5zdGFsbGRlcGVuZGVuY2llcy5zaApzdWRvIC11IGNsb3VkLXVzZXIgLi9jb25maWcuc2ggLS11bmF0dGVuZGVkIC0tdXJsIGh0dHBzOi8vZGV2LmF6dXJlLmNvbS9tc2F6dXJlIC0tYXV0aCBwYXQgLS10b2tlbiAiJENJQVpQVE9LRU4iIC0tcG9vbCAiJENJUE9PTE5BTUUiIC0tYWdlbnQgIkFSTy1SSEVMLSRIT1NUTkFNRSIgLS1yZXBsYWNlCi4vc3ZjLnNoIGluc3RhbGwgY2xvdWQtdXNlcgpwb3BkCgpjYXQgPi9ob21lL2Nsb3VkLXVzZXIvYWdlbnQvLnBhdGggPDwnRU9GJwovdXNyL2xvY2FsL2JpbjovdXNyL2JpbjovdXNyL2xvY2FsL3NiaW46L3Vzci9zYmluOi9ob21lL2Nsb3VkLXVzZXIvLmxvY2FsL2JpbjovaG9tZS9jbG91ZC11c2VyL2JpbgpFT0YKCiMgU2V0IHRoZSBhZ2VudCdzICJTeXN0ZW0gY2FwYWJpbGl0aWVzIiBmb3IgdGVzdHMgKGdvLTEuMTcgYW5kIEdPTEFOR19GSVBTKSBpbiB0aGUgYWdlbnQncyAuZW52IGZpbGUKIyBhbmQgYWRkIGEgSEFDSyBmb3IgWERHX1JVTlRJTUVfRElSOiBodHRwczovL2dpdGh1Yi5jb20vY29udGFpbmVycy9wb2RtYW4vaXNzdWVzLzQyNwpjYXQgPi9ob21lL2Nsb3VkLXVzZXIvYWdlbnQvLmVudiA8PCdFT0YnCmdvLTEuMTc9dHJ1ZQpHT0xBTkdfRklQUz0xClhER19SVU5USU1FX0RJUj0vcnVuL3VzZXIvMTAwMApFT0YKCmNhdCA+L2V0Yy9jcm9uLmhvdXJseS90bXB3YXRjaCA8PCdFT0YnCiMhL2Jpbi9iYXNoCgpleGVjIC9zYmluL3RtcHdhdGNoIDI0aCAvdG1wCkVPRgpjaG1vZCAreCAvZXRjL2Nyb24uaG91cmx5L3RtcHdhdGNoCgojIEhBQ0sgLSBwb2RtYW4gZG9lc24ndCBhbHdheXMgdGVybWluYXRlIG9yIGNsZWFuIHVwIGl0J3MgcGF1c2UucGlkIGZpbGUgY2F1c2luZwojICdjYW5ub3QgcmVleGVjIGVycm9ycycgc28gYXR0ZW1wdCB0byBjbGVhbiBpdCB1cCBldmVyeSBtaW51dGUgdG8ga2VlcCBwaXBlbGluZXMgcnVubmluZwojIHNtb290aGx5CmNhdCA+L3Vzci9sb2NhbC9iaW4vZml4LXBvZG1hbi1wYXVzZS5zaCA8PCdFT0YnCiMhL2Jpbi9iYXNoCgpQQVVTRV9GSUxFPScvdG1wL3BvZG1hbi1ydW4tMTAwMC9saWJwb2QvdG1wL3BhdXNlLnBpZCcKCmlmIFsgLWYgIiR7UEFVU0VfRklMRX0iIF07IHRoZW4KCVBJRD0kKGNhdCAke1BBVVNFX0ZJTEV9KQoJaWYgISBwcyAtcCAkUElEID4gL2Rldi9udWxsOyB0aGVuCgkJcm0gJFBBVVNFX0ZJTEUKCWZpCmZpCkVPRgpjaG1vZCAreCAvdXNyL2xvY2FsL2Jpbi9maXgtcG9kbWFuLXBhdXNlLnNoCgojIEhBQ0sgLSAvdG1wIHdpbGwgZmlsbCB1cCBjYXVzaW5nIGJ1aWxkIGZhaWx1cmVzCiMgZGVsZXRlIGFueXRoaW5nIG5vdCBhY2Nlc3NlZCB3aXRoaW4gMiBkYXlzCmNhdCA+L3Vzci9sb2NhbC9iaW4vY2xlYW4tdG1wLnNoIDw8J0VPRicKIyEvYmluL2Jhc2gKCmZpbmQgL3RtcCAtdHlwZSBmIFwoICEgLXVzZXIgcm9vdCBcKSAtYXRpbWUgKzIgLWRlbGV0ZQoKRU9GCmNobW9kICt4IC91c3IvbG9jYWwvYmluL2NsZWFuLXRtcC5zaAoKZWNobyAiMCAwICovMSAqICogL3Vzci9sb2NhbC9iaW4vY2xlYW4tdG1wLnNoIiA+PiBjcm9uCmVjaG8gIiogKiAqICogKiAvdXNyL2xvY2FsL2Jpbi9maXgtcG9kbWFuLXBhdXNlLnNoIiA+PiBjcm9uCgojIEhBQ0sgLSBodHRwczovL2dpdGh1Yi5jb20vY29udGFpbmVycy9wb2RtYW4vaXNzdWVzLzkwMDIKZWNobyAiQHJlYm9vdCBsb2dpbmN0bCBlbmFibGUtbGluZ2VyIGNsb3VkLXVzZXIiID4+IGNyb24KCmNyb250YWIgY3JvbgpybSBjcm9uCgooc2xlZXAgMzA7IHJlYm9vdCkgJgo=')))]"
 					}
 				}
 			}

Diffs for 3 files are not shown because one or more lines are too long.

@@ -331,7 +331,7 @@ enabled=yes
 gpgcheck=yes
 EOF
 
-yum -y install azure-cli podman podman-docker jq gcc gpgme-devel libassuan-devel git make tmpwatch python3-devel go-toolset-1.16.12-1.module+el8.5.0+13637+960c7771
+yum -y install azure-cli podman podman-docker jq gcc gpgme-devel libassuan-devel git make tmpwatch python3-devel htop go-toolset-1.17.7-1.module+el8.6.0+14297+32a15e19
 
 # Suppress emulation output for podman instead of docker for az acr compatability
 mkdir -p /etc/containers/
@@ -340,7 +340,7 @@ touch /etc/containers/nodocker
 VSTS_AGENT_VERSION=2.193.1
 mkdir /home/cloud-user/agent
 pushd /home/cloud-user/agent
-curl https://vstsagentpackage.azureedge.net/agent/${VSTS_AGENT_VERSION}/vsts-agent-linux-x64-${VSTS_AGENT_VERSION}.tar.gz | tar -xz
+curl -s https://vstsagentpackage.azureedge.net/agent/${VSTS_AGENT_VERSION}/vsts-agent-linux-x64-${VSTS_AGENT_VERSION}.tar.gz | tar -xz
 chown -R cloud-user:cloud-user .
 
 ./bin/installdependencies.sh
@@ -352,9 +352,10 @@ cat >/home/cloud-user/agent/.path <<'EOF'
 /usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/cloud-user/.local/bin:/home/cloud-user/bin
 EOF
 
-# HACK for XDG_RUNTIME_DIR: https://github.com/containers/podman/issues/427
+# Set the agent's "System capabilities" for tests (go-1.17 and GOLANG_FIPS) in the agent's .env file
+# and add a HACK for XDG_RUNTIME_DIR: https://github.com/containers/podman/issues/427
 cat >/home/cloud-user/agent/.env <<'EOF'
-go-1.16=true
+go-1.17=true
 GOLANG_FIPS=1
 XDG_RUNTIME_DIR=/run/user/1000
 EOF
@@ -448,6 +449,7 @@ rm cron
 					ManagedDisk: &mgmtcompute.VirtualMachineScaleSetManagedDiskParameters{
 						StorageAccountType: mgmtcompute.StorageAccountTypesPremiumLRS,
 					},
+					DiskSizeGB: to.Int32Ptr(200),
 				},
 			},
 			NetworkProfile: &mgmtcompute.VirtualMachineScaleSetNetworkProfile{

@@ -237,6 +237,13 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
 	s.Methods(http.MethodPost).HandlerFunc(f.postAdminKubernetesObjects).Name("postAdminKubernetesObjects")
 	s.Methods(http.MethodDelete).HandlerFunc(f.deleteAdminKubernetesObjects).Name("deleteAdminKubernetesObjects")
 
+	// Pod logs
+	s = r.
+		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/kubernetespodlogs").
+		Subrouter()
+
+	s.Methods(http.MethodGet).HandlerFunc(f.getAdminKubernetesPodLogs).Name("getAdminKubernetesPodLogs")
+
 	s = r.
 		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/resources").
 		Subrouter()
@@ -255,6 +262,18 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
 
 	s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterRedeployVM).Name("postAdminOpenShiftClusterRedeployVM")
 
+	s = r.
+		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/stopvm").
+		Subrouter()
+
+	s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterStopVM).Name("postAdminOpenShiftClusterStopVM")
+
+	s = r.
+		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/startvm").
+		Subrouter()
+
+	s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterStartVM).Name("postAdminOpenShiftClusterStartVM")
+
 	s = r.
 		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/upgrade").
 		Subrouter()
@@ -267,6 +286,18 @@ func (f *frontend) authenticatedRoutes(r *mux.Router) {
 
 	s.Methods(http.MethodGet).HandlerFunc(f.getAdminOpenShiftClusters).Name("getAdminOpenShiftClusters")
 
+	s = r.
+		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/skus").
+		Subrouter()
+
+	s.Methods(http.MethodGet).HandlerFunc(f.getAdminOpenShiftClusterVMResizeOptions).Name("getAdminOpenShiftClusterVMResizeOptions")
+
+	s = r.
+		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/resize").
+		Subrouter()
+
+	s.Methods(http.MethodPost).HandlerFunc(f.postAdminOpenShiftClusterVMResize).Name("postAdminOpenShiftClusterVMResize")
+
 	s = r.
 		Path("/admin/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}/reconcilefailednic").
 		Subrouter()
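The new admin routes all follow one URL shape: an ARM-style resource path followed by an action segment (kubernetespodlogs, stopvm, startvm, skus, resize). gorilla/mux handles the `{param}` templates in the real frontend; the regexp below is only an illustrative sketch of the path shape and the parameters it captures (the sample provider and type values are assumptions):

```go
package main

import (
	"fmt"
	"regexp"
)

// rxPodLogs loosely mirrors the mux pattern registered for the pod-logs
// route, capturing each path parameter by name.
var rxPodLogs = regexp.MustCompile(
	`^/admin/subscriptions/(?P<subscriptionId>[^/]+)/resourcegroups/(?P<resourceGroupName>[^/]+)/providers/(?P<resourceProviderNamespace>[^/]+)/(?P<resourceType>[^/]+)/(?P<resourceName>[^/]+)/kubernetespodlogs$`)

func main() {
	m := rxPodLogs.FindStringSubmatch(
		"/admin/subscriptions/sub1/resourcegroups/rg1/providers/Microsoft.RedHatOpenShift/openShiftClusters/cluster1/kubernetespodlogs")
	fmt.Println(m[1], m[5]) // subscriptionId and resourceName
}
```

Keeping the action as the final fixed segment is what lets each route get its own Subrouter while the preceding parameterised segments stay identical across all admin endpoints.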
|
|
@@ -79,7 +79,7 @@ func (f *frontend) validateOpenShiftUniqueKey(ctx context.Context, doc *api.Open

var rxKubernetesString = regexp.MustCompile(`(?i)^[-a-z0-9.]{0,255}$`)

func validateAdminKubernetesObjectsNonCustomer(method, groupKind, namespace, name string) error {
-	if !utilnamespace.IsOpenShift(namespace) {
+	if !utilnamespace.IsOpenShiftNamespace(namespace) {
		return api.NewCloudError(http.StatusForbidden, api.CloudErrorCodeForbidden, "", "Access to the provided namespace '%s' is forbidden.", namespace)
	}

@@ -129,6 +129,25 @@ func validateAdminVMName(vmName string) error {
	return nil
}

+func validateAdminKubernetesPodLogs(namespace, podName, containerName string) error {
+	if podName == "" || !rxKubernetesString.MatchString(podName) {
+		return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided pod name '%s' is invalid.", podName)
+	}
+
+	if namespace == "" || !rxKubernetesString.MatchString(namespace) {
+		return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided namespace '%s' is invalid.", namespace)
+	}
+	// Check whether the namespace is an OpenShift namespace, not a customer workload namespace.
+	if !utilnamespace.IsOpenShiftNamespace(namespace) {
+		return api.NewCloudError(http.StatusForbidden, api.CloudErrorCodeForbidden, "", "Access to the provided namespace '%s' is forbidden.", namespace)
+	}
+
+	if containerName == "" || !rxKubernetesString.MatchString(containerName) {
+		return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided container name '%s' is invalid.", containerName)
+	}
+	return nil
+}
+
// Azure resource name rules:
// https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/resource-name-rules#microsoftnetwork
var rxNetworkInterfaceName = regexp.MustCompile(`^[a-zA-Z0-9].*\w$`)

@@ -137,7 +156,13 @@ func validateNetworkInterfaceName(nicName string) error {
	if nicName == "" || !rxNetworkInterfaceName.MatchString(nicName) {
		return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided nicName '%s' is invalid.", nicName)
	}
	return nil
}

+func validateAdminVMSize(vmSize string) error {
+	if vmSize == "" {
+		return api.NewCloudError(http.StatusBadRequest, api.CloudErrorCodeInvalidParameter, "", "The provided vmSize '%s' is invalid.", vmSize)
+	}
+	return nil
+}
@@ -1,4 +1,4 @@
-package installer
+package cluster

// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.
@@ -8,6 +8,7 @@ import (
	"context"
+	"fmt"
	"net"
	"strings"
	"time"

	"github.com/sirupsen/logrus"

@@ -21,8 +22,9 @@ type statsd struct {
	log *logrus.Entry
	env env.Core

-	account   string
-	namespace string
+	account      string
+	namespace    string
+	mdmSocketEnv string

	conn net.Conn
	ch   chan *metric

@@ -31,13 +33,14 @@ type statsd struct {
}

// New returns a new metrics.Emitter
-func New(ctx context.Context, log *logrus.Entry, env env.Core, account, namespace string) metrics.Emitter {
+func New(ctx context.Context, log *logrus.Entry, env env.Core, account, namespace string, mdmSocketEnv string) metrics.Emitter {
	s := &statsd{
		log: log,
		env: env,

-		account:   account,
-		namespace: namespace,
+		account:      account,
+		namespace:    namespace,
+		mdmSocketEnv: mdmSocketEnv,

		ch: make(chan *metric, 1024),

@@ -103,13 +106,65 @@ func (s *statsd) run() {
	}
}

-func (s *statsd) dial() (err error) {
-	path := "/var/etw/mdm_statsd.socket"
-	if s.env.IsLocalDevelopmentMode() {
-		path = "mdm_statsd.socket"
-	}
-
-	s.conn, err = net.Dial("unix", path)
+func (s *statsd) parseSocketEnv(env string) (string, string, error) {
+	// Verify network:address format
+	parameters := strings.SplitN(env, ":", 2)
+	if len(parameters) != 2 {
+		return "", "", fmt.Errorf("malformed definition for the mdm statsd socket. Expecting udp:<hostname>:<port> or unix:<path-to-socket> format. Got: %q", env)
+	}
+	network := strings.ToLower(parameters[0])
+	address := parameters[1]
+	return network, address, nil
+}
+
+func (s *statsd) validateSocketDefinition(network string, address string) (bool, error) {
+	// Verify a supported protocol was provided. TCP might just work as well, but this was never tested.
+	if network != "udp" && network != "unix" {
+		return false, fmt.Errorf("unsupported protocol for the mdm statsd socket. Expecting 'udp:' or 'unix:'. Got: %q", network)
+	}
+
+	return true, nil
+}
+
+func (s *statsd) defaultSocketValues() (string, string) {
+	network := "unix"
+	address := "/var/etw/mdm_statsd.socket"
+
+	if s.env.IsLocalDevelopmentMode() {
+		address = "mdm_statsd.socket"
+	}
+
+	return network, address
+}
+
+func (s *statsd) connectionDetails() (string, string, error) {
+	// allow the default socket connection to be overwritten by an ENV variable
+	if s.mdmSocketEnv == "" {
+		network, address := s.defaultSocketValues()
+		return network, address, nil
+	}
+
+	network, address, err := s.parseSocketEnv(s.mdmSocketEnv)
+	if err != nil {
+		return "", "", err
+	}
+
+	ok, err := s.validateSocketDefinition(network, address)
+	if !ok {
+		return "", "", err
+	}
+
+	return network, address, nil
+}
+
+func (s *statsd) dial() (err error) {
+	network, address, err := s.connectionDetails()
+	if err != nil {
+		return
+	}
+
+	s.conn, err = net.Dial(network, address)

	return
}
@@ -12,7 +12,7 @@ import (
	"github.com/Azure/go-autorest/autorest/azure"
	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
	mcoclient "github.com/openshift/machine-config-operator/pkg/generated/clientset/versioned"
	"github.com/sirupsen/logrus"
	appsv1 "k8s.io/api/apps/v1"

@@ -35,7 +35,7 @@ type Monitor struct {
	restconfig *rest.Config
	cli        kubernetes.Interface
	configcli  configclient.Interface
-	maocli     maoclient.Interface
+	maocli     machineclient.Interface
	mcocli     mcoclient.Interface
	m          metrics.Emitter
	arocli     aroclient.Interface

@@ -72,7 +72,7 @@ func NewMonitor(ctx context.Context, log *logrus.Entry, restConfig *rest.Config,
		return nil, err
	}

-	maocli, err := maoclient.NewForConfig(restConfig)
+	maocli, err := machineclient.NewForConfig(restConfig)
	if err != nil {
		return nil, err
	}
@@ -9,15 +9,14 @@ import (
	"testing"

	"github.com/golang/mock/gomock"
-	machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
-	maofake "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned/fake"
+	machinev1beta1 "github.com/openshift/api/machine/v1beta1"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
+	machinefake "github.com/openshift/client-go/machine/clientset/versioned/fake"
	"github.com/sirupsen/logrus"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kruntime "k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"
-	azureproviderv1beta1 "sigs.k8s.io/cluster-api-provider-azure/pkg/apis/azureprovider/v1beta1"

	mock_metrics "github.com/Azure/ARO-RP/pkg/util/mocks/metrics"
)

@@ -25,7 +24,7 @@ import (
func TestEmitNodeConditions(t *testing.T) {
	ctx := context.Background()

-	provSpec, err := json.Marshal(azureproviderv1beta1.AzureMachineProviderSpec{})
+	provSpec, err := json.Marshal(machinev1beta1.AzureMachineProviderSpec{})
	if err != nil {
		t.Fatal(err)
	}

@@ -69,7 +68,7 @@ func TestEmitNodeConditions(t *testing.T) {
			},
		},
	})
-	maoclient := maofake.NewSimpleClientset(
+	machineclient := machinefake.NewSimpleClientset(
		&machinev1beta1.Machine{
			Spec: machinev1beta1.MachineSpec{
				ProviderSpec: machinev1beta1.ProviderSpec{

@@ -105,7 +104,7 @@ func TestEmitNodeConditions(t *testing.T) {

	mon := &Monitor{
		cli:    cli,
-		maocli: maoclient,
+		maocli: machineclient,
		m:      m,
	}

@@ -141,27 +140,27 @@ func TestEmitNodeConditions(t *testing.T) {
func TestGetSpotInstances(t *testing.T) {
	ctx := context.Background()

-	spotProvSpec, err := json.Marshal(azureproviderv1beta1.AzureMachineProviderSpec{
-		SpotVMOptions: &azureproviderv1beta1.SpotVMOptions{},
+	spotProvSpec, err := json.Marshal(machinev1beta1.AzureMachineProviderSpec{
+		SpotVMOptions: &machinev1beta1.SpotVMOptions{},
	})
	if err != nil {
		t.Fatal(err)
	}

-	provSpec, err := json.Marshal(azureproviderv1beta1.AzureMachineProviderSpec{})
+	provSpec, err := json.Marshal(machinev1beta1.AzureMachineProviderSpec{})
	if err != nil {
		t.Fatal(err)
	}

	for _, tt := range []struct {
		name                 string
-		maocli               maoclient.Interface
+		maocli               machineclient.Interface
		node                 corev1.Node
		expectedSpotInstance bool
	}{
		{
			name: "node is a spot instance",
-			maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
+			maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
				Spec: machinev1beta1.MachineSpec{
					ProviderSpec: machinev1beta1.ProviderSpec{
						Value: &kruntime.RawExtension{

@@ -186,7 +185,7 @@ func TestGetSpotInstances(t *testing.T) {
		},
		{
			name: "node is not a spot instance",
-			maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
+			maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
				Spec: machinev1beta1.MachineSpec{
					ProviderSpec: machinev1beta1.ProviderSpec{
						Value: &kruntime.RawExtension{

@@ -211,7 +210,7 @@ func TestGetSpotInstances(t *testing.T) {
		},
		{
			name: "node is missing annotation",
-			maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
+			maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
				Spec: machinev1beta1.MachineSpec{
					ProviderSpec: machinev1beta1.ProviderSpec{
						Value: &kruntime.RawExtension{

@@ -234,7 +233,7 @@ func TestGetSpotInstances(t *testing.T) {
		},
		{
			name: "malformed json in providerSpec",
-			maocli: maofake.NewSimpleClientset(&machinev1beta1.Machine{
+			maocli: machinefake.NewSimpleClientset(&machinev1beta1.Machine{
				Spec: machinev1beta1.MachineSpec{
					ProviderSpec: machinev1beta1.ProviderSpec{
						Value: &kruntime.RawExtension{
@@ -5,7 +5,9 @@ package machine

import (
	"context"
+	"encoding/json"
	"fmt"
+	"strings"

	machinev1beta1 "github.com/openshift/api/machine/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

@@ -37,13 +39,27 @@ func (r *Reconciler) machineValid(ctx context.Context, machine *machinev1beta1.M
		return []error{fmt.Errorf("machine %s: provider spec missing", machine.Name)}
	}

-	obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(machine.Spec.ProviderSpec.Value.Raw, nil, nil)
-	if err != nil {
-		return []error{err}
-	}
-	machineProviderSpec, ok := obj.(*machinev1beta1.AzureMachineProviderSpec)
-	if !ok {
-		return []error{fmt.Errorf("machine %s: failed to read provider spec: %T", machine.Name, obj)}
-	}
+	var machineProviderSpec *machinev1beta1.AzureMachineProviderSpec
+
+	if strings.Contains(string(machine.Spec.ProviderSpec.Value.Raw), "azureproviderconfig.openshift.io") {
+		machineProviderSpec = &machinev1beta1.AzureMachineProviderSpec{}
+		err := json.Unmarshal(machine.Spec.ProviderSpec.Value.Raw, machineProviderSpec)
+		if err != nil {
+			return []error{fmt.Errorf("machine %s: failed to unmarshal the 'azureproviderconfig.openshift.io' provider spec: %q", machine.Name, err.Error())}
+		}
+	} else {
+		o, _, err := scheme.Codecs.UniversalDeserializer().Decode(machine.Spec.ProviderSpec.Value.Raw, nil, nil)
+		if err != nil {
+			return []error{err}
+		}
+
+		var ok bool
+		machineProviderSpec, ok = o.(*machinev1beta1.AzureMachineProviderSpec)
+		if !ok {
+			// This should never happen: the codecs use a scheme that has only one registered type,
+			// and if something is wrong with the provider spec, decoding should fail.
+			return []error{fmt.Errorf("machine %s: failed to read provider spec: %T", machine.Name, o)}
+		}
+	}

	// Validate VM size in machine provider spec
@@ -1,6 +1,7 @@
// Code generated for package machinehealthcheck by go-bindata DO NOT EDIT. (@generated)
// sources:
// staticresources/machinehealthcheck.yaml
+// staticresources/mhcremediationalert.yaml
package machinehealthcheck

import (

@@ -77,7 +78,7 @@ func (fi bindataFileInfo) Sys() interface{} {
	return nil
}
var _machinehealthcheckYaml = []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x9c\x90\xb1\x6e\xf3\x30\x0c\x84\x77\x3f\x05\xa1\xdd\xf9\x13\xfc\x9b\xd6\x20\x45\x3b\xb4\x43\x81\x74\x67\x65\x06\x12\x2c\x51\x82\x48\xa7\xf1\xdb\x17\xb2\x93\x74\x28\xba\x64\xb3\x4f\x77\xdf\x9d\x84\x25\x7c\x50\x95\x90\xd9\x42\x42\xe7\x03\xd3\x26\x17\x62\xf1\xe1\xa4\x9b\x90\xff\x9d\x77\x9f\xa4\xb8\xeb\xc6\xc0\x83\x85\xd7\xd5\xf2\x4c\x18\xd5\xef\x3d\xb9\xb1\x4b\xa4\x38\xa0\xa2\xed\x00\x18\x13\x59\xc0\x9a\xfb\x2b\xcb\x2f\x46\xb7\x18\xd7\x63\x29\xe8\xc8\xc2\xbd\xe3\xe6\xec\xb1\x84\x4e\x0a\xb9\xc6\x11\x8a\xe4\x34\xd7\xf6\x0d\x90\x50\x9d\x3f\x5c\x4a\x25\x69\x43\x65\x55\x7b\x18\x69\xfe\x63\xb4\x8b\x93\x28\xd5\xc6\xbc\xf3\x6b\x8e\xb4\x04\xa1\x95\x57\x6c\x78\x78\xcb\xfa\xc2\x57\xf5\x8c\x71\xa2\x2b\xbc\xe1\x4d\xe0\x53\x45\xf3\xf3\x9f\xb0\x41\xcd\x23\xed\x42\xfa\xab\xfb\x70\x09\xa2\xd2\x01\x4c\xbc\x3e\xd3\xbc\xcf\x3c\x04\xbd\x5d\xb1\x07\x9d\x0b\xd9\x16\x32\xef\x84\xc3\xbc\x36\x6b\x48\x94\x27\xb5\x60\xfe\x6f\xb7\x62\x60\x11\x45\x51\x27\xb1\x60\x9e\x30\x0a\x99\x47\xd3\x47\x1e\x39\x7f\x71\xb3\x26\xbc\x1c\x6f\xbb\x2c\x98\x9d\xf9\x0e\x00\x00\xff\xff\x9f\x0e\x45\xac\x2a\x02\x00\x00")
var _machinehealthcheckYaml = []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x9c\x90\xb1\x6e\xf3\x30\x0c\x84\x77\x3f\x05\xa1\xdd\xf9\x13\xfc\x9b\xd6\x20\x45\x3b\xb4\x43\x81\x74\x67\x65\x06\x16\x2c\x51\x82\x48\xa7\xf1\xdb\x17\x8a\xe2\x74\x28\xba\x64\x93\x4e\xc7\xef\x4e\xc4\xec\x3f\xa8\x88\x4f\x6c\x21\xa2\x1b\x3d\xd3\x26\x65\x62\x19\xfd\x49\x37\x3e\xfd\x3b\xef\x3e\x49\x71\xd7\x4d\x9e\x07\x0b\xaf\xcd\xf2\x4c\x18\x74\xdc\x8f\xe4\xa6\x2e\x92\xe2\x80\x8a\xb6\x03\x60\x8c\x64\x01\x4b\xea\x6f\xac\xf1\x6a\x74\x57\x63\x7b\x96\x8c\x8e\x2c\xdc\x33\x56\x67\x8f\xd9\x77\x92\xc9\x55\x8e\x50\x20\xa7\xa9\xd4\x33\x40\x44\x75\xe3\xe1\x92\x0b\x49\x2d\x2a\x4d\xed\x61\xa2\xe5\x8f\xd2\x2e\xcc\xa2\x54\x2a\xf3\xce\x2f\x29\xd0\x75\x10\x6a\x78\xc1\x8a\x87\xb7\xa4\x2f\x7c\x53\xcf\x18\x66\xba\xc1\x2b\xde\x78\x3e\x15\x34\x3f\xf7\x88\x15\x6a\x1e\x49\x17\xd2\x5f\xd9\x87\x8b\x17\x95\x0e\x60\xe6\xb6\xa6\x65\x9f\x78\xf0\xba\x7e\xb1\x07\x5d\x32\x59\x30\xef\x84\xc3\xd2\x62\xd5\x47\x4a\xb3\x5a\x30\xff\xb7\x5b\x69\x9a\x28\xea\x2c\x16\xcc\x13\x06\x21\xf3\xc8\xe4\x91\x27\x4e\x5f\x5c\xd5\x88\x97\xe3\xda\xc7\x82\xd9\x99\xee\x3b\x00\x00\xff\xff\x38\x97\xb5\xaf\x23\x02\x00\x00")

func machinehealthcheckYamlBytes() ([]byte, error) {
	return bindataRead(

@@ -97,6 +98,26 @@ func machinehealthcheckYaml() (*asset, error) {
	return a, nil
}

var _mhcremediationalertYaml = []byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\x74\x91\x4d\x8b\x14\x4d\x10\x84\xef\xfd\x2b\xe2\xf8\xbe\x87\x5e\x5d\x0f\x22\x75\x10\x44\x84\xbd\x2c\xc8\x08\x5e\x44\x86\x9c\xea\x70\xaa\x98\xfa\x68\x32\xb3\x67\xdd\x7f\x2f\xdd\xd3\xea\x5e\xcc\x63\x55\x3e\x11\x4f\x51\x32\xe7\xaf\x54\xcb\xbd\x05\xd4\xde\xb2\x77\xcd\xed\x7c\x17\xbb\xb2\xdb\x5d\xec\xf5\xd5\xf5\x7e\xb8\xe4\x36\x05\x7c\xd6\x5e\xe9\x89\x8b\x1d\x96\xc2\xa1\xd2\x65\x12\x97\x30\x00\x4d\x2a\x03\x6a\x8a\xa3\xb2\x72\xca\xe2\xb9\xb7\x51\x0a\xd5\xf7\x5b\x9b\x25\x32\xa0\xcf\x6c\x96\xf2\x0f\x1f\xab\xc4\x94\x1b\x47\x99\xf3\x00\x14\x39\xb1\xd8\x1a\x05\xcc\x7f\x7a\x02\x2e\xef\x6c\x3b\xd3\x5e\x18\xb0\x05\x8e\xba\x14\xda\x60\x33\xe3\xba\x7f\xd6\xbe\xcc\x1b\x39\xee\x1a\xa6\x1c\xff\xa5\x02\x6c\xf4\xad\x68\xbc\x05\x06\x7c\x39\x7c\x7a\xbc\xe9\x3c\x50\x8a\xa7\x8f\x89\xf1\x72\xf8\x8b\x1f\xc4\xf9\x90\xcf\x69\xa3\x00\xfe\x9c\x35\x20\xb7\xa8\x14\xe3\x7f\x55\xe6\x7c\xdc\x9f\x93\x36\x3e\xae\xfc\xf1\x45\xff\xd1\x96\x18\x69\x76\xf4\xee\x52\xf0\xed\xed\xeb\xfa\xfd\x7f\xbc\xc7\xfd\x9e\xf8\xa1\xb5\xee\xdb\xe6\xae\xb6\xce\x23\xcd\xe4\xcc\x80\xa7\xae\x17\x2a\x5a\x9f\x68\x48\x72\x25\x4e\x64\xc3\xef\x7c\x4e\x78\x83\xae\xa8\x5d\x09\xcf\x95\x86\xdc\xe0\x89\x28\x62\x8e\xd4\x17\x85\xa7\x6c\xa8\xf2\x8c\xdc\xa6\x1c\xc5\x09\x69\x58\x9a\xb9\x9c\x0a\xb7\x82\xd2\x65\x82\x2e\xad\xe5\x76\x46\xbf\xf1\xb1\x2c\xe6\xd4\xdd\xe8\xe5\x1f\xad\x63\xbc\x52\xb3\x3f\x07\x3c\x89\xae\xd8\xf0\x2b\x00\x00\xff\xff\x9e\xca\xa1\xb4\x4d\x02\x00\x00")

+func mhcremediationalertYamlBytes() ([]byte, error) {
+	return bindataRead(
+		_mhcremediationalertYaml,
+		"mhcremediationalert.yaml",
+	)
+}
+
+func mhcremediationalertYaml() (*asset, error) {
+	bytes, err := mhcremediationalertYamlBytes()
+	if err != nil {
+		return nil, err
+	}
+
+	info := bindataFileInfo{name: "mhcremediationalert.yaml", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)}
+	a := &asset{bytes: bytes, info: info}
+	return a, nil
+}
+
// Asset loads and returns the asset for the given name.
// It returns an error if the asset could not be found or
// could not be loaded.

@@ -149,7 +170,8 @@ func AssetNames() []string {

// _bindata is a table, holding each asset generator, mapped to its name.
var _bindata = map[string]func() (*asset, error){
-	"machinehealthcheck.yaml": machinehealthcheckYaml,
+	"machinehealthcheck.yaml":  machinehealthcheckYaml,
+	"mhcremediationalert.yaml": mhcremediationalertYaml,
}

// AssetDir returns the file names below a certain

@@ -193,7 +215,8 @@ type bintree struct {
}

var _bintree = &bintree{nil, map[string]*bintree{
-	"machinehealthcheck.yaml": {machinehealthcheckYaml, map[string]*bintree{}},
+	"machinehealthcheck.yaml":  {machinehealthcheckYaml, map[string]*bintree{}},
+	"mhcremediationalert.yaml": {mhcremediationalertYaml, map[string]*bintree{}},
}}

// RestoreAsset restores an asset under the given directory
@@ -5,8 +5,9 @@ package machinehealthcheck

/*

-The controller in this package aims to ensure that MachineHealthCheck objects
-exist and are correctly configured to automatically mitigate non-ready worker nodes.
+The controller in this package aims to ensure that the ARO MachineHealthCheck CR and the MHC Remediation Alert
+exist and are correctly configured to automatically mitigate non-ready worker nodes and to create an in-cluster alert
+when remediation is occurring frequently.

There are two flags which control the operations performed by the controller:

@@ -15,10 +16,11 @@ aro.machinehealthcheck.enabled:
- When set to true, the controller continues on to check the managed flag

aro.machinehealthcheck.managed
-- When set to false, the controller will attempt to remove the aro-machinehealthcheck CR from the cluster.
+- When set to false, the controller will attempt to remove the aro-machinehealthcheck CR and the MHC Remediation alert from the cluster.
  This should effectively disable the MHC we deploy and prevent the automatic reconciliation of nodes.
-- When set to true, the controller will deploy/overwrite the aro-machinehealthcheck CR to the cluster.
-  This enables the cluster to self-heal when at most 1 worker node goes not ready for at least 5 minutes.
+- When set to true, the controller will deploy/overwrite the aro-machinehealthcheck CR and the MHC Remediation alert to the cluster.
+  This enables the cluster to self-heal when at most 1 worker node goes not ready for at least 5 minutes, and to alert when remediation
+  occurs 2 or more times within an hour.

The aro-machinehealthcheck is configured so that if 2 worker nodes go not ready it will not take any action.
More information about how the MHC works can be found here:
@@ -7,7 +7,7 @@ import (
	"context"
	"time"

-	machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
+	machinev1beta1 "github.com/openshift/api/machine/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kruntime "k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"

@@ -52,10 +52,16 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
		if err != nil {
			return reconcile.Result{RequeueAfter: time.Hour}, err
		}

+		err = r.dh.EnsureDeleted(ctx, "PrometheusRule", "openshift-machine-api", "mhc-remediation-alert")
+		if err != nil {
+			return reconcile.Result{RequeueAfter: time.Hour}, err
+		}
		return reconcile.Result{}, nil
	}

	var resources []kruntime.Object

	// this loop prevents us from hard coding resource strings
	// and ensures all static resources are accounted for.
	for _, assetName := range AssetNames() {

@@ -60,7 +60,7 @@ func TestReconciler(t *testing.T) {
			wantErr: "",
		},
		{
-			name: "Managed Feature Flag is false: ensure mhc is deleted",
+			name: "Managed Feature Flag is false: ensure mhc and its alert are deleted",
			arocli: arofake.NewSimpleClientset(&arov1alpha1.Cluster{
				ObjectMeta: metav1.ObjectMeta{
					Name: arov1alpha1.SingletonClusterName,

@@ -74,6 +74,7 @@ func TestReconciler(t *testing.T) {
			}),
			mocks: func(mdh *mock_dynamichelper.MockInterface) {
				mdh.EXPECT().EnsureDeleted(gomock.Any(), "MachineHealthCheck", "openshift-machine-api", "aro-machinehealthcheck").Times(1)
+				mdh.EXPECT().EnsureDeleted(gomock.Any(), "PrometheusRule", "openshift-machine-api", "mhc-remediation-alert").Times(1)
			},
			wantErr: "",
		},

@@ -96,6 +97,26 @@ func TestReconciler(t *testing.T) {
			wantErr:          "Could not delete mhc",
			wantRequeueAfter: time.Hour,
		},
+		{
+			name: "Managed Feature Flag is false: mhc deletes but mhc alert fails to delete, an error is returned",
+			arocli: arofake.NewSimpleClientset(&arov1alpha1.Cluster{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: arov1alpha1.SingletonClusterName,
+				},
+				Spec: arov1alpha1.ClusterSpec{
+					OperatorFlags: arov1alpha1.OperatorFlags{
+						enabled: strconv.FormatBool(true),
+						managed: strconv.FormatBool(false),
+					},
+				},
+			}),
+			mocks: func(mdh *mock_dynamichelper.MockInterface) {
+				mdh.EXPECT().EnsureDeleted(gomock.Any(), "MachineHealthCheck", "openshift-machine-api", "aro-machinehealthcheck").Times(1)
+				mdh.EXPECT().EnsureDeleted(gomock.Any(), "PrometheusRule", "openshift-machine-api", "mhc-remediation-alert").Return(errors.New("Could not delete mhc alert"))
+			},
+			wantErr:          "Could not delete mhc alert",
+			wantRequeueAfter: time.Hour,
+		},
		{
			name: "Managed Feature Flag is true: dynamic helper ensures resources",
			arocli: arofake.NewSimpleClientset(&arov1alpha1.Cluster{

@@ -142,10 +163,7 @@ func TestReconciler(t *testing.T) {
			tt.mocks(mdh)

			ctx := context.Background()
-			r := &Reconciler{
-				arocli: tt.arocli,
-				dh:     mdh,
-			}
+			r := NewReconciler(tt.arocli, mdh)
			request := ctrl.Request{}
			request.Name = "cluster"
@@ -14,10 +14,10 @@ spec:
      - key: machine.openshift.io/cluster-api-machineset
        operator: Exists
  unhealthyConditions:
-  - type: "Ready"
-    timeout: "300s"
+  - type: "Ready"
+    timeout: "300s"
+    status: "False"
-  - type: "Ready"
-    timeout: "300s"
+  - type: "Ready"
+    timeout: "300s"
+    status: "Unknown"
-  maxUnhealthy: "1"
+  maxUnhealthy: "1"

File diffs are hidden because one or more lines are too long.
@@ -1,174 +0,0 @@
package muo

// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.

import (
	"context"
	"errors"
	"strings"

	"github.com/ghodss/yaml"
	"github.com/ugorji/go/codec"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	kruntime "k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"

	"github.com/Azure/ARO-RP/pkg/api"
	arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
	"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
	"github.com/Azure/ARO-RP/pkg/util/dynamichelper"
	"github.com/Azure/ARO-RP/pkg/util/ready"
)

type muoConfig struct {
	api.MissingFields
	ConfigManager struct {
		api.MissingFields
		Source          string `json:"source,omitempty"`
		OcmBaseUrl      string `json:"ocmBaseUrl,omitempty"`
		LocalConfigName string `json:"localConfigName,omitempty"`
	} `json:"configManager,omitempty"`
}

type Deployer interface {
	CreateOrUpdate(context.Context, *arov1alpha1.Cluster, *config.MUODeploymentConfig) error
	Remove(context.Context) error
	IsReady(ctx context.Context) (bool, error)
	Resources(*config.MUODeploymentConfig) ([]kruntime.Object, error)
}

type deployer struct {
	kubernetescli kubernetes.Interface
	dh            dynamichelper.Interface

	jsonHandle *codec.JsonHandle
}

func newDeployer(kubernetescli kubernetes.Interface, dh dynamichelper.Interface) Deployer {
	return &deployer{
		kubernetescli: kubernetescli,
		dh:            dh,

		jsonHandle: new(codec.JsonHandle),
	}
}

func (o *deployer) Resources(config *config.MUODeploymentConfig) ([]kruntime.Object, error) {
	results := []kruntime.Object{}
	for _, assetName := range AssetNames() {
		b, err := Asset(assetName)
		if err != nil {
			return nil, err
		}

		obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(b, nil, nil)
		if err != nil {
			return nil, err
		}

		// set the image for the deployments
		if d, ok := obj.(*appsv1.Deployment); ok {
			for i := range d.Spec.Template.Spec.Containers {
				d.Spec.Template.Spec.Containers[i].Image = config.Pullspec
			}
		}

		if cm, ok := obj.(*corev1.ConfigMap); ok {
			if cm.Name == "managed-upgrade-operator-config" && cm.Namespace == "openshift-managed-upgrade-operator" {
				// read the config.yaml from the MUO ConfigMap which stores defaults
				configDataJSON, err := yaml.YAMLToJSON([]byte(cm.Data["config.yaml"]))
				if err != nil {
					return nil, err
				}

				var configData muoConfig
				err = codec.NewDecoderBytes(configDataJSON, o.jsonHandle).Decode(&configData)
				if err != nil {
					return nil, err
				}

				if config.EnableConnected {
					configData.ConfigManager.Source = "OCM"
					configData.ConfigManager.OcmBaseUrl = config.OCMBaseURL
					configData.ConfigManager.LocalConfigName = ""
				} else {
					configData.ConfigManager.Source = "LOCAL"
					configData.ConfigManager.LocalConfigName = "managed-upgrade-config"
					configData.ConfigManager.OcmBaseUrl = ""
				}

				// Write the yaml back into the ConfigMap
				var b []byte
				err = codec.NewEncoderBytes(&b, o.jsonHandle).Encode(configData)
				if err != nil {
					return nil, err
				}

				cmYaml, err := yaml.JSONToYAML(b)
				if err != nil {
					return nil, err
				}
				cm.Data["config.yaml"] = string(cmYaml)
			}
		}

		results = append(results, obj)
	}

	return results, nil
}

func (o *deployer) CreateOrUpdate(ctx context.Context, cluster *arov1alpha1.Cluster, config *config.MUODeploymentConfig) error {
	resources, err := o.Resources(config)
	if err != nil {
		return err
	}

	err = dynamichelper.SetControllerReferences(resources, cluster)
	if err != nil {
		return err
	}

	err = dynamichelper.Prepare(resources)
	if err != nil {
		return err
	}

	return o.dh.Ensure(ctx, resources...)
}

func (o *deployer) Remove(ctx context.Context) error {
	resources, err := o.Resources(&config.MUODeploymentConfig{})
	if err != nil {
		return err
	}

	var errs []error
	for _, obj := range resources {
		// delete any deployments we have
		if d, ok := obj.(*appsv1.Deployment); ok {
			err := o.dh.EnsureDeleted(ctx, "Deployment", d.Namespace, d.Name)
			// Don't error out because then we might delete some resources and not others
			if err != nil {
				errs = append(errs, err)
			}
		}
	}

	if len(errs) != 0 {
		errContent := []string{"error removing MUO:"}
		for _, err := range errs {
			errContent = append(errContent, err.Error())
		}
		return errors.New(strings.Join(errContent, "\n"))
	}

	return nil
}

func (o *deployer) IsReady(ctx context.Context) (bool, error) {
	return ready.CheckDeploymentIsReady(ctx, o.kubernetescli.AppsV1().Deployments("openshift-managed-upgrade-operator"), "managed-upgrade-operator")()
}
@@ -1,350 +0,0 @@
package muo

// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.

import (
	"context"
	"errors"
	"strings"
	"testing"

	"github.com/go-test/deep"
	"github.com/golang/mock/gomock"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kruntime "k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"

	arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
	"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
	mock_dynamichelper "github.com/Azure/ARO-RP/pkg/util/mocks/dynamichelper"
)

func TestDeployCreateOrUpdateCorrectKinds(t *testing.T) {
	controller := gomock.NewController(t)
	defer controller.Finish()

	setPullSpec := "MyMUOPullSpec"
	cluster := &arov1alpha1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			Name: arov1alpha1.SingletonClusterName,
		},
	}

	k8scli := fake.NewSimpleClientset()
	dh := mock_dynamichelper.NewMockInterface(controller)

	// When the DynamicHelper is called, count the number of objects it creates
	// and capture any deployments so that we can check the pullspec
	var deployments []*appsv1.Deployment
	deployedObjects := make(map[string]int)
	check := func(ctx context.Context, objs ...kruntime.Object) error {
		m := meta.NewAccessor()
		for _, i := range objs {
			kind, err := m.Kind(i)
			if err != nil {
				return err
			}
			if d, ok := i.(*appsv1.Deployment); ok {
				deployments = append(deployments, d)
			}
			deployedObjects[kind] = deployedObjects[kind] + 1
		}
		return nil
	}
	dh.EXPECT().Ensure(gomock.Any(), gomock.Any()).Do(check).Return(nil)

	deployer := newDeployer(k8scli, dh)
	err := deployer.CreateOrUpdate(context.Background(), cluster, &config.MUODeploymentConfig{Pullspec: setPullSpec})
	if err != nil {
		t.Error(err)
	}

	// We expect these numbers of resources to be created
	expectedKinds := map[string]int{
		"ClusterRole":              1,
		"ConfigMap":                2,
		"ClusterRoleBinding":       1,
		"CustomResourceDefinition": 1,
		"Deployment":               1,
		"Namespace":                1,
		"Role":                     4,
		"RoleBinding":              4,
		"ServiceAccount":           1,
	}
	errs := deep.Equal(deployedObjects, expectedKinds)
	for _, e := range errs {
		t.Error(e)
	}

	// Ensure we have the pullspec set on the containers
	for _, d := range deployments {
		for _, c := range d.Spec.Template.Spec.Containers {
			if c.Image != setPullSpec {
				t.Errorf("expected %s, got %s for pullspec", setPullSpec, c.Image)
			}
		}
	}
}

func TestDeployCreateOrUpdateSetsOwnerReferences(t *testing.T) {
	controller := gomock.NewController(t)
	defer controller.Finish()

	setPullSpec := "MyMUOPullSpec"
	cluster := &arov1alpha1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			Name: arov1alpha1.SingletonClusterName,
		},
	}

	k8scli := fake.NewSimpleClientset()
	dh := mock_dynamichelper.NewMockInterface(controller)

	// the OwnerReference that we expect to be set on each object we Ensure
	pointerToTrueForSomeReason := bool(true)
	expectedOwner := metav1.OwnerReference{
		APIVersion:         "aro.openshift.io/v1alpha1",
		Kind:               "Cluster",
		Name:               arov1alpha1.SingletonClusterName,
		UID:                cluster.UID,
		BlockOwnerDeletion: &pointerToTrueForSomeReason,
		Controller:         &pointerToTrueForSomeReason,
	}

	// save the list of OwnerReferences on each of the Ensured objects
	var ownerReferences [][]metav1.OwnerReference
	check := func(ctx context.Context, objs ...kruntime.Object) error {
		for _, i := range objs {
			obj, err := meta.Accessor(i)
			if err != nil {
				return err
			}
			ownerReferences = append(ownerReferences, obj.GetOwnerReferences())
		}
		return nil
	}
	dh.EXPECT().Ensure(gomock.Any(), gomock.Any()).Do(check).Return(nil)

	deployer := newDeployer(k8scli, dh)
	err := deployer.CreateOrUpdate(context.Background(), cluster, &config.MUODeploymentConfig{Pullspec: setPullSpec})
	if err != nil {
		t.Error(err)
	}

	// Check that each list of OwnerReferences contains our controller
	for _, references := range ownerReferences {
		errs := deep.Equal([]metav1.OwnerReference{expectedOwner}, references)
		for _, e := range errs {
			t.Error(e)
		}
	}
}

func TestDeployDelete(t *testing.T) {
	controller := gomock.NewController(t)
	defer controller.Finish()

	k8scli := fake.NewSimpleClientset()
	dh := mock_dynamichelper.NewMockInterface(controller)
	dh.EXPECT().EnsureDeleted(gomock.Any(), "Deployment", "openshift-managed-upgrade-operator", "managed-upgrade-operator").Return(nil)

	deployer := newDeployer(k8scli, dh)
	err := deployer.Remove(context.Background())
	if err != nil {
		t.Error(err)
	}
}

func TestDeployDeleteFailure(t *testing.T) {
	controller := gomock.NewController(t)
	defer controller.Finish()

	k8scli := fake.NewSimpleClientset()
	dh := mock_dynamichelper.NewMockInterface(controller)
	dh.EXPECT().EnsureDeleted(gomock.Any(), "Deployment", "openshift-managed-upgrade-operator", "managed-upgrade-operator").Return(errors.New("fail"))

	deployer := newDeployer(k8scli, dh)
	err := deployer.Remove(context.Background())
	if err == nil {
		t.Error(err)
	}
	if err.Error() != "error removing MUO:\nfail" {
		t.Error(err)
	}
}

func TestDeployIsReady(t *testing.T) {
	specReplicas := int32(1)
	k8scli := fake.NewSimpleClientset(&appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:       "managed-upgrade-operator",
			Namespace:  "openshift-managed-upgrade-operator",
			Generation: 1234,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &specReplicas,
		},
		Status: appsv1.DeploymentStatus{
			ObservedGeneration:  1234,
			Replicas:            1,
			ReadyReplicas:       1,
			UpdatedReplicas:     1,
			AvailableReplicas:   1,
			UnavailableReplicas: 0,
		},
	})

	deployer := newDeployer(k8scli, nil)
	ready, err := deployer.IsReady(context.Background())
	if err != nil {
		t.Error(err)
	}
	if !ready {
		t.Error("deployment is not seen as ready")
	}
}

func TestDeployIsReadyMissing(t *testing.T) {
	k8scli := fake.NewSimpleClientset()
	deployer := newDeployer(k8scli, nil)
	ready, err := deployer.IsReady(context.Background())
	if err != nil {
		t.Error(err)
	}
	if ready {
		t.Error("deployment is wrongly seen as ready")
	}
}

func TestDeployConfig(t *testing.T) {
	controller := gomock.NewController(t)
	defer controller.Finish()

	cluster := &arov1alpha1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			Name: arov1alpha1.SingletonClusterName,
		},
	}

	tests := []struct {
		name             string
		deploymentConfig *config.MUODeploymentConfig
		expected         []string
	}{
		{
			name:             "local",
			deploymentConfig: &config.MUODeploymentConfig{EnableConnected: false},
			expected: []string{
				"configManager:",
				"  localConfigName: managed-upgrade-config",
				"  source: LOCAL",
				"  watchInterval: 1",
				"healthCheck:",
				"  ignoredCriticals:",
				"  - PrometheusRuleFailures",
				"  - CannotRetrieveUpdates",
				"  - FluentdNodeDown",
				"  ignoredNamespaces:",
				"  - openshift-logging",
				"  - openshift-redhat-marketplace",
				"  - openshift-operators",
				"  - openshift-user-workload-monitoring",
				"  - openshift-pipelines",
				"  - openshift-azure-logging",
				"maintenance:",
				"  controlPlaneTime: 90",
				"  ignoredAlerts:",
				"    controlPlaneCriticals:",
				"    - ClusterOperatorDown",
				"    - ClusterOperatorDegraded",
				"nodeDrain:",
				"  expectedNodeDrainTime: 8",
				"  timeOut: 45",
				"scale:",
				"  timeOut: 30",
				"upgradeWindow:",
				"  delayTrigger: 30",
				"  timeOut: 120",
				"",
			},
		},
		{
			name:             "connected",
			deploymentConfig: &config.MUODeploymentConfig{EnableConnected: true, OCMBaseURL: "https://example.com"},
			expected: []string{
				"configManager:",
				"  ocmBaseUrl: https://example.com",
				"  source: OCM",
				"  watchInterval: 1",
				"healthCheck:",
				"  ignoredCriticals:",
				"  - PrometheusRuleFailures",
				"  - CannotRetrieveUpdates",
				"  - FluentdNodeDown",
				"  ignoredNamespaces:",
				"  - openshift-logging",
				"  - openshift-redhat-marketplace",
				"  - openshift-operators",
				"  - openshift-user-workload-monitoring",
				"  - openshift-pipelines",
				"  - openshift-azure-logging",
				"maintenance:",
				"  controlPlaneTime: 90",
				"  ignoredAlerts:",
				"    controlPlaneCriticals:",
				"    - ClusterOperatorDown",
				"    - ClusterOperatorDegraded",
				"nodeDrain:",
				"  expectedNodeDrainTime: 8",
				"  timeOut: 45",
				"scale:",
				"  timeOut: 30",
				"upgradeWindow:",
				"  delayTrigger: 30",
				"  timeOut: 120",
				"",
			},
		},
	}
	for _, tt := range tests {
		k8scli := fake.NewSimpleClientset()
		dh := mock_dynamichelper.NewMockInterface(controller)

		// When the DynamicHelper is called, capture configmaps to inspect them
		var configs []*corev1.ConfigMap
		check := func(ctx context.Context, objs ...kruntime.Object) error {
			for _, i := range objs {
				if cm, ok := i.(*corev1.ConfigMap); ok {
					configs = append(configs, cm)
				}
			}
			return nil
		}
		dh.EXPECT().Ensure(gomock.Any(), gomock.Any()).Do(check).Return(nil)

		deployer := newDeployer(k8scli, dh)
		err := deployer.CreateOrUpdate(context.Background(), cluster, tt.deploymentConfig)
		if err != nil {
			t.Error(err)
		}

		foundConfig := false
		for _, cms := range configs {
			if cms.Name == "managed-upgrade-operator-config" && cms.Namespace == "openshift-managed-upgrade-operator" {
				foundConfig = true
				errs := deep.Equal(tt.expected, strings.Split(cms.Data["config.yaml"], "\n"))
				for _, e := range errs {
					t.Error(e)
				}
			}
		}

		if !foundConfig {
			t.Error("MUO config was not found")
		}
	}
}
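`TestDeployIsReady` above passes because the fake Deployment's status matches its spec: the observed generation equals the metadata generation and all requested replicas are updated and available. A simplified, hypothetical sketch of such a readiness predicate (not the actual `ready.CheckDeploymentIsReady` implementation):

```go
package main

import "fmt"

// deployment mirrors just the fields a readiness check typically inspects.
type deployment struct {
	Generation         int64
	ObservedGeneration int64
	SpecReplicas       int32
	AvailableReplicas  int32
	UpdatedReplicas    int32
}

// isReady reports whether the controller has observed the latest spec and
// all requested replicas are both updated and available.
func isReady(d deployment) bool {
	return d.ObservedGeneration == d.Generation &&
		d.UpdatedReplicas == d.SpecReplicas &&
		d.AvailableReplicas == d.SpecReplicas
}

func main() {
	// Same shape as the fixture in TestDeployIsReady.
	d := deployment{Generation: 1234, ObservedGeneration: 1234, SpecReplicas: 1, AvailableReplicas: 1, UpdatedReplicas: 1}
	fmt.Println(isReady(d))
}
```

The generation comparison matters: a status left over from an older spec (ObservedGeneration behind Generation) must not count as ready, which is why the test fixture sets both to 1234.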
@@ -1,12 +0,0 @@
package muo

// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.

// bindata for the above yaml files
//go:generate go run ../../../../vendor/github.com/go-bindata/go-bindata/go-bindata -nometadata -pkg muo -prefix staticresources/ -o bindata.go staticresources/...
//go:generate gofmt -s -l -w bindata.go

//go:generate rm -rf ../../mocks/$GOPACKAGE
//go:generate go run ../../../../vendor/github.com/golang/mock/mockgen -destination=../../mocks/$GOPACKAGE/$GOPACKAGE.go github.com/Azure/ARO-RP/pkg/operator/controllers/$GOPACKAGE Deployer
//go:generate go run ../../../../vendor/golang.org/x/tools/cmd/goimports -local=github.com/Azure/ARO-RP -e -w ../../mocks/$GOPACKAGE/$GOPACKAGE.go
@@ -5,6 +5,7 @@ package muo

 import (
 	"context"
+	"embed"
 	"fmt"
 	"strings"
 	"time"

@@ -23,6 +24,7 @@ import (
 	arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
 	aroclient "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned"
 	"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
+	"github.com/Azure/ARO-RP/pkg/util/deployer"
 	"github.com/Azure/ARO-RP/pkg/util/dynamichelper"
 	"github.com/Azure/ARO-RP/pkg/util/pullsecret"
 	"github.com/Azure/ARO-RP/pkg/util/version"

@@ -34,13 +36,16 @@ const (
 	controllerEnabled                = "rh.srep.muo.enabled"
 	controllerManaged                = "rh.srep.muo.managed"
 	controllerPullSpec               = "rh.srep.muo.deploy.pullspec"
 	controllerAllowOCM               = "rh.srep.muo.deploy.allowOCM"
 	controllerForceLocalOnly         = "rh.srep.muo.deploy.forceLocalOnly"
 	controllerOcmBaseURL             = "rh.srep.muo.deploy.ocmBaseUrl"
 	controllerOcmBaseURLDefaultValue = "https://api.openshift.com"

 	pullSecretOCMKey = "cloud.openshift.com"
 )

+//go:embed staticresources
+var staticFiles embed.FS
+
 var pullSecretName = types.NamespacedName{Name: "pull-secret", Namespace: "openshift-config"}

 type MUODeploymentConfig struct {

@@ -52,7 +57,7 @@ type MUODeploymentConfig struct {
 type Reconciler struct {
 	arocli        aroclient.Interface
 	kubernetescli kubernetes.Interface
-	deployer      Deployer
+	deployer      deployer.Deployer

 	readinessPollTime time.Duration
 	readinessTimeout  time.Duration

@@ -62,7 +67,7 @@ func NewReconciler(arocli aroclient.Interface, kubernetescli kubernetes.Interfac
 	return &Reconciler{
 		arocli:        arocli,
 		kubernetescli: kubernetescli,
-		deployer:      newDeployer(kubernetescli, dh),
+		deployer:      deployer.NewDeployer(kubernetescli, dh, staticFiles, "staticresources"),

 		readinessPollTime: 10 * time.Second,
 		readinessTimeout:  5 * time.Minute,

@@ -96,8 +101,8 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
 		Pullspec: pullSpec,
 	}

 	allowOCM := instance.Spec.OperatorFlags.GetSimpleBoolean(controllerAllowOCM)
 	if allowOCM {
 		disableOCM := instance.Spec.OperatorFlags.GetSimpleBoolean(controllerForceLocalOnly)
 		if !disableOCM {
 			useOCM := func() bool {
 				var userSecret *corev1.Secret

@@ -138,13 +143,13 @@ func (r *Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.
 		defer cancel()

 		err := wait.PollImmediateUntil(r.readinessPollTime, func() (bool, error) {
-			return r.deployer.IsReady(ctx)
+			return r.deployer.IsReady(ctx, "openshift-managed-upgrade-operator", "managed-upgrade-operator")
 		}, timeoutCtx.Done())
 		if err != nil {
 			return reconcile.Result{}, fmt.Errorf("managed Upgrade Operator deployment timed out on Ready: %w", err)
 		}
 	} else if strings.EqualFold(managed, "false") {
-		err := r.deployer.Remove(ctx)
+		err := r.deployer.Remove(ctx, config.MUODeploymentConfig{})
 		if err != nil {
 			return reconcile.Result{}, err
 		}

@@ -162,7 +167,7 @@ func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
 	builder := ctrl.NewControllerManagedBy(mgr).
 		For(&arov1alpha1.Cluster{}, builder.WithPredicates(aroClusterPredicate))

-	resources, err := r.deployer.Resources(&config.MUODeploymentConfig{})
+	resources, err := r.deployer.Template(&config.MUODeploymentConfig{}, staticFiles)
 	if err != nil {
 		return err
 	}
@@ -18,13 +18,13 @@ import (
 	arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
 	arofake "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned/fake"
 	"github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
-	mock_muo "github.com/Azure/ARO-RP/pkg/operator/mocks/muo"
+	mock_deployer "github.com/Azure/ARO-RP/pkg/util/mocks/deployer"
 )

 func TestMUOReconciler(t *testing.T) {
 	tests := []struct {
 		name  string
-		mocks func(*mock_muo.MockDeployer, *arov1alpha1.Cluster)
+		mocks func(*mock_deployer.MockDeployer, *arov1alpha1.Cluster)
 		flags arov1alpha1.OperatorFlags
 		// connected MUO -- cluster pullsecret
 		pullsecret string

@@ -46,12 +46,13 @@ func TestMUOReconciler(t *testing.T) {
 				controllerManaged:  "true",
 				controllerPullSpec: "wonderfulPullspec",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
-					Pullspec: "wonderfulPullspec",
+					Pullspec:        "wonderfulPullspec",
+					EnableConnected: false,
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
 		{

@@ -60,104 +61,123 @@ func TestMUOReconciler(t *testing.T) {
 				controllerEnabled: "true",
 				controllerManaged: "true",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
-					Pullspec: "acrtest.example.com/managed-upgrade-operator:aro-b1",
+					Pullspec:        "acrtest.example.com/managed-upgrade-operator:aro-b4",
 					EnableConnected: false,
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
 		{
 			name: "managed, OCM allowed but pull secret entirely missing",
 			flags: arov1alpha1.OperatorFlags{
-				controllerEnabled:  "true",
-				controllerManaged:  "true",
-				controllerAllowOCM: "true",
-				controllerPullSpec: "wonderfulPullspec",
+				controllerEnabled:        "true",
+				controllerManaged:        "true",
+				controllerForceLocalOnly: "false",
+				controllerPullSpec:       "wonderfulPullspec",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
 					Pullspec:        "wonderfulPullspec",
 					EnableConnected: false,
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
 		{
 			name: "managed, OCM allowed but empty pullsecret",
 			flags: arov1alpha1.OperatorFlags{
-				controllerEnabled:  "true",
-				controllerManaged:  "true",
-				controllerAllowOCM: "true",
-				controllerPullSpec: "wonderfulPullspec",
+				controllerEnabled:        "true",
+				controllerManaged:        "true",
+				controllerForceLocalOnly: "false",
+				controllerPullSpec:       "wonderfulPullspec",
 			},
 			pullsecret: "{\"auths\": {}}",
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
 					Pullspec:        "wonderfulPullspec",
 					EnableConnected: false,
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
 		{
 			name: "managed, OCM allowed but mangled pullsecret",
 			flags: arov1alpha1.OperatorFlags{
-				controllerEnabled:  "true",
-				controllerManaged:  "true",
-				controllerAllowOCM: "true",
-				controllerPullSpec: "wonderfulPullspec",
+				controllerEnabled:        "true",
+				controllerManaged:        "true",
+				controllerForceLocalOnly: "false",
+				controllerPullSpec:       "wonderfulPullspec",
 			},
 			pullsecret: "i'm a little json, short and stout",
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
 					Pullspec:        "wonderfulPullspec",
 					EnableConnected: false,
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
 		{
 			name: "managed, OCM connected mode",
 			flags: arov1alpha1.OperatorFlags{
-				controllerEnabled:  "true",
-				controllerManaged:  "true",
-				controllerAllowOCM: "true",
-				controllerPullSpec: "wonderfulPullspec",
+				controllerEnabled:        "true",
+				controllerManaged:        "true",
+				controllerForceLocalOnly: "false",
+				controllerPullSpec:       "wonderfulPullspec",
 			},
 			pullsecret: "{\"auths\": {\"" + pullSecretOCMKey + "\": {\"auth\": \"secret value\"}}}",
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
 					Pullspec:        "wonderfulPullspec",
 					EnableConnected: true,
 					OCMBaseURL:      "https://api.openshift.com",
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
 		{
 			name: "managed, OCM connected mode, custom OCM URL",
 			flags: arov1alpha1.OperatorFlags{
-				controllerEnabled:    "true",
-				controllerManaged:    "true",
-				controllerAllowOCM:   "true",
-				controllerOcmBaseURL: "https://example.com",
-				controllerPullSpec:   "wonderfulPullspec",
+				controllerEnabled:        "true",
+				controllerManaged:        "true",
+				controllerForceLocalOnly: "false",
+				controllerOcmBaseURL:     "https://example.com",
+				controllerPullSpec:       "wonderfulPullspec",
 			},
 			pullsecret: "{\"auths\": {\"" + pullSecretOCMKey + "\": {\"auth\": \"secret value\"}}}",
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
 					Pullspec:        "wonderfulPullspec",
 					EnableConnected: true,
 					OCMBaseURL:      "https://example.com",
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(true, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
 			},
 		},
+		{
+			name: "managed, pull secret exists, OCM disabled",
+			flags: arov1alpha1.OperatorFlags{
+				controllerEnabled:        "true",
+				controllerManaged:        "true",
+				controllerForceLocalOnly: "true",
+				controllerPullSpec:       "wonderfulPullspec",
+			},
+			pullsecret: "{\"auths\": {\"" + pullSecretOCMKey + "\": {\"auth\": \"secret value\"}}}",
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
+				expectedConfig := &config.MUODeploymentConfig{
+					Pullspec:        "wonderfulPullspec",
+					EnableConnected: false,
+				}
+				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(true, nil)
+			},
+		},
 		{

@@ -167,12 +187,13 @@ func TestMUOReconciler(t *testing.T) {
 				controllerManaged:  "true",
 				controllerPullSpec: "wonderfulPullspec",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				expectedConfig := &config.MUODeploymentConfig{
-					Pullspec: "wonderfulPullspec",
+					Pullspec:        "wonderfulPullspec",
+					EnableConnected: false,
 				}
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, expectedConfig).Return(nil)
-				md.EXPECT().IsReady(gomock.Any()).Return(false, nil)
+				md.EXPECT().IsReady(gomock.Any(), gomock.Any(), gomock.Any()).Return(false, nil)
 			},
 			wantErr: "managed Upgrade Operator deployment timed out on Ready: timed out waiting for the condition",
 		},

@@ -183,7 +204,7 @@ func TestMUOReconciler(t *testing.T) {
 				controllerManaged:  "true",
 				controllerPullSpec: "wonderfulPullspec",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
 				md.EXPECT().CreateOrUpdate(gomock.Any(), cluster, gomock.AssignableToTypeOf(&config.MUODeploymentConfig{})).Return(errors.New("failed ensure"))
 			},
 			wantErr: "failed ensure",

@@ -195,8 +216,8 @@ func TestMUOReconciler(t *testing.T) {
 				controllerManaged:  "false",
 				controllerPullSpec: "wonderfulPullspec",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
-				md.EXPECT().Remove(gomock.Any()).Return(nil)
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
+				md.EXPECT().Remove(gomock.Any(), gomock.Any()).Return(nil)
 			},
 		},
 		{

@@ -206,8 +227,8 @@ func TestMUOReconciler(t *testing.T) {
 				controllerManaged:  "false",
 				controllerPullSpec: "wonderfulPullspec",
 			},
-			mocks: func(md *mock_muo.MockDeployer, cluster *arov1alpha1.Cluster) {
-				md.EXPECT().Remove(gomock.Any()).Return(errors.New("failed delete"))
+			mocks: func(md *mock_deployer.MockDeployer, cluster *arov1alpha1.Cluster) {
+				md.EXPECT().Remove(gomock.Any(), gomock.Any()).Return(errors.New("failed delete"))
 			},
 			wantErr: "failed delete",
 		},

@@ -236,7 +257,7 @@ func TestMUOReconciler(t *testing.T) {
 			}
 			arocli := arofake.NewSimpleClientset(cluster)
 			kubecli := fake.NewSimpleClientset()
-			deployer := mock_muo.NewMockDeployer(controller)
+			deployer := mock_deployer.NewMockDeployer(controller)

 			if tt.pullsecret != "" {
 				_, err := kubecli.CoreV1().Secrets(pullSecretName.Namespace).Create(context.Background(),
@@ -2,12 +2,13 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: managed-upgrade-operator-config
-  namespace: openshift-managed-upgrade-operator
+  namespace: openshift-managed-upgrade-operator
 data:
   config.yaml: |
     configManager:
-      source: LOCAL
-      localConfigName: managed-upgrade-config
+      source: {{ if .EnableConnected }}OCM{{ else }}LOCAL{{ end }}
+      {{ if .EnableConnected }}ocmBaseUrl: {{.OCMBaseURL}}{{end}}
+      {{ if not .EnableConnected }}localConfigName: managed-upgrade-config{{end}}
       watchInterval: 1
     maintenance:
       controlPlaneTime: 90
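The hunk above turns the ConfigMap's config.yaml into a Go template keyed on `EnableConnected`, matching the two shapes asserted in `TestDeployConfig`. A minimal sketch of rendering such a template with `text/template` (the `cfg` type and `render` helper are hypothetical, and the template is abbreviated to the conditional lines):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// cfg carries the two fields the template in the diff references.
type cfg struct {
	EnableConnected bool
	OCMBaseURL      string
}

// muoTmpl mirrors the conditional configManager lines added in the diff;
// the if/end actions are placed so untaken branches emit no blank lines.
const muoTmpl = `configManager:
  source: {{ if .EnableConnected }}OCM{{ else }}LOCAL{{ end }}
{{ if .EnableConnected }}  ocmBaseUrl: {{ .OCMBaseURL }}
{{ end }}{{ if not .EnableConnected }}  localConfigName: managed-upgrade-config
{{ end }}  watchInterval: 1
`

// render executes the template for a given config and returns the YAML.
func render(c cfg) string {
	var b strings.Builder
	t := template.Must(template.New("config").Parse(muoTmpl))
	if err := t.Execute(&b, c); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	// Connected mode selects the OCM source and emits the base URL.
	fmt.Print(render(cfg{EnableConnected: true, OCMBaseURL: "https://api.openshift.com"}))
}
```

Note that in the real manifest the template's whitespace handling matters, since the tests split the rendered `config.yaml` on newlines and compare line by line.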
@@ -1,4 +1,3 @@
-
 ---
 apiVersion: apiextensions.k8s.io/v1
 kind: CustomResourceDefinition

@@ -40,7 +40,7 @@ spec:
       - name: managed-upgrade-operator
         # Replace this with the built image name
         # This will get replaced on deploy by /hack/generate-operator-bundle.py
-        image: GENERATED
+        image: "{{ .Pullspec }}"
         command:
         - managed-upgrade-operator
         imagePullPolicy: Always

@@ -7,7 +7,7 @@ rules:
 - apiGroups:
   - ""
   resources:
-  - configmaps
+  - configmaps
   - serviceaccounts
   - secrets
   - services
@@ -78,21 +78,21 @@ func (n *nsgFlowLogsFeature) Enable(ctx context.Context, instance *aropreviewv1a
 func (n *nsgFlowLogsFeature) newFlowLog(instance *aropreviewv1alpha1.PreviewFeature, nsgID string) *mgmtnetwork.FlowLog {
 	// build a request as described here https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-nsg-flow-logging-rest#enable-network-security-group-flow-logs
 	return &mgmtnetwork.FlowLog{
-		Location: to.StringPtr(n.location),
+		Location: &n.location,
 		FlowLogPropertiesFormat: &mgmtnetwork.FlowLogPropertiesFormat{
-			TargetResourceID: to.StringPtr(nsgID),
+			TargetResourceID: &nsgID,
 			Enabled:          to.BoolPtr(true),
 			Format: &mgmtnetwork.FlowLogFormatParameters{
 				Type:    mgmtnetwork.JSON,
 				Version: to.Int32Ptr(int32(instance.Spec.NSGFlowLogs.Version)),
 			},
 			RetentionPolicy: &mgmtnetwork.RetentionPolicyParameters{
-				Days: to.Int32Ptr(instance.Spec.NSGFlowLogs.RetentionDays),
+				Days: &instance.Spec.NSGFlowLogs.RetentionDays,
 			},
-			StorageID: to.StringPtr(instance.Spec.NSGFlowLogs.StorageAccountResourceID),
+			StorageID: &instance.Spec.NSGFlowLogs.StorageAccountResourceID,
 			FlowAnalyticsConfiguration: &mgmtnetwork.TrafficAnalyticsProperties{
 				NetworkWatcherFlowAnalyticsConfiguration: &mgmtnetwork.TrafficAnalyticsConfigurationProperties{
-					WorkspaceID: to.StringPtr(instance.Spec.NSGFlowLogs.TrafficAnalyticsLogAnalyticsWorkspaceID),
+					WorkspaceID: &instance.Spec.NSGFlowLogs.TrafficAnalyticsLogAnalyticsWorkspaceID,
 					TrafficAnalyticsInterval: to.Int32Ptr(int32(instance.Spec.NSGFlowLogs.TrafficAnalyticsInterval.Truncate(time.Minute).Minutes())),
 				},
 			},
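The hunk above replaces autorest's `to.StringPtr(x)` calls with plain `&x` wherever an addressable value already exists; the helper is only needed when taking the address of something that is not addressable, such as a string literal. A small sketch of the distinction (`strPtr` is a hypothetical stand-in for `to.StringPtr`):

```go
package main

import "fmt"

// strPtr mirrors what a helper like to.StringPtr does: the parameter is a
// copy, and a copy is always addressable, so this works even for literals.
func strPtr(s string) *string { return &s }

func main() {
	location := "eastus"
	a := &location        // fine: location is an addressable variable
	b := strPtr("eastus") // needed for literals: &"eastus" does not compile
	fmt.Println(*a == *b)
}
```

One behavioral difference worth noting: `&location` aliases the original variable, while `strPtr("eastus")` points at an independent copy, so later writes through the pointer affect different storage.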
@@ -7,7 +7,7 @@ import (
 	"context"

 	"github.com/Azure/go-autorest/autorest/azure"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
 	"github.com/sirupsen/logrus"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/client-go/kubernetes"

@@ -41,10 +41,10 @@ type Reconciler struct {

 	arocli        aroclient.Interface
 	kubernetescli kubernetes.Interface
-	maocli        maoclient.Interface
+	maocli        machineclient.Interface
 }

-func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli maoclient.Interface) *Reconciler {
+func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli machineclient.Interface) *Reconciler {
 	return &Reconciler{
 		log:    log,
 		arocli: arocli,
File diffs hidden because one or more lines are too long
@@ -7,9 +7,9 @@ import (
 	"context"

 	"github.com/Azure/go-autorest/autorest/azure"
+	machinev1beta1 "github.com/openshift/api/machine/v1beta1"
 	imageregistryclient "github.com/openshift/client-go/imageregistry/clientset/versioned"
-	machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
-	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
+	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
 	"github.com/sirupsen/logrus"
 	corev1 "k8s.io/api/core/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

@@ -42,7 +42,7 @@ type Reconciler struct {

 	arocli           aroclient.Interface
 	kubernetescli    kubernetes.Interface
-	maocli           maoclient.Interface
+	maocli           machineclient.Interface
 	imageregistrycli imageregistryclient.Interface
 }

@@ -59,7 +59,7 @@ type reconcileManager struct {
 }

 // NewReconciler creates a new Reconciler
-func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, maocli maoclient.Interface, kubernetescli kubernetes.Interface, imageregistrycli imageregistryclient.Interface) *Reconciler {
+func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, maocli machineclient.Interface, kubernetescli kubernetes.Interface, imageregistrycli imageregistryclient.Interface) *Reconciler {
 	return &Reconciler{
 		log:    log,
 		arocli: arocli,
@@ -311,6 +311,43 @@ func TestReconcileManager(t *testing.T) {
 				instace.Spec.ArchitectureVersion = int(api.ArchitectureVersionV2)
 			},
 		},
+		{
+			name:                        "Architecture V2 - empty NSG",
+			operatorFlagEnabled:         true,
+			operatorFlagNSG:             true,
+			operatorFlagServiceEndpoint: true,
+			subnetMock: func(mock *mock_subnet.MockManager, kmock *mock_subnet.MockKubeManager) {
+				kmock.EXPECT().List(gomock.Any()).Return([]subnet.Subnet{
+					{
+						ResourceID: subnetResourceIdMaster,
+						IsMaster:   true,
+					},
+					{
+						ResourceID: subnetResourceIdWorker,
+						IsMaster:   false,
+					},
+				}, nil)
+
+				subnetObjectMaster := getValidSubnet()
+				subnetObjectMaster.NetworkSecurityGroup = nil
+				mock.EXPECT().Get(gomock.Any(), subnetResourceIdMaster).Return(subnetObjectMaster, nil).MaxTimes(2)
+
+				subnetObjectMasterUpdate := getValidSubnet()
+				subnetObjectMasterUpdate.NetworkSecurityGroup.ID = to.StringPtr(nsgv2ResourceId)
+				mock.EXPECT().CreateOrUpdate(gomock.Any(), subnetResourceIdMaster, subnetObjectMasterUpdate).Return(nil)
+
+				subnetObjectWorker := getValidSubnet()
+				subnetObjectWorker.NetworkSecurityGroup = nil
+				mock.EXPECT().Get(gomock.Any(), subnetResourceIdWorker).Return(subnetObjectWorker, nil).MaxTimes(2)
+
+				subnetObjectWorkerUpdate := getValidSubnet()
+				subnetObjectWorkerUpdate.NetworkSecurityGroup.ID = to.StringPtr(nsgv2ResourceId)
+				mock.EXPECT().CreateOrUpdate(gomock.Any(), subnetResourceIdWorker, subnetObjectWorkerUpdate).Return(nil)
+			},
+			instance: func(instace *arov1alpha1.Cluster) {
+				instace.Spec.ArchitectureVersion = int(api.ArchitectureVersionV2)
+			},
+		},
 	} {
 		t.Run(tt.name, func(t *testing.T) {
 			controller := gomock.NewController(t)
@@ -9,8 +9,8 @@ import (
	"strings"

	"github.com/Azure/go-autorest/autorest/azure"
	machinev1beta1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
	maoclient "github.com/openshift/machine-api-operator/pkg/generated/clientset/versioned"
	machinev1beta1 "github.com/openshift/api/machine/v1beta1"
	machineclient "github.com/openshift/client-go/machine/clientset/versioned"
	"github.com/sirupsen/logrus"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

@@ -44,7 +44,7 @@ type Reconciler struct {

	arocli        aroclient.Interface
	kubernetescli kubernetes.Interface
	maocli        maoclient.Interface
	maocli        machineclient.Interface
}

// reconcileManager is an instance of the manager instantiated per request

@@ -59,7 +59,7 @@ type reconcileManager struct {
}

// NewReconciler creates a new Reconciler
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli maoclient.Interface) *Reconciler {
func NewReconciler(log *logrus.Entry, arocli aroclient.Interface, kubernetescli kubernetes.Interface, maocli machineclient.Interface) *Reconciler {
	return &Reconciler{
		log:    log,
		arocli: arocli,

File diff suppressed because one or more lines are too long
@@ -5,6 +5,7 @@ package deploy

import (
	"context"
	"embed"
	"errors"
	"fmt"
	"strings"

@@ -32,6 +33,7 @@ import (
	aroclient "github.com/Azure/ARO-RP/pkg/operator/clientset/versioned"
	"github.com/Azure/ARO-RP/pkg/operator/controllers/genevalogging"
	"github.com/Azure/ARO-RP/pkg/util/dynamichelper"
	utilembed "github.com/Azure/ARO-RP/pkg/util/embed"
	"github.com/Azure/ARO-RP/pkg/util/pullsecret"
	"github.com/Azure/ARO-RP/pkg/util/ready"
	"github.com/Azure/ARO-RP/pkg/util/restconfig"

@@ -40,6 +42,9 @@ import (
	"github.com/Azure/ARO-RP/pkg/util/version"
)

//go:embed staticresources
var embeddedFiles embed.FS

type Operator interface {
	CreateOrUpdate(context.Context) error
	IsReady(context.Context) (bool, error)

@@ -78,16 +83,10 @@ func New(log *logrus.Entry, env env.Interface, oc *api.OpenShiftCluster, arocli
	}, nil
}

func (o *operator) resources() ([]kruntime.Object, error) {
	// first static resources from Assets
func (o *operator) staticResources() ([]kruntime.Object, error) {
	results := []kruntime.Object{}
	for _, assetName := range AssetNames() {
		b, err := Asset(assetName)
		if err != nil {
			return nil, err
		}

		obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(b, nil, nil)
	for _, fileBytes := range utilembed.ReadDirRecursive(embeddedFiles, "staticresources") {
		obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(fileBytes, nil, nil)
		if err != nil {
			return nil, err
		}

@@ -97,9 +96,18 @@ func (o *operator) resources() ([]kruntime.Object, error) {
			if d.Labels == nil {
				d.Labels = map[string]string{}
			}
			d.Labels["version"] = version.GitCommit
			var image string

			if o.oc.Properties.OperatorVersion != "" {
				image = fmt.Sprintf("%s/aro:%s", o.env.ACRDomain(), o.oc.Properties.OperatorVersion)
				d.Labels["version"] = o.oc.Properties.OperatorVersion
			} else {
				image = o.env.AROOperatorImage()
				d.Labels["version"] = version.GitCommit
			}

			for i := range d.Spec.Template.Spec.Containers {
				d.Spec.Template.Spec.Containers[i].Image = o.env.AROOperatorImage()
				d.Spec.Template.Spec.Containers[i].Image = image

				if o.env.IsLocalDevelopmentMode() {
					d.Spec.Template.Spec.Containers[i].Env = append(d.Spec.Template.Spec.Containers[i].Env, corev1.EnvVar{

@@ -112,6 +120,16 @@ func (o *operator) resources() ([]kruntime.Object, error) {

		results = append(results, obj)
	}
	return results, nil
}

func (o *operator) resources() ([]kruntime.Object, error) {
	// first static resources from Assets
	results, err := o.staticResources()
	if err != nil {
		return nil, err
	}

	// then dynamic resources
	key, cert := o.env.ClusterGenevaLoggingSecret()
	gcsKeyBytes, err := utiltls.PrivateKeyAsBytes(key)

@@ -193,12 +211,10 @@ func (o *operator) resources() ([]kruntime.Object, error) {
		},
	}

	if o.oc.Properties.FeatureProfile.GatewayEnabled && o.oc.Properties.NetworkProfile.GatewayPrivateEndpointIP != "" {
		cluster.Spec.GatewayDomains = append(o.env.GatewayDomains(), o.oc.Properties.ImageRegistryStorageAccountName+".blob."+o.env.Environment().StorageEndpointSuffix)
	} else {
		// covers the case of an admin-disable, we need to update dnsmasq on each node
		cluster.Spec.GatewayDomains = make([]string, 0)
	}
	// TODO (BV): reenable gateway once we fix bugs
	// if o.oc.Properties.NetworkProfile.GatewayPrivateEndpointIP != "" {
	// 	cluster.Spec.GatewayDomains = append(o.env.GatewayDomains(), o.oc.Properties.ImageRegistryStorageAccountName+".blob."+o.env.Environment().StorageEndpointSuffix)
	// }

	// create a secret here for genevalogging, later we will copy it to
	// the genevalogging namespace.
@@ -4,25 +4,16 @@ package deploy
// Licensed under the Apache License 2.0.

import (
	"context"
	"errors"
	"reflect"
	"testing"

<<<<<<< HEAD
	"github.com/golang/mock/gomock"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"

	"github.com/Azure/ARO-RP/pkg/api"
	"github.com/Azure/ARO-RP/pkg/util/cmp"
	mock_env "github.com/Azure/ARO-RP/pkg/util/mocks/env"
=======
	"github.com/Azure/ARO-RP/pkg/api"
	"github.com/Azure/ARO-RP/pkg/util/cmp"
>>>>>>> 98308f29a (add e2e test)
	"github.com/Azure/ARO-RP/pkg/util/version"
)

func TestCheckIngressIP(t *testing.T) {
@@ -138,80 +129,6 @@ func TestCheckIngressIP(t *testing.T) {
		})
	}
}
<<<<<<< HEAD

func TestCreateDeploymentData(t *testing.T) {
	operatorImageTag := "v20071110"
	operatorImageUntagged := "arosvc.azurecr.io/aro"
	operatorImageWithTag := operatorImageUntagged + ":" + operatorImageTag

	for _, tt := range []struct {
		name                    string
		mock                    func(*mock_env.MockInterface, *api.OpenShiftCluster)
		operatorVersionOverride string
		expected                deploymentData
	}{
		{
			name: "no image override, use default",
			mock: func(env *mock_env.MockInterface, oc *api.OpenShiftCluster) {
				env.EXPECT().
					AROOperatorImage().
					Return(operatorImageWithTag)
			},
			expected: deploymentData{
				Image:   operatorImageWithTag,
				Version: operatorImageTag},
		},
		{
			name: "no image tag, use latest version",
			mock: func(env *mock_env.MockInterface, oc *api.OpenShiftCluster) {
				env.EXPECT().
					AROOperatorImage().
					Return(operatorImageUntagged)
			},
			expected: deploymentData{
				Image:   operatorImageUntagged,
				Version: "latest"},
		},
		{
			name: "OperatorVersion override set",
			mock: func(env *mock_env.MockInterface, oc *api.OpenShiftCluster) {
				env.EXPECT().
					AROOperatorImage().
					Return(operatorImageUntagged)
				env.EXPECT().
					ACRDomain().
					Return("docker.io")

				oc.Properties.OperatorVersion = "override"
			},
			expected: deploymentData{
				Image:   "docker.io/aro:override",
				Version: "override"},
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			controller := gomock.NewController(t)
			defer controller.Finish()

			env := mock_env.NewMockInterface(controller)
			env.EXPECT().IsLocalDevelopmentMode().Return(tt.expected.IsLocalDevelopment).AnyTimes()

			oc := &api.OpenShiftCluster{Properties: api.OpenShiftClusterProperties{}}
			tt.mock(env, oc)

			o := operator{
				oc:  oc,
				env: env,
			}

			deploymentData := o.createDeploymentData()
			if !reflect.DeepEqual(deploymentData, tt.expected) {
				t.Errorf("actual deployment: %v, expected %v", deploymentData, tt.expected)
			}
		})
	}
}
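The `createDeploymentData` expectations above (default image keeps its tag, an untagged image falls back to `latest`, an `OperatorVersion` override builds a pullspec from the ACR domain) reduce to a small selection rule. A self-contained sketch of that rule, with stand-in parameter names rather than the real `env` interface:

```go
package main

import (
	"fmt"
	"strings"
)

// pickImage sketches the override logic implied by the test cases above:
// if the cluster document carries an OperatorVersion, the pullspec is built
// from the ACR domain; otherwise the default operator image is used and the
// version label is derived from its tag ("latest" when untagged). The
// function and parameter names are illustrative, not the real API.
func pickImage(acrDomain, defaultImage, operatorVersion string) (image, version string) {
	if operatorVersion != "" {
		return fmt.Sprintf("%s/aro:%s", acrDomain, operatorVersion), operatorVersion
	}
	version = "latest"
	// naive tag split; a registry with a port (host:5000/aro) would need
	// a proper reference parser
	if i := strings.LastIndex(defaultImage, ":"); i >= 0 {
		version = defaultImage[i+1:]
	}
	return defaultImage, version
}

func main() {
	fmt.Println(pickImage("docker.io", "arosvc.azurecr.io/aro", "override"))
	fmt.Println(pickImage("", "arosvc.azurecr.io/aro:v20071110", ""))
	fmt.Println(pickImage("", "arosvc.azurecr.io/aro", ""))
}
```

Each branch corresponds to one of the three table-driven cases in `TestCreateDeploymentData`.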
func TestOperatorVersion(t *testing.T) {
	type test struct {

@@ -227,7 +144,7 @@ func TestOperatorVersion(t *testing.T) {
			oc: func() *api.OpenShiftClusterProperties {
				return &api.OpenShiftClusterProperties{}
			},
			wantVersion:  "latest",
			wantVersion:  version.GitCommit,
			wantPullspec: "defaultaroimagefromenv",
		},
		{

@@ -257,7 +174,7 @@ func TestOperatorVersion(t *testing.T) {
				env: _env,
			}

			staticResources, err := o.createObjects()
			staticResources, err := o.staticResources()
			if err != nil {
				t.Error(err)
			}

@@ -275,7 +192,7 @@ func TestOperatorVersion(t *testing.T) {

			for _, d := range deployments {
				if d.Labels["version"] != tt.wantVersion {
					t.Errorf("Got %q, not %q for label \"version\"", d.Labels["version"], tt.wantVersion)
					t.Errorf("Got %q, not %q", d.Labels["version"], tt.wantVersion)
				}

				if len(d.Spec.Template.Spec.Containers) != 1 {

@@ -284,143 +201,9 @@ func TestOperatorVersion(t *testing.T) {

				image := d.Spec.Template.Spec.Containers[0].Image
				if image != tt.wantPullspec {
					t.Errorf("Got %q, not %q for the image", image, tt.wantPullspec)
					t.Errorf("Got %q, not %q", image, tt.wantPullspec)
				}
			}
		})
	}
}
func TestCheckOperatorDeploymentVersion(t *testing.T) {
	ctx := context.Background()
	for _, tt := range []struct {
		name           string
		deployment     *appsv1.Deployment
		desiredVersion string
		want           bool
		wantErr        error
	}{
		{
			name: "arooperator deployment has correct version",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "arooperator-deploy",
					Namespace: "openshift-azure-operator",
					Labels: map[string]string{
						"version": "abcde",
					},
				},
			},
			desiredVersion: "abcde",
			want:           true,
			wantErr:        nil,
		},
		{
			name: "arooperator deployment has incorrect version",
			deployment: &appsv1.Deployment{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "arooperator-deploy",
					Namespace: "openshift-azure-operator",
					Labels: map[string]string{
						"version": "unknown",
					},
				},
			},
			desiredVersion: "abcde",
			want:           false,
			wantErr:        nil,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			clientset := fake.NewSimpleClientset()
			_, err := clientset.AppsV1().Deployments("openshift-azure-operator").Create(ctx, tt.deployment, metav1.CreateOptions{})
			if err != nil {
				t.Fatalf("error creating deployment: %v", err)
			}

			got, err := checkOperatorDeploymentVersion(ctx, clientset.AppsV1().Deployments("openshift-azure-operator"), tt.deployment.Name, tt.desiredVersion)
			if err != nil && err.Error() != tt.wantErr.Error() ||
				err == nil && tt.wantErr != nil {
				t.Error(err)
			}
			if tt.want != got {
				t.Fatalf("error with CheckOperatorDeploymentVersion test %s: got %v wanted %v", tt.name, got, tt.want)
			}
		})
	}
}
func TestCheckPodImageVersion(t *testing.T) {
	ctx := context.Background()
	for _, tt := range []struct {
		name           string
		pod            *corev1.Pod
		desiredVersion string
		want           bool
		wantErr        error
	}{
		{
			name: "arooperator pod has correct image version",
			pod: &corev1.Pod{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "arooperator-pod",
					Namespace: "openshift-azure-operator",
					Labels: map[string]string{
						"app": "arooperator-pod",
					},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{
							Image: "random-image:abcde",
						},
					},
				},
			},
			desiredVersion: "abcde",
			want:           true,
			wantErr:        nil,
		},
		{
			name: "arooperator pod has incorrect image version",
			pod: &corev1.Pod{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "arooperator-pod",
					Namespace: "openshift-azure-operator",
					Labels: map[string]string{
						"app": "arooperator-pod",
					},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{
							Image: "random-image:unknown",
						},
					},
				},
			},
			desiredVersion: "abcde",
			want:           false,
			wantErr:        nil,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			clientset := fake.NewSimpleClientset()
			_, err := clientset.CoreV1().Pods("openshift-azure-operator").Create(ctx, tt.pod, metav1.CreateOptions{})
			if err != nil {
				t.Fatalf("error creating pod: %v", err)
			}

			got, err := checkPodImageVersion(ctx, clientset.CoreV1().Pods("openshift-azure-operator"), tt.pod.Name, tt.desiredVersion)
			if err != nil && err.Error() != tt.wantErr.Error() ||
				err == nil && tt.wantErr != nil {
				t.Error(err)
			}
			if tt.want != got {
				t.Fatalf("error with CheckPodImageVersion test %s: got %v wanted %v", tt.name, got, tt.want)
			}
		})
	}
}
=======
>>>>>>> 98308f29a (add e2e test)
@@ -4,7 +4,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.6.3-0.20210916130746-94401651a6c3
    controller-gen.kubebuilder.io/version: v0.7.0
  creationTimestamp: null
  name: clusters.aro.openshift.io
spec:

@@ -4,7 +4,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.6.3-0.20210916130746-94401651a6c3
    controller-gen.kubebuilder.io/version: v0.7.0
  creationTimestamp: null
  name: previewfeatures.preview.aro.openshift.io
spec:
@@ -1,97 +0,0 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: github.com/Azure/ARO-RP/pkg/operator/controllers/muo (interfaces: Deployer)

// Package mock_muo is a generated GoMock package.
package mock_muo

import (
	context "context"
	reflect "reflect"

	gomock "github.com/golang/mock/gomock"
	runtime "k8s.io/apimachinery/pkg/runtime"

	v1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
	config "github.com/Azure/ARO-RP/pkg/operator/controllers/muo/config"
)

// MockDeployer is a mock of Deployer interface.
type MockDeployer struct {
	ctrl     *gomock.Controller
	recorder *MockDeployerMockRecorder
}

// MockDeployerMockRecorder is the mock recorder for MockDeployer.
type MockDeployerMockRecorder struct {
	mock *MockDeployer
}

// NewMockDeployer creates a new mock instance.
func NewMockDeployer(ctrl *gomock.Controller) *MockDeployer {
	mock := &MockDeployer{ctrl: ctrl}
	mock.recorder = &MockDeployerMockRecorder{mock}
	return mock
}

// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockDeployer) EXPECT() *MockDeployerMockRecorder {
	return m.recorder
}

// CreateOrUpdate mocks base method.
func (m *MockDeployer) CreateOrUpdate(arg0 context.Context, arg1 *v1alpha1.Cluster, arg2 *config.MUODeploymentConfig) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "CreateOrUpdate", arg0, arg1, arg2)
	ret0, _ := ret[0].(error)
	return ret0
}

// CreateOrUpdate indicates an expected call of CreateOrUpdate.
func (mr *MockDeployerMockRecorder) CreateOrUpdate(arg0, arg1, arg2 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "CreateOrUpdate", reflect.TypeOf((*MockDeployer)(nil).CreateOrUpdate), arg0, arg1, arg2)
}

// IsReady mocks base method.
func (m *MockDeployer) IsReady(arg0 context.Context) (bool, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "IsReady", arg0)
	ret0, _ := ret[0].(bool)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// IsReady indicates an expected call of IsReady.
func (mr *MockDeployerMockRecorder) IsReady(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsReady", reflect.TypeOf((*MockDeployer)(nil).IsReady), arg0)
}

// Remove mocks base method.
func (m *MockDeployer) Remove(arg0 context.Context) error {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Remove", arg0)
	ret0, _ := ret[0].(error)
	return ret0
}

// Remove indicates an expected call of Remove.
func (mr *MockDeployerMockRecorder) Remove(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Remove", reflect.TypeOf((*MockDeployer)(nil).Remove), arg0)
}

// Resources mocks base method.
func (m *MockDeployer) Resources(arg0 *config.MUODeploymentConfig) ([]runtime.Object, error) {
	m.ctrl.T.Helper()
	ret := m.ctrl.Call(m, "Resources", arg0)
	ret0, _ := ret[0].([]runtime.Object)
	ret1, _ := ret[1].(error)
	return ret0, ret1
}

// Resources indicates an expected call of Resources.
func (mr *MockDeployerMockRecorder) Resources(arg0 interface{}) *gomock.Call {
	mr.mock.ctrl.T.Helper()
	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Resources", reflect.TypeOf((*MockDeployer)(nil).Resources), arg0)
}

File diff suppressed because one or more lines are too long
@@ -1,66 +0,0 @@
package arm

// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.

import (
	"context"
	"encoding/json"
	"net/http"

	mgmtfeatures "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2019-07-01/features"
	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
	"github.com/sirupsen/logrus"

	"github.com/Azure/ARO-RP/pkg/api"
	"github.com/Azure/ARO-RP/pkg/util/azureclient/mgmt/features"
	"github.com/Azure/ARO-RP/pkg/util/azureerrors"
)

func DeployTemplate(ctx context.Context, log *logrus.Entry, deployments features.DeploymentsClient, resourceGroupName string, deploymentName string, template *Template, parameters map[string]interface{}) error {
	log.Printf("deploying %s template", deploymentName)
	err := deployments.CreateOrUpdateAndWait(ctx, resourceGroupName, deploymentName, mgmtfeatures.Deployment{
		Properties: &mgmtfeatures.DeploymentProperties{
			Template:   template,
			Parameters: parameters,
			Mode:       mgmtfeatures.Incremental,
		},
	})

	if azureerrors.IsDeploymentActiveError(err) {
		log.Printf("waiting for %s template to be deployed", deploymentName)
		err = deployments.Wait(ctx, resourceGroupName, deploymentName)
	}

	if azureerrors.HasAuthorizationFailedError(err) ||
		azureerrors.HasLinkedAuthorizationFailedError(err) {
		return err
	}

	serviceErr, _ := err.(*azure.ServiceError) // futures return *azure.ServiceError directly

	// CreateOrUpdate() returns a wrapped *azure.ServiceError
	if detailedErr, ok := err.(autorest.DetailedError); ok {
		serviceErr, _ = detailedErr.Original.(*azure.ServiceError)
	}

	if serviceErr != nil {
		b, _ := json.Marshal(serviceErr)

		return &api.CloudError{
			StatusCode: http.StatusBadRequest,
			CloudErrorBody: &api.CloudErrorBody{
				Code:    api.CloudErrorCodeDeploymentFailed,
				Message: "Deployment failed.",
				Details: []api.CloudErrorBody{
					{
						Message: string(b),
					},
				},
			},
		}
	}

	return err
}
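The error handling in the removed `DeployTemplate` distinguishes two failure shapes: a `*azure.ServiceError` returned directly by futures, and one wrapped inside an `autorest.DetailedError` by `CreateOrUpdate()`. That unwrapping can be sketched with stdlib types standing in for the go-autorest ones (both struct names below are hypothetical stand-ins, not the real SDK types):

```go
package main

import (
	"errors"
	"fmt"
)

// serviceError stands in for *azure.ServiceError.
type serviceError struct{ Code string }

func (e *serviceError) Error() string { return e.Code }

// detailedError stands in for autorest.DetailedError, which carries the
// original error in a field rather than implementing Unwrap.
type detailedError struct{ Original error }

func (e detailedError) Error() string { return e.Original.Error() }

// unwrapServiceError mirrors the two cases handled in DeployTemplate:
// the service error may arrive bare, or wrapped one level deep.
func unwrapServiceError(err error) *serviceError {
	serviceErr, _ := err.(*serviceError)
	var de detailedError
	if errors.As(err, &de) {
		serviceErr, _ = de.Original.(*serviceError)
	}
	return serviceErr
}

func main() {
	fmt.Println(unwrapServiceError(&serviceError{Code: "AccountIsDisabled"}).Code)
	fmt.Println(unwrapServiceError(detailedError{Original: &serviceError{Code: "DeploymentActive"}}).Code)
}
```

Either way, the caller ends up with the service error's code and can wrap it in a single `api.CloudError` for the user, as the removed function did.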
@@ -1,120 +0,0 @@
package arm

// Copyright (c) Microsoft Corporation.
// Licensed under the Apache License 2.0.

import (
	"context"
	"testing"

	mgmtfeatures "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2019-07-01/features"
	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
	"github.com/golang/mock/gomock"
	"github.com/sirupsen/logrus"
	"k8s.io/apimachinery/pkg/util/wait"

	mock_features "github.com/Azure/ARO-RP/pkg/util/mocks/azureclient/mgmt/features"
)

const deploymentName = "test"

func TestDeployARMTemplate(t *testing.T) {
	ctx := context.Background()

	resourceGroup := "fakeResourceGroup"

	armTemplate := &Template{}
	params := map[string]interface{}{}

	deployment := mgmtfeatures.Deployment{
		Properties: &mgmtfeatures.DeploymentProperties{
			Template:   armTemplate,
			Parameters: params,
			Mode:       mgmtfeatures.Incremental,
		},
	}

	activeErr := autorest.NewErrorWithError(azure.RequestError{
		ServiceError: &azure.ServiceError{Code: "DeploymentActive"},
	}, "", "", nil, "")

	for _, tt := range []struct {
		name    string
		mocks   func(*mock_features.MockDeploymentsClient)
		wantErr string
	}{
		{
			name: "Deployment successful with no errors",
			mocks: func(dc *mock_features.MockDeploymentsClient) {
				dc.EXPECT().
					CreateOrUpdateAndWait(ctx, resourceGroup, deploymentName, deployment).
					Return(nil)
			},
		},
		{
			name: "Deployment active error, then wait successfully",
			mocks: func(dc *mock_features.MockDeploymentsClient) {
				dc.EXPECT().
					CreateOrUpdateAndWait(ctx, resourceGroup, deploymentName, deployment).
					Return(activeErr)
				dc.EXPECT().
					Wait(ctx, resourceGroup, deploymentName).
					Return(nil)
			},
		},
		{
			name: "Deployment active error, then timeout",
			mocks: func(dc *mock_features.MockDeploymentsClient) {
				dc.EXPECT().
					CreateOrUpdateAndWait(ctx, resourceGroup, deploymentName, deployment).
					Return(activeErr)
				dc.EXPECT().
					Wait(ctx, resourceGroup, deploymentName).
					Return(wait.ErrWaitTimeout)
			},
			wantErr: "timed out waiting for the condition",
		},
		{
			name: "DetailedError which should be returned to user",
			mocks: func(dc *mock_features.MockDeploymentsClient) {
				dc.EXPECT().
					CreateOrUpdateAndWait(ctx, resourceGroup, deploymentName, deployment).
					Return(autorest.DetailedError{
						Original: &azure.ServiceError{
							Code: "AccountIsDisabled",
						},
					})
			},
			wantErr: `400: DeploymentFailed: : Deployment failed. Details: : : {"code":"AccountIsDisabled","message":"","target":null,"details":null,"innererror":null,"additionalInfo":null}`,
		},
		{
			name: "ServiceError which should be returned to user",
			mocks: func(dc *mock_features.MockDeploymentsClient) {
				dc.EXPECT().
					CreateOrUpdateAndWait(ctx, resourceGroup, deploymentName, deployment).
					Return(&azure.ServiceError{
						Code: "AccountIsDisabled",
					})
			},
			wantErr: `400: DeploymentFailed: : Deployment failed. Details: : : {"code":"AccountIsDisabled","message":"","target":null,"details":null,"innererror":null,"additionalInfo":null}`,
		},
	} {
		t.Run(tt.name, func(t *testing.T) {
			controller := gomock.NewController(t)
			defer controller.Finish()

			deploymentsClient := mock_features.NewMockDeploymentsClient(controller)
			tt.mocks(deploymentsClient)

			log := logrus.NewEntry(logrus.StandardLogger())

			err := DeployTemplate(ctx, log, deploymentsClient, resourceGroup, deploymentName, armTemplate, params)

			if err != nil && err.Error() != tt.wantErr ||
				err == nil && tt.wantErr != "" {
				t.Error(err)
			}
		})
	}
}
@@ -26,8 +26,8 @@ import (
	"k8s.io/apimachinery/pkg/util/wait"

	"github.com/Azure/ARO-RP/pkg/api"
	"github.com/Azure/ARO-RP/pkg/api/v20210901preview"
	mgmtredhatopenshift20210901preview "github.com/Azure/ARO-RP/pkg/client/services/redhatopenshift/mgmt/2021-09-01-preview/redhatopenshift"
	v20220401 "github.com/Azure/ARO-RP/pkg/api/v20220401"
	mgmtredhatopenshift20220401 "github.com/Azure/ARO-RP/pkg/client/services/redhatopenshift/mgmt/2022-04-01/redhatopenshift"
	"github.com/Azure/ARO-RP/pkg/deploy"
	"github.com/Azure/ARO-RP/pkg/deploy/generator"
	"github.com/Azure/ARO-RP/pkg/env"

@@ -402,8 +402,9 @@ func (c *Cluster) createCluster(ctx context.Context, vnetResourceGroup, clusterN
	oc := api.OpenShiftCluster{
		Properties: api.OpenShiftClusterProperties{
			ClusterProfile: api.ClusterProfile{
				Domain:          strings.ToLower(clusterName),
				ResourceGroupID: fmt.Sprintf("/subscriptions/%s/resourceGroups/%s", c.env.SubscriptionID(), "aro-"+clusterName),
				Domain:               strings.ToLower(clusterName),
				ResourceGroupID:      fmt.Sprintf("/subscriptions/%s/resourceGroups/%s", c.env.SubscriptionID(), "aro-"+clusterName),
				FipsValidatedModules: api.FipsValidatedModulesEnabled,
			},
			ServicePrincipalProfile: api.ServicePrincipalProfile{
				ClientID: clientID,

@@ -453,19 +454,19 @@ func (c *Cluster) createCluster(ctx context.Context, vnetResourceGroup, clusterN
		oc.Properties.WorkerProfiles[0].VMSize = api.VMSizeStandardD2sV3
	}

	ext := api.APIs[v20210901preview.APIVersion].OpenShiftClusterConverter().ToExternal(&oc)
	ext := api.APIs[v20220401.APIVersion].OpenShiftClusterConverter().ToExternal(&oc)
	data, err := json.Marshal(ext)
	if err != nil {
		return err
	}

	ocExt := mgmtredhatopenshift20210901preview.OpenShiftCluster{}
	ocExt := mgmtredhatopenshift20220401.OpenShiftCluster{}
	err = json.Unmarshal(data, &ocExt)
	if err != nil {
		return err
	}

	return c.openshiftclustersv20210901preview.CreateOrUpdateAndWait(ctx, vnetResourceGroup, clusterName, ocExt)
	return c.openshiftclustersv20220401.CreateOrUpdateAndWait(ctx, vnetResourceGroup, clusterName, ocExt)
}

func (c *Cluster) registerSubscription(ctx context.Context) error {

@@ -478,10 +479,6 @@ func (c *Cluster) registerSubscription(ctx context.Context) error {
					Name:  "Microsoft.RedHatOpenShift/RedHatEngineering",
					State: "Registered",
				},
				{
					Name:  "Microsoft.RedHatOpenShift/FIPS",
					State: "Registered",
				},
			},
		},
	})
@@ -75,14 +75,17 @@ func (ef *workerProfilesEnricherTask) FetchData(ctx context.Context, callbacks c
			continue
		}

		obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(machineset.Spec.Template.Spec.ProviderSpec.Value.Raw, nil, nil)
		o, _, err := scheme.Codecs.UniversalDeserializer().Decode(machineset.Spec.Template.Spec.ProviderSpec.Value.Raw, nil, nil)
		if err != nil {
			ef.log.Info(err)
			continue
		}
		machineProviderSpec, ok := obj.(*machinev1beta1.AzureMachineProviderSpec)

		machineProviderSpec, ok := o.(*machinev1beta1.AzureMachineProviderSpec)
		if !ok {
			ef.log.Infof("failed to read provider spec from the machine set %q: %T", machineset.Name, obj)
			// This should never happen: codecs uses scheme that has only one registered type
			// and if something is wrong with the provider spec - decoding should fail
			ef.log.Infof("failed to read provider spec from the machine set %q: %T", machineset.Name, o)
			continue
		}

File diff suppressed because one or more lines are too long
@@ -1 +1 @@
4.9.28
4.10.15
@@ -8,15 +8,24 @@ import (
 	"testing"
 	"time"

+	"github.com/Azure/go-autorest/autorest/to"
+	mcv1 "github.com/openshift/machine-config-operator/pkg/apis/machineconfiguration.openshift.io/v1"
+	appsv1 "k8s.io/api/apps/v1"
 	corev1 "k8s.io/api/core/v1"
+	extensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
+	extensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	kruntime "k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+	"k8s.io/apimachinery/pkg/util/intstr"

+	arov1alpha1 "github.com/Azure/ARO-RP/pkg/operator/apis/aro.openshift.io/v1alpha1"
 	"github.com/Azure/ARO-RP/pkg/util/cmp"
 )

 func TestMerge(t *testing.T) {
 	serviceInternalTrafficPolicy := corev1.ServiceInternalTrafficPolicyCluster

 	for _, tt := range []struct {
 		name string
 		old  kruntime.Object
@@ -249,6 +258,217 @@ func TestMerge(t *testing.T) {
 			},
 			wantEmptyDiff: true,
 		},
+		{
+			name: "DaemonSet changes",
+			old: &appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Annotations: map[string]string{
+						"deprecated.daemonset.template.generation": "1",
+					},
+				},
+				Status: appsv1.DaemonSetStatus{
+					CurrentNumberScheduled: 5,
+					NumberReady:            5,
+					ObservedGeneration:     1,
+				},
+			},
+			new: &appsv1.DaemonSet{},
+			want: &appsv1.DaemonSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Annotations: map[string]string{
+						"deprecated.daemonset.template.generation": "1",
+					},
+				},
+				Status: appsv1.DaemonSetStatus{
+					CurrentNumberScheduled: 5,
+					NumberReady:            5,
+					ObservedGeneration:     1,
+				},
+				Spec: appsv1.DaemonSetSpec{
+					Template: corev1.PodTemplateSpec{
+						Spec: corev1.PodSpec{
+							RestartPolicy:                 "Always",
+							TerminationGracePeriodSeconds: to.Int64Ptr(corev1.DefaultTerminationGracePeriodSeconds),
+							DNSPolicy:                     "ClusterFirst",
+							SecurityContext:               &corev1.PodSecurityContext{},
+							SchedulerName:                 "default-scheduler",
+						},
+					},
+					UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
+						Type: appsv1.RollingUpdateDaemonSetStrategyType,
+						RollingUpdate: &appsv1.RollingUpdateDaemonSet{
+							MaxUnavailable: &intstr.IntOrString{IntVal: 1},
+							MaxSurge:       &intstr.IntOrString{IntVal: 0},
+						},
+					},
+					RevisionHistoryLimit: to.Int32Ptr(10),
+				},
+			},
+			wantChanged: true,
+		},
+		{
+			name: "Deployment changes",
+			old: &appsv1.Deployment{
+				ObjectMeta: metav1.ObjectMeta{
+					Annotations: map[string]string{
+						"deployment.kubernetes.io/revision": "2",
+					},
+				},
+				Spec: appsv1.DeploymentSpec{
+					Template: corev1.PodTemplateSpec{
+						Spec: corev1.PodSpec{
+							DeprecatedServiceAccount: "openshift-apiserver-sa",
+						},
+					},
+				},
+				Status: appsv1.DeploymentStatus{
+					AvailableReplicas: 3,
+					ReadyReplicas:     3,
+					Replicas:          3,
+					UpdatedReplicas:   3,
+				},
+			},
+			new: &appsv1.Deployment{},
+			want: &appsv1.Deployment{
+				ObjectMeta: metav1.ObjectMeta{
+					Annotations: map[string]string{
+						"deployment.kubernetes.io/revision": "2",
+					},
+				},
+				Status: appsv1.DeploymentStatus{
+					AvailableReplicas: 3,
+					ReadyReplicas:     3,
+					Replicas:          3,
+					UpdatedReplicas:   3,
+				},
+				Spec: appsv1.DeploymentSpec{
+					Replicas: to.Int32Ptr(1),
+					Template: corev1.PodTemplateSpec{
+						Spec: corev1.PodSpec{
+							RestartPolicy:                 "Always",
+							TerminationGracePeriodSeconds: to.Int64Ptr(corev1.DefaultTerminationGracePeriodSeconds),
+							DNSPolicy:                     "ClusterFirst",
+							SecurityContext:               &corev1.PodSecurityContext{},
+							SchedulerName:                 "default-scheduler",
+							DeprecatedServiceAccount:      "openshift-apiserver-sa",
+						},
+					},
+					Strategy: appsv1.DeploymentStrategy{
+						Type: appsv1.RollingUpdateDeploymentStrategyType,
+						RollingUpdate: &appsv1.RollingUpdateDeployment{
+							MaxUnavailable: &intstr.IntOrString{
+								Type:   1,
+								StrVal: "25%",
+							},
+							MaxSurge: &intstr.IntOrString{
+								Type:   1,
+								StrVal: "25%",
+							},
+						},
+					},
+					RevisionHistoryLimit:    to.Int32Ptr(10),
+					ProgressDeadlineSeconds: to.Int32Ptr(600),
+				},
+			},
+			wantChanged: true,
+		},
+		{
+			name: "KubeletConfig no changes",
+			old: &mcv1.KubeletConfig{
+				Status: mcv1.KubeletConfigStatus{
+					Conditions: []mcv1.KubeletConfigCondition{
+						{
+							Message: "Success",
+							Status:  "True",
+							Type:    "Success",
+						},
+					},
+				},
+			},
+			new: &mcv1.KubeletConfig{},
+			want: &mcv1.KubeletConfig{
+				Status: mcv1.KubeletConfigStatus{
+					Conditions: []mcv1.KubeletConfigCondition{
+						{
+							Message: "Success",
+							Status:  "True",
+							Type:    "Success",
+						},
+					},
+				},
+			},
+			wantEmptyDiff: true,
+		},
+		{
+			name: "Cluster no changes",
+			old: &arov1alpha1.Cluster{
+				Status: arov1alpha1.ClusterStatus{
+					OperatorVersion: "8b66c40",
+				},
+			},
+			new: &arov1alpha1.Cluster{},
+			want: &arov1alpha1.Cluster{
+				Status: arov1alpha1.ClusterStatus{
+					OperatorVersion: "8b66c40",
+				},
+			},
+			wantEmptyDiff: true,
+		},
+		{
+			name: "CustomResourceDefinition Betav1 no changes",
+			old: &extensionsv1beta1.CustomResourceDefinition{
+				Status: extensionsv1beta1.CustomResourceDefinitionStatus{
+					Conditions: []extensionsv1beta1.CustomResourceDefinitionCondition{
+						{
+							Message: "no conflicts found",
+							Reason:  "NoConflicts",
+						},
+					},
+				},
+			},
+			new: &extensionsv1beta1.CustomResourceDefinition{},
+			want: &extensionsv1beta1.CustomResourceDefinition{
+				Status: extensionsv1beta1.CustomResourceDefinitionStatus{
+					Conditions: []extensionsv1beta1.CustomResourceDefinitionCondition{
+						{
+							Message: "no conflicts found",
+							Reason:  "NoConflicts",
+						},
+					},
+				},
+			},
+			wantEmptyDiff: true,
+		},
+		{
+			name: "CustomResourceDefinition changes",
+			old: &extensionsv1.CustomResourceDefinition{
+				Status: extensionsv1.CustomResourceDefinitionStatus{
+					Conditions: []extensionsv1.CustomResourceDefinitionCondition{
+						{
+							Message: "no conflicts found",
+							Reason:  "NoConflicts",
+						},
+					},
+				},
+			},
+			new: &extensionsv1.CustomResourceDefinition{},
+			want: &extensionsv1.CustomResourceDefinition{
+				Spec: extensionsv1.CustomResourceDefinitionSpec{
+					Conversion: &extensionsv1.CustomResourceConversion{
+						Strategy: "None",
+					},
+				},
+				Status: extensionsv1.CustomResourceDefinitionStatus{
+					Conditions: []extensionsv1.CustomResourceDefinitionCondition{
+						{
+							Message: "no conflicts found",
+							Reason:  "NoConflicts",
+						},
+					},
+				},
+			},
+			wantChanged: true,
+		},
 		{
 			name: "Secret changes, not logged",
 			old: &corev1.Secret{
@@ -288,3 +508,76 @@ func TestMerge(t *testing.T) {
 		})
 	}
 }
+
+func TestMakeURLSegments(t *testing.T) {
+	for _, tt := range []struct {
+		gvr         *schema.GroupVersionResource
+		namespace   string
+		uname, name string
+		url         []string
+		want        []string
+	}{
+		{
+			uname: "Group is empty",
+			gvr: &schema.GroupVersionResource{
+				Group:    "",
+				Version:  "4.10",
+				Resource: "test-resource",
+			},
+			namespace: "openshift",
+			name:      "test-name-1",
+			want:      []string{"api", "4.10", "namespaces", "openshift", "test-resource", "test-name-1"},
+		},
+		{
+			uname: "Group is not empty",
+			gvr: &schema.GroupVersionResource{
+				Group:    "test-group",
+				Version:  "4.10",
+				Resource: "test-resource",
+			},
+			namespace: "openshift-apiserver",
+			name:      "test-name-2",
+			want:      []string{"apis", "test-group", "4.10", "namespaces", "openshift-apiserver", "test-resource", "test-name-2"},
+		},
+		{
+			uname: "Namespace is empty",
+			gvr: &schema.GroupVersionResource{
+				Group:    "test-group",
+				Version:  "4.10",
+				Resource: "test-resource",
+			},
+			namespace: "",
+			name:      "test-name-3",
+			want:      []string{"apis", "test-group", "4.10", "test-resource", "test-name-3"},
+		},
+		{
+			uname: "Namespace is not empty",
+			gvr: &schema.GroupVersionResource{
+				Group:    "test-group",
+				Version:  "4.10",
+				Resource: "test-resource",
+			},
+			namespace: "openshift-sdn",
+			name:      "test-name-3",
+			want:      []string{"apis", "test-group", "4.10", "namespaces", "openshift-sdn", "test-resource", "test-name-3"},
+		},
+		{
+			uname: "Name is empty",
+			gvr: &schema.GroupVersionResource{
+				Group:    "test-group",
+				Version:  "4.10",
+				Resource: "test-resource",
+			},
+			namespace: "openshift-ns",
+			name:      "",
+			want:      []string{"apis", "test-group", "4.10", "namespaces", "openshift-ns", "test-resource"},
+		},
+	} {
+		t.Run(tt.uname, func(t *testing.T) {
+			got := makeURLSegments(tt.gvr, tt.namespace, tt.name)
+			if !reflect.DeepEqual(got, tt.want) {
+				t.Error(cmp.Diff(got, tt.want))
+			}
+		})
+	}
+}
@@ -67,7 +67,6 @@ func IsOpenShiftNamespace(ns string) bool {
 		"openshift-oauth-apiserver":            {},
 		"openshift-openstack-infra":            {},
 		"openshift-operators":                  {},
-		"openshift-operator-lifecycle-manager": {},
 		"openshift-ovirt-infra":                {},
 		"openshift-sdn":                        {},
 		"openshift-service-ca":                 {},
@@ -58,7 +58,7 @@ func TestIsOpenShiftNamespace(t *testing.T) {
 		},
 		{
 			namespace: "openshift-operator-lifecycle-manager",
-			want:      true,
+			want:      false,
 		},
 		{
 			namespace: "openshift-cluster-version",
@@ -27,14 +27,18 @@ var GitCommit = "unknown"

 // InstallStream describes stream we are defaulting to for all new clusters
 var InstallStream = &Stream{
-	Version:  NewVersion(4, 9, 28),
-	PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:4084d94969b186e20189649b5affba7da59f7d1943e4e5bc7ef78b981eafb7a8",
+	Version:  NewVersion(4, 10, 15),
+	PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:ddcb70ce04a01ce487c0f4ad769e9e36a10c8c832a34307c1b1eb8e03a5b7ddb",
 }

 // UpgradeStreams describes list of streams we support for upgrades
 var (
 	UpgradeStreams = []*Stream{
 		InstallStream,
+		{
+			Version:  NewVersion(4, 9, 28),
+			PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:4084d94969b186e20189649b5affba7da59f7d1943e4e5bc7ef78b981eafb7a8",
+		},
 		{
 			Version:  NewVersion(4, 8, 18),
 			PullSpec: "quay.io/openshift-release-dev/ocp-release@sha256:321aae3d3748c589bc2011062cee9fd14e106f258807dc2d84ced3f7461160ea",
@@ -64,16 +68,18 @@ func FluentbitImage(acrDomain string) string {
 }

 // MdmImage contains the location of the MDM container image
+// https://eng.ms/docs/products/geneva/collect/references/linuxcontainers
 func MdmImage(acrDomain string) string {
-	return acrDomain + "/genevamdm:master_20220419.1"
+	return acrDomain + "/genevamdm:master_20220522.1"
 }

 // MdsdImage contains the location of the MDSD container image
+// see https://eng.ms/docs/products/geneva/collect/references/linuxcontainers
 func MdsdImage(acrDomain string) string {
-	return acrDomain + "/genevamdsd:master_20220419.1"
+	return acrDomain + "/genevamdsd:master_20220522.1"
 }

 // MUOImage contains the location of the Managed Upgrade Operator container image
 func MUOImage(acrDomain string) string {
-	return acrDomain + "/managed-upgrade-operator:aro-b1"
+	return acrDomain + "/managed-upgrade-operator:aro-b4"
 }
Some files were not shown because too many files have changed in this diff.