Mirror of https://github.com/Azure/aks-engine.git
Use registry.k8s.io for components (#5071)
This commit is contained in:
Parent
4abc935412
Commit
fb9d128a1f
@@ -5,7 +5,7 @@
 The existing AKS Engine Kubernetes component container image configuration surface area presents obstacles in the way of:
 
 1. quickly testing/validating specific container images across the set of Kubernetes components in a working cluster; and
-2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated k8s.gcr.io container images.
+2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated registry.k8s.io container images.
 
 ## Proximate Problem Statements
 
@@ -14,15 +14,15 @@ The existing AKS Engine Kubernetes component container image configuration surfa
    - https://github.com/Azure/aks-engine/issues/2378
 2. At present, the "blessed" component configuration image URIs are maintained via a concatenation of two properties:
    - A "base URI" property (`KubernetesImageBase` is the property that has the widest impact across the set of component images)
-     - e.g., `"k8s.gcr.io/"`
+     - e.g., `"registry.k8s.io/"`
    - A hardcoded string that represents the right-most concatenation substring of the fully qualified image reference URI
      - e.g., `"kube-proxy:v1.16.1"`
 
-In summary, in order to render `"k8s.gcr.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"k8s.gcr.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
+In summary, in order to render `"registry.k8s.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"registry.k8s.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
 
 In practice, this means that the `KubernetesImageBase` property is effectively a "Kubernetes component image registry mirror base URI" property, and in fact this is exactly how that property is leveraged, to redirect container image references to proximate origin URIs when building clusters in non-public cloud environments (e.g., China Cloud, Azure Stack).
 
-To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a k8s.gcr.io container registry mirror. This presents a problem with respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
+To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a registry.k8s.io container registry mirror. This presents a problem with respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
 
 # A Proposed Solution
@@ -98,9 +98,9 @@ In summary, we will introduce a new "components" configuration interface (a sibl
 
 ~
 
-Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the k8s.gcr.io container registry origin, or a mirror that follows its specification".
+Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the registry.k8s.io container registry origin, or a mirror that follows its specification".
 
-As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "k8s.gcr.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of k8s.gcr.io).
+As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "registry.k8s.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of registry.k8s.io).
 
 What we can do is add a "mirror type" (or "mirror flavor", if you prefer) configuration context to the existing `KubernetesImageBase` property, allowing us to maintain easy backwards-compatibility (by keeping that property valid), and then adapt the underlying hardcoded "image URI substring" values to be sensitive to that context.
 
@@ -111,12 +111,12 @@ Concretely, we could add a new sibling (of KubernetesImageBase) configuration pr
 
 The value of that property tells the template generation code flows to generate container image reference URI strings according to one of the known specifications supported by AKS Engine:
 
-- k8s.gcr.io
-  - e.g., `"k8s.gcr.io/kube-addon-manager-amd64:v9.0.2"`
+- registry.k8s.io
+  - e.g., `"registry.k8s.io/kube-addon-manager-amd64:v9.0.2"`
 - mcr.microsoft.com/oss/kubernetes
   - e.g., `"mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.0.2"`
 
-The above solution would support a per-environment migration from the current, known-working k8s.gcr.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc.). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
+The above solution would support a per-environment migration from the current, known-working registry.k8s.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc.). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
 
 # A Proposed Implementation
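Annotator's note: a rough sketch of the "mirror type" branching described above. The constant `common.KubernetesImageBaseTypeGCR` appears later in this diff; the MCR constant name and the per-type suffixes here are assumptions for illustration only, taken from the two example URIs enumerated in the hunk.

```go
package main

import "fmt"

// Illustrative mirror-type values; the GCR type mirrors
// common.KubernetesImageBaseTypeGCR seen in this diff, the MCR value is assumed.
const (
	imageBaseTypeGCR = "gcr"
	imageBaseTypeMCR = "mcr"
)

// componentImage picks the right-hand image substring according to the
// mirror specification in effect, then prepends the configured base URI.
func componentImage(baseURI, baseType string) string {
	switch baseType {
	case imageBaseTypeMCR:
		// MCR spec, e.g. "mcr.microsoft.com/oss/kubernetes/" + "kube-addon-manager:v9.0.2"
		return baseURI + "kube-addon-manager:v9.0.2"
	default:
		// GCR spec, e.g. "registry.k8s.io/" + "kube-addon-manager-amd64:v9.0.2"
		return baseURI + "kube-addon-manager-amd64:v9.0.2"
	}
}

func main() {
	fmt.Println(componentImage("registry.k8s.io/", imageBaseTypeGCR))
	fmt.Println(componentImage("mcr.microsoft.com/oss/kubernetes/", imageBaseTypeMCR))
}
```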
@@ -110,7 +110,7 @@ So, assuming we've waited 30 minutes or so, let's update the controller-manager
 
 ```
 azureuser@k8s-master-31453872-0:~$ grep 1.15.7 /opt/azure/kube-controller-manager.yaml
-    image: k8s.gcr.io/hyperkube-amd64:v1.15.7
+    image: registry.k8s.io/hyperkube-amd64:v1.15.7
 ```
 
 Let's update the spec on all control plane VMs:
@@ -124,7 +124,7 @@ Authorized uses only. All activity may be monitored and reported.
 
 Authorized uses only. All activity may be monitored and reported.
 azureuser@k8s-master-31453872-0:~$ grep 1.15.12 /opt/azure/kube-controller-manager.yaml
-    image: k8s.gcr.io/hyperkube-amd64:v1.15.12
+    image: registry.k8s.io/hyperkube-amd64:v1.15.12
 ```
 
 (Again, if you're using `cloud-controller-manager`, substitute the correct `cloud-controller-manager.yaml` file name.)
@@ -135,7 +135,7 @@ Now, if we're running the `cluster-autoscaler` addon on this cluster let's make
 
 ```
 azureuser@k8s-master-31453872-0:~$ grep 'cluster-autoscaler:v' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml
-        - image: k8s.gcr.io/cluster-autoscaler:v1.15.3
+        - image: registry.k8s.io/cluster-autoscaler:v1.15.3
 azureuser@k8s-master-31453872-0:~$ for control_plane_vm in $(kubectl get nodes | grep k8s-master | awk '{print $1}'); do ssh $control_plane_vm "sudo sed -i 's|v1.15.3|v1.15.6|g' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml"; done
 
 Authorized uses only. All activity may be monitored and reported.
@@ -144,7 +144,7 @@ Authorized uses only. All activity may be monitored and reported.
 
 Authorized uses only. All activity may be monitored and reported.
 azureuser@k8s-master-31453872-0:~$ grep 'cluster-autoscaler:v' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml
-        - image: k8s.gcr.io/cluster-autoscaler:v1.15.6
+        - image: registry.k8s.io/cluster-autoscaler:v1.15.6
 ```
 
 The above validated that we *weren't* using the latest `cluster-autoscaler`, and so we changed the addon spec on each control plane VM in the `/etc/kubernetes/addons/` directory so that we would load 1.15.6 instead.
@@ -337,7 +337,7 @@ spec:
         supplementalGroups: [ 65534 ]
         fsGroup: 65534
       containers:
-        - image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
+        - image: registry.k8s.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
           name: autoscaler
           command:
             - /cluster-proportional-autoscaler
@@ -69,7 +69,7 @@ $ aks-engine get-versions
 | gcLowThreshold | no | Sets the --image-gc-low-threshold value on the kubelet configuration. Default is 80. [See kubelet Garbage Collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/) |
 | kubeletConfig | no | Configure various runtime configuration for kubelet. See `kubeletConfig` [below](#feat-kubelet-config) |
 | kubeReservedCgroup | no | The name of a systemd slice to create for containment of both kubelet and the container runtime. When this value is a non-empty string, a file will be dropped at `/etc/systemd/system/$KUBE_RESERVED_CGROUP.slice` creating a systemd slice. Both kubelet and docker will run in this slice. This should not point to an existing systemd slice. If this value is unspecified or specified as the empty string, kubelet and the container runtime will run in the system slice by default. |
-| kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `k8s.gcr.io/` |
+| kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `registry.k8s.io/` |
 | loadBalancerSku | no | Sku of Load Balancer and Public IP. Candidate values are: `basic` and `standard`. If not set, it will default to "standard". NOTE: Because VMs behind a standard SKU load balancer will not be able to access the internet without an outbound rule configured with at least one frontend IP, AKS Engine creates a Load Balancer with an outbound rule and with agent nodes added to the backend pool during cluster creation, as described in the [Outbound NAT for internal Standard Load Balancer scenarios doc](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-rules-overview#outbound-nat-for-internal-standard-load-balancer-scenarios) |
 | loadBalancerOutboundIPs | no | Number of outbound IP addresses (e.g., 3) to use in Standard LoadBalancer configuration. If not set, AKS Engine will configure a single outbound IP address. You may want more than one outbound IP address if you are running a large cluster that is processing lots of connections. See [here](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#multifesnat) for more documentation about how adding more outbound IP addresses can increase the number of SNAT ports available for use by the Standard Load Balancer in your cluster. Note: this value is only configurable at cluster creation time, it can not be changed using `aks-engine upgrade`.|
 | networkPlugin | no | Specifies the network plugin implementation for the cluster. Valid values are:<br>`"azure"` (default), which provides an Azure native networking experience <br>`"kubenet"` for k8s software networking implementation. <br> `"cilium"` for using the default Cilium CNI IPAM (requires the `"cilium"` networkPolicy as well)<br> `"antrea"` for using the Antrea network plugin (requires the `"antrea"` networkPolicy as well) |
@@ -69,7 +69,7 @@ To test node-problem-detector in a running cluster, you can inject messages into
 | Name           | Required | Description                       | Default Value                              |
 | -------------- | -------- | --------------------------------- | ------------------------------------------ |
 | name           | no       | container name                    | "node-problem-detector"                    |
-| image          | no       | image                             | "k8s.gcr.io/node-problem-detector:v0.8.1"  |
+| image          | no       | image                             | "registry.k8s.io/node-problem-detector:v0.8.1" |
 | cpuRequests    | no       | cpu requests for the container    | "20m"                                      |
 | memoryRequests | no       | memory requests for the container | "20Mi"                                     |
 | cpuLimits      | no       | cpu limits for the container      | "200m"                                     |
@@ -194,7 +194,7 @@ kubeStateMetrics:
   ## kube-state-metrics container image
   ##
   image:
-    repository: k8s.gcr.io/kube-state-metrics
+    repository: registry.k8s.io/kube-state-metrics
     tag: v1.2.0
     pullPolicy: IfNotPresent
 
@@ -14,7 +14,7 @@ type AzureEnvironmentSpecConfig struct {
 // KubernetesSpecConfig is the kubernetes container images used.
 type KubernetesSpecConfig struct {
 	AzureTelemetryPID     string `json:"azureTelemetryPID,omitempty"`
-	// KubernetesImageBase defines a base image URL substring to source images that originate from upstream k8s.gcr.io
+	// KubernetesImageBase defines a base image URL substring to source images that originate from upstream registry.k8s.io
 	KubernetesImageBase   string `json:"kubernetesImageBase,omitempty"`
 	TillerImageBase       string `json:"tillerImageBase,omitempty"`
 	ACIConnectorImageBase string `json:"aciConnectorImageBase,omitempty"` // Deprecated
@@ -66,7 +66,7 @@ const (
 var (
 	// DefaultKubernetesSpecConfig is the default Docker image source of Kubernetes
 	DefaultKubernetesSpecConfig = KubernetesSpecConfig{
-		KubernetesImageBase: "k8s.gcr.io/",
+		KubernetesImageBase: "registry.k8s.io/",
 		TillerImageBase:     "mcr.microsoft.com/",
 		NVIDIAImageBase:     "mcr.microsoft.com/",
 		CalicoImageBase:     "mcr.microsoft.com/oss/calico/",
@@ -1198,7 +1198,7 @@ func TestKubernetesImageBase(t *testing.T) {
 	mockCS.Location = "westus2"
 	cloudSpecConfig = mockCS.GetCloudSpecConfig()
 	properties = mockCS.Properties
-	properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "k8s.gcr.io/"
+	properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "registry.k8s.io/"
 	properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBaseType = ""
 	mockCS.setOrchestratorDefaults(true, false)
 	if properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase != cloudSpecConfig.KubernetesSpecConfig.MCRKubernetesImageBase {
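Annotator's note: for readers following the test above, here is a hedged sketch of the defaulting behavior it appears to exercise — when `KubernetesImageBaseType` is empty, the configured base is replaced with the environment's MCR base. The function and parameter names are illustrative, not the aks-engine API.

```go
package main

import "fmt"

// setImageBaseDefaults sketches the defaulting the test exercises: with no
// image base type set, fall back to the cloud environment's MCR base URI.
func setImageBaseDefaults(imageBase, imageBaseType, mcrBase string) string {
	if imageBaseType == "" {
		return mcrBase
	}
	return imageBase
}

func main() {
	got := setImageBaseDefaults("registry.k8s.io/", "", "mcr.microsoft.com/oss/kubernetes/")
	fmt.Println(got) // prints the MCR base, matching the test's expectation
}
```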
@@ -33,7 +33,7 @@ const (
 	aadPodIdentityMICImageReference   string = "mcr.microsoft.com/k8s/aad-pod-identity/mic:1.6.1"
 	azurePolicyImageReference         string = "mcr.microsoft.com/azure-policy/policy-kubernetes-addon-prod:prod_20201023.1"
 	gatekeeperImageReference          string = "mcr.microsoft.com/oss/open-policy-agent/gatekeeper:v3.2.3"
-	nodeProblemDetectorImageReference string = "k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.4"
+	nodeProblemDetectorImageReference string = "registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.4"
 	csiProvisionerImageReference      string = "oss/kubernetes-csi/csi-provisioner:v3.0.0"
 	csiAttacherImageReference         string = "oss/kubernetes-csi/csi-attacher:v3.3.0"
 	csiLivenessProbeImageReference    string = "oss/kubernetes-csi/livenessprobe:v2.5.0"
@@ -1742,7 +1742,7 @@ func (o *OrchestratorProfile) IsHostsConfigAgentEnabled() bool {
 	return o.KubernetesConfig != nil && o.KubernetesConfig.PrivateCluster != nil && to.Bool(o.KubernetesConfig.PrivateCluster.EnableHostsConfigAgent)
 }
 
-// GetPodInfraContainerSpec returns the sandbox image as a string (ex: k8s.gcr.io/pause-amd64:3.1)
+// GetPodInfraContainerSpec returns the sandbox image as a string (ex: registry.k8s.io/pause-amd64:3.1)
 func (o *OrchestratorProfile) GetPodInfraContainerSpec() string {
 	return o.KubernetesConfig.MCRKubernetesImageBase + GetK8sComponentsByVersionMap(o.KubernetesConfig)[o.OrchestratorVersion][common.PauseComponentName]
 }
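Annotator's note: the method above renders the sandbox image the same way — MCR base URI plus a version-mapped suffix — and that rendered spec is what surfaces as kubelet's `--pod-infra-container-image` flag in the kubeletConfig hunks later in this diff. A minimal sketch, with illustrative values standing in for the version map:

```go
package main

import "fmt"

// getPodInfraContainerSpec stands in for GetPodInfraContainerSpec above:
// the MCR base URI plus the pause-image suffix looked up for the cluster's
// Kubernetes version. The literal values here are assumptions for the sketch.
func getPodInfraContainerSpec() string {
	mcrKubernetesImageBase := "mcr.microsoft.com/oss/kubernetes/"
	pauseSuffix := "pause:3.1" // per-version lookup in aks-engine
	return mcrKubernetesImageBase + pauseSuffix
}

func main() {
	// The rendered spec feeds kubelet's sandbox-image flag.
	fmt.Printf("--pod-infra-container-image=%s\n", getPodInfraContainerSpec())
}
```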
@@ -2207,7 +2207,7 @@ func (f *FeatureFlags) IsFeatureEnabled(feature string) bool {
 }
 
 // GetCloudSpecConfig returns the Kubernetes container images URL configurations based on the deploy target environment.
-// for example: if the target is the public azure, then the default container image url should be k8s.gcr.io/...
+// for example: if the target is the public azure, then the default container image url should be registry.k8s.io/...
 // if the target is azure china, then the default container image should be mirror.azure.cn:5000/google_container/...
 func (cs *ContainerService) GetCloudSpecConfig() AzureEnvironmentSpecConfig {
 	targetEnv := helpers.GetTargetEnv(cs.Location, cs.Properties.GetCustomCloudName())
@@ -4173,10 +4173,10 @@ func TestGetKubernetesVersion(t *testing.T) {
 
 func TestGetKubernetesHyperkubeSpec(t *testing.T) {
 	mock1dot13dot11 := getMockAPIProperties("1.13.11")
-	mock1dot13dot11.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "k8s.gcr.io/"
+	mock1dot13dot11.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "registry.k8s.io/"
 	mock1dot13dot11.OrchestratorProfile.KubernetesConfig.KubernetesImageBaseType = common.KubernetesImageBaseTypeGCR
 	mock1dot16dot3 := getMockAPIProperties("1.16.0")
-	mock1dot16dot3.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "k8s.gcr.io/"
+	mock1dot16dot3.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "registry.k8s.io/"
 	mock1dot16dot3.OrchestratorProfile.KubernetesConfig.KubernetesImageBaseType = common.KubernetesImageBaseTypeGCR
 	mock1dot15dot4azs := GetMockPropertiesWithCustomCloudProfile("AzureStackCloud", true, true, true)
 	mock1dot15dot4azs.OrchestratorProfile = &OrchestratorProfile{
@@ -4188,7 +4188,7 @@ func TestGetKubernetesHyperkubeSpec(t *testing.T) {
 		},
 	}
 	mockcustomproperties := getMockAPIProperties("1.16.0")
-	mockcustomproperties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "k8s.gcr.io/"
+	mockcustomproperties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "registry.k8s.io/"
 	mockcustomproperties.OrchestratorProfile.KubernetesConfig.CustomHyperkubeImage = "mcr.io/my-custom-image"
 
 	tests := []struct {
@@ -4199,12 +4199,12 @@ func TestGetKubernetesHyperkubeSpec(t *testing.T) {
 		{
 			name:                  "1.13.11 Azure public cloud",
 			properties:            &mock1dot13dot11,
-			expectedHyperkubeSpec: "k8s.gcr.io/hyperkube-amd64:v1.13.11",
+			expectedHyperkubeSpec: "registry.k8s.io/hyperkube-amd64:v1.13.11",
 		},
 		{
 			name:                  "1.16.0 Azure public cloud",
 			properties:            &mock1dot16dot3,
-			expectedHyperkubeSpec: "k8s.gcr.io/hyperkube-amd64:v1.16.0",
+			expectedHyperkubeSpec: "registry.k8s.io/hyperkube-amd64:v1.16.0",
 		},
 		{
 			name: "1.15.4 Azure Stack",
File diff suppressed because one or more lines are too long
@@ -152,7 +152,7 @@
     },
     "kubernetesAddonManagerSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/kube-addon-manager-amd64:v8.6"
+      "value": "registry.k8s.io/kube-addon-manager-amd64:v8.6"
     },
     "kubernetesCcmImageSpec": {
       "type": "String",
@@ -160,19 +160,19 @@
     },
     "kubernetesDNSMasqSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.15.0"
+      "value": "registry.k8s.io/k8s-dns-dnsmasq-nanny-amd64:1.15.0"
     },
     "kubernetesDNSSidecarSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10"
+      "value": "registry.k8s.io/k8s-dns-sidecar-amd64:1.14.10"
     },
     "kubernetesHyperkubeSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/hyperkube-amd64:v1.11.9"
+      "value": "registry.k8s.io/hyperkube-amd64:v1.11.9"
     },
     "kubernetesKubeDNSSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.15.0"
+      "value": "registry.k8s.io/k8s-dns-kube-dns-amd64:1.15.0"
     },
     "kubernetesKubeletClusterDomain": {
       "type": "String",
@@ -180,7 +180,7 @@
     },
     "kubernetesPodInfraContainerSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/pause-amd64:3.1"
+      "value": "registry.k8s.io/pause-amd64:3.1"
     },
     "kuberneteselbsvcname": {
       "type": "String",
@@ -6,8 +6,8 @@
   "details": [
     {
       "code": "Conflict",
-      "message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMExtensionProvisioningError\",\r\n \"message\": \"VM has reported a failure when processing extension 'cse-master-0'. Error message: \\\"Enable failed: failed to execute command: command terminated with exit status=30\\n[stdout]\\n\\n[stderr]\\nConnection to k8s.gcr.io 443 port [tcp/https] succeeded!\\nConnection to gcr.io 443 port [tcp/https] succeeded!\\nConnection to docker.io 443 port [tcp/https] succeeded!\\n\\\".\"\r\n }\r\n ]\r\n }\r\n}"
+      "message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMExtensionProvisioningError\",\r\n \"message\": \"VM has reported a failure when processing extension 'cse-master-0'. Error message: \\\"Enable failed: failed to execute command: command terminated with exit status=30\\n[stdout]\\n\\n[stderr]\\nConnection to registry.k8s.io 443 port [tcp/https] succeeded!\\nConnection to gcr.io 443 port [tcp/https] succeeded!\\nConnection to docker.io 443 port [tcp/https] succeeded!\\n\\\".\"\r\n }\r\n ]\r\n }\r\n}"
     }
   ]
 }
 }
 }
File diff suppressed because one or more lines are too long
@@ -152,7 +152,7 @@
     },
     "kubernetesAddonManagerSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/kube-addon-manager-amd64:v8.6"
+      "value": "registry.k8s.io/kube-addon-manager-amd64:v8.6"
     },
     "kubernetesCcmImageSpec": {
       "type": "String",
@@ -160,19 +160,19 @@
     },
     "kubernetesDNSMasqSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.15.0"
+      "value": "registry.k8s.io/k8s-dns-dnsmasq-nanny-amd64:1.15.0"
     },
     "kubernetesDNSSidecarSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10"
+      "value": "registry.k8s.io/k8s-dns-sidecar-amd64:1.14.10"
     },
     "kubernetesHyperkubeSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/hyperkube-amd64:v1.11.9"
+      "value": "registry.k8s.io/hyperkube-amd64:v1.11.9"
     },
     "kubernetesKubeDNSSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.15.0"
+      "value": "registry.k8s.io/k8s-dns-kube-dns-amd64:1.15.0"
     },
     "kubernetesKubeletClusterDomain": {
       "type": "String",
@@ -180,7 +180,7 @@
     },
     "kubernetesPodInfraContainerSpec": {
       "type": "String",
-      "value": "k8s.gcr.io/pause-amd64:3.1"
+      "value": "registry.k8s.io/pause-amd64:3.1"
     },
     "kuberneteselbsvcname": {
       "type": "String",
@@ -6,8 +6,8 @@
   "details": [
     {
       "code": "Conflict",
-      "message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMExtensionProvisioningError\",\r\n \"message\": \"VM has reported a failure when processing extension 'cse-master-0'. Error message: \\\"Enable failed: failed to execute command: command terminated with exit status=30\\n[stdout]\\n\\n[stderr]\\nConnection to k8s.gcr.io 443 port [tcp/https] succeeded!\\nConnection to gcr.io 443 port [tcp/https] succeeded!\\nConnection to docker.io 443 port [tcp/https] succeeded!\\n\\\".\"\r\n }\r\n ]\r\n }\r\n}"
+      "message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"VMExtensionProvisioningError\",\r\n \"message\": \"VM has reported a failure when processing extension 'cse-master-0'. Error message: \\\"Enable failed: failed to execute command: command terminated with exit status=30\\n[stdout]\\n\\n[stderr]\\nConnection to registry.k8s.io 443 port [tcp/https] succeeded!\\nConnection to gcr.io 443 port [tcp/https] succeeded!\\nConnection to docker.io 443 port [tcp/https] succeeded!\\n\\\".\"\r\n }\r\n ]\r\n }\r\n}"
     }
   ]
 }
 }
 }
File diff suppressed because one or more lines are too long (3 files)
|
@ -4,7 +4,7 @@
|
|||
"properties": {
|
||||
"orchestratorProfile": {
|
||||
"kubernetesConfig": {
|
||||
"kubernetesImageBase": "k8s.gcr.io/",
|
||||
"kubernetesImageBase": "registry.k8s.io/",
|
||||
"useInstanceMetadata": false,
|
||||
"useCloudControllerManager": true,
|
||||
"networkPolicy": "none"
|
||||
|
@@ -101,4 +101,4 @@
     ]
   }
 }
-}
+}
@@ -4,7 +4,7 @@
   "properties": {
     "orchestratorProfile": {
       "kubernetesConfig": {
-        "kubernetesImageBase": "k8s.gcr.io/",
+        "kubernetesImageBase": "registry.k8s.io/",
         "useInstanceMetadata": false,
         "useCloudControllerManager": true,
         "networkPolicy": "none"
|
@ -4,7 +4,7 @@
|
|||
"properties": {
|
||||
"orchestratorProfile": {
|
||||
"kubernetesConfig": {
|
||||
"kubernetesImageBase": "k8s.gcr.io/",
|
||||
"kubernetesImageBase": "registry.k8s.io/",
|
||||
"useInstanceMetadata": false,
|
||||
"networkPolicy": "none"
|
||||
}
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
"properties": {
|
||||
"orchestratorProfile": {
|
||||
"kubernetesConfig": {
|
||||
"kubernetesImageBase": "k8s.gcr.io/",
|
||||
"kubernetesImageBase": "registry.k8s.io/",
|
||||
"clusterSubnet": "10.240.0.0/12",
|
||||
"dnsServiceIP": "10.0.0.10",
|
||||
"serviceCidr": "10.0.0.0/16",
|
||||
|
@@ -46,7 +46,7 @@
       "containers": [
         {
           "name": "cluster-autoscaler",
-          "image": "k8s.gcr.io/cluster-autoscaler:v1.3.7",
+          "image": "registry.k8s.io/cluster-autoscaler:v1.3.7",
           "cpuRequests": "100m",
           "memoryRequests": "300Mi",
           "cpuLimits": "100m",
@@ -107,7 +107,7 @@
       "containers": [
         {
           "name": "metrics-server",
-          "image": "k8s.gcr.io/metrics-server-amd64:v0.2.1"
+          "image": "registry.k8s.io/metrics-server-amd64:v0.2.1"
         }
       ]
     },
@@ -176,7 +176,7 @@
       "containers": [
         {
           "name": "ip-masq-agent",
-          "image": "k8s.gcr.io/ip-masq-agent-amd64:v2.3.0",
+          "image": "registry.k8s.io/ip-masq-agent-amd64:v2.3.0",
           "cpuRequests": "50m",
           "memoryRequests": "50Mi",
           "cpuLimits": "50m",
@@ -215,7 +215,7 @@
         "--network-plugin": "cni",
         "--node-status-update-frequency": "10s",
         "--non-masquerade-cidr": "0.0.0.0/0",
-        "--pod-infra-container-image": "k8s.gcr.io/pause-amd64:3.1",
+        "--pod-infra-container-image": "registry.k8s.io/pause-amd64:3.1",
         "--pod-manifest-path": "/etc/kubernetes/manifests",
         "--pod-max-pids": "-1"
       },
@@ -354,7 +354,7 @@
         "--network-plugin": "cni",
         "--node-status-update-frequency": "10s",
         "--non-masquerade-cidr": "0.0.0.0/0",
-        "--pod-infra-container-image": "k8s.gcr.io/pause-amd64:3.1",
+        "--pod-infra-container-image": "registry.k8s.io/pause-amd64:3.1",
         "--pod-manifest-path": "/etc/kubernetes/manifests",
         "--pod-max-pids": "-1"
       }
@@ -398,7 +398,7 @@
         "--network-plugin": "cni",
         "--node-status-update-frequency": "10s",
         "--non-masquerade-cidr": "0.0.0.0/0",
-        "--pod-infra-container-image": "k8s.gcr.io/pause-amd64:3.1",
+        "--pod-infra-container-image": "registry.k8s.io/pause-amd64:3.1",
         "--pod-manifest-path": "/etc/kubernetes/manifests",
         "--pod-max-pids": "-1"
       }
File diff suppressed because one or more lines are too long (9 files)
@@ -4,7 +4,7 @@ metadata:
   name: nginx
 spec:
   containers:
-  - image: k8s.gcr.io/e2e-test-images/nginx:1.14-1
+  - image: registry.k8s.io/e2e-test-images/nginx:1.14-1
     name: nginx
     volumeMounts:
     - mountPath: /var/run/secrets/tokens