Mirror of https://github.com/Azure/aks-engine.git

docs: refresh documentation store (#3177)

This commit is contained in:
Parent 9b9b279b31
Commit b357dc45fa
@@ -10,7 +10,7 @@ assignees: ''
**Describe the bug**

**Steps To Reproduce**
-<!-- Please include the apimodel used to deploy the cluster if applicable (make sure to redact any secrets) -->
+<!-- Please include the API model used to deploy the cluster if applicable (make sure to redact any secrets) -->

**Expected behavior**
@@ -1,4 +1,4 @@
-<!-- Thank you for helping aks-engine with a pull request!
+<!-- Thank you for helping AKS Engine with a pull request!
Use conventional commit messages, such as
feat: add a knob to the frobnitz
or
@@ -6,7 +6,7 @@ or
And read this for faster PR reviews: https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md#best-practices-for-faster-reviews -->

**Reason for Change**:
-<!-- What does this PR improve or fix in aks-engine? -->
+<!-- What does this PR improve or fix in AKS Engine? -->


**Issue Fixed**:
@@ -4,7 +4,7 @@ Prow is a CI system that offers various features such as rich Github automation,
and running tests in Jenkins or on a Kubernetes cluster. You can read more about
Prow in [upstream docs][0].

-## aks-engine setup
+## AKS Engine setup

Deploy a new Kubernetes cluster.
@@ -1,6 +1,6 @@
# Contributing Guidelines

-The Microsoft aks-engine project accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted.
+The Microsoft AKS Engine project accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted.

Please see also the [AKS Engine Developer Guide](docs/community/developer-guide.md).
@@ -35,11 +35,11 @@ specific upcoming bug or minor release, it would go into `2.2.1` or `2.3.0`.
A milestone (and hence release) is considered done when all outstanding issues/PRs have been closed or moved to another milestone.

## Issues
-Issues are used as the primary method for tracking anything to do with the aks-engine project.
+Issues are used as the primary method for tracking anything to do with the AKS Engine project.

### Issue Lifecycle
The issue lifecycle is mainly driven by the core maintainers, but is good information for those
-contributing to aks-engine. All issue types follow the same general lifecycle. Differences are noted below.
+contributing to AKS Engine. All issue types follow the same general lifecycle. Differences are noted below.
1. Issue creation
2. Triage
   - The maintainer in charge of triaging will apply the proper labels for the issue. This
@@ -52,8 +52,8 @@ type addPoolCmd struct {

const (
    addPoolName             = "addpool"
-   addPoolShortDescription = "Add a node pool to an existing Kubernetes cluster"
-   addPoolLongDescription  = "Add a node pool to an existing Kubernetes cluster by referencing a new agentpoolProfile spec"
+   addPoolShortDescription = "Add a node pool to an existing AKS Engine-created Kubernetes cluster"
+   addPoolLongDescription  = "Add a node pool to an existing AKS Engine-created Kubernetes cluster by referencing a new agentpoolProfile spec"
)

// newAddPoolCmd run a command to add an agent pool to a Kubernetes cluster
@@ -170,7 +170,7 @@ func (dc *deployCmd) mergeAPIModel() error {
            return errors.Wrapf(err, "error merging --set values with the api model: %s", dc.apimodelPath)
        }

-       log.Infoln(fmt.Sprintf("new api model file has been generated during merge: %s", dc.apimodelPath))
+       log.Infoln(fmt.Sprintf("new API model file has been generated during merge: %s", dc.apimodelPath))
    }

    return nil
@@ -132,7 +132,7 @@ func (gc *generateCmd) mergeAPIModel() error {
            return errors.Wrap(err, "error merging --set values with the api model")
        }

-       log.Infoln(fmt.Sprintf("new api model file has been generated during merge: %s", gc.apimodelPath))
+       log.Infoln(fmt.Sprintf("new API model file has been generated during merge: %s", gc.apimodelPath))
    }

    return nil
@@ -32,7 +32,7 @@ import (

const (
    rotateCertsName             = "rotate-certs"
-   rotateCertsShortDescription = "Rotate certificates on an existing Kubernetes cluster"
+   rotateCertsShortDescription = "Rotate certificates on an existing AKS Engine-created Kubernetes cluster"
    rotateCertsLongDescription  = "Rotate CA, etcd, kubelet, kubeconfig and apiserver certificates in a cluster built with AKS Engine. Rotating certificates can break component connectivity and leave the cluster in an unrecoverable state. Before performing any of these instructions on a live cluster, it is preferrable to backup your cluster state and migrate critical workloads to another cluster."
    kubeSystemNamespace         = "kube-system"
)
@@ -59,8 +59,8 @@ type scaleCmd struct {

const (
    scaleName             = "scale"
-   scaleShortDescription = "Scale an existing Kubernetes cluster"
-   scaleLongDescription  = "Scale an existing Kubernetes cluster by specifying increasing or decreasing the node count of an agentpool"
+   scaleShortDescription = "Scale an existing AKS Engine-created Kubernetes cluster"
+   scaleLongDescription  = "Scale an existing AKS Engine-created Kubernetes cluster by specifying increasing or decreasing the number of nodes in a node pool"
    apiModelFilename      = "apimodel.json"
)

@@ -31,8 +31,8 @@ import (

const (
    upgradeName             = "upgrade"
-   upgradeShortDescription = "Upgrade an existing Kubernetes cluster"
-   upgradeLongDescription  = "Upgrade an existing Kubernetes cluster, one minor version at a time"
+   upgradeShortDescription = "Upgrade an existing AKS Engine-created Kubernetes cluster"
+   upgradeLongDescription  = "Upgrade an existing AKS Engine-created Kubernetes cluster, one node at a time"
)

type upgradeCmd struct {
@@ -31,7 +31,7 @@ var (

const (
    versionName             = "version"
-   versionShortDescription = "Print the version of AKS Engine"
+   versionShortDescription = "Print the version of aks-engine"
    versionLongDescription  = versionShortDescription
)

@@ -10,7 +10,7 @@ AKS Engine has a lot of documentation. A high-level overview of how it’s organ

[Topic guides][] discuss key topics and concepts at a fairly high level and provide useful background information and explanation.

-[How-to guides][] are recipes. They guide you through the steps involved in addressing key problems and use-cases. They are more advanced than tutorials and assume some knowledge of how AKS Engine works.
+[How-to guides][] are recipes. They guide you through the steps involved in addressing key problems and use-cases. They are more advanced than tutorials and assume some knowledge of how the `aks-engine` tool works.

[Community guides][] teach you about the AKS Engine community. It includes information on the project's Code of Conduct, the planning process for the AKS Engine project itself, its release cycle, and how you can contribute to the project.
@@ -9,9 +9,10 @@ Here you'll find documentation geared towards learning about the development pro

AKS Engine is a community effort. As it keeps growing, we always need more people to help others. As soon as you learn AKS Engine, you can contribute in many ways:

-- Join the [#aks-engine-users][] Slack channel on <https://kubernetes.slack.com> and answer questions. By explaining AKS Engine to other users, you’re going to learn a lot about the tool yourself.
+- Join the [#aks-engine-users][] and/or [#aks-engine-dev][] public Slack channels on <https://kubernetes.slack.com> and answer questions. By explaining AKS Engine to other users, you’re going to learn a lot about the tool yourself.
- Blog about AKS Engine. We syndicate all the AKS Engine blogs we know about on the [topics page](../topics/README.md); if you’d like to see your blog on that page, you are more than welcome to add it there.
- Contribute to other projects that use AKS Engine, write documentation, or release your own code as an open-source extension. The ecosystem of extensions is a community effort; help us build it!


[#aks-engine-users]: https://kubernetes.slack.com/archives/CU3N85WJK
+[#aks-engine-dev]: https://kubernetes.slack.com/archives/CU1CXUHN0
@@ -38,7 +38,7 @@ Or on Windows (ensure Docker is configured for Linux containers on Windows):
powershell ./makedev.ps1
```

-This make target mounts the `aks-engine` source directory as a volume into the Docker container, which means you can edit your source code in your favorite editor on your machine, while still being able to compile and test inside of the Docker container. This environment mirrors the environment used in the AKS Engine continuous integration (CI) system.
+This make target mounts the AKS Engine source directory as a volume into the Docker container, which means you can edit your source code in your favorite editor on your machine, while still being able to compile and test inside of the Docker container. This environment mirrors the environment used in the AKS Engine continuous integration (CI) system.

When `make dev` completes, you will be left at a command prompt inside a Docker container.
@@ -62,17 +62,17 @@ Usage:
  aks-engine [command]

Available Commands:
-  addpool        Add a node pool to an existing Kubernetes cluster
+  addpool        Add a node pool to an existing AKS Engine-created Kubernetes cluster
  completion     Generates bash completion scripts
  deploy         Deploy an Azure Resource Manager template
  generate       Generate an Azure Resource Manager template
  get-logs       Collect logs and current cluster nodes configuration.
  get-versions   Display info about supported Kubernetes versions
  help           Help about any command
-  rotate-certs   Rotate certificates on an existing Kubernetes cluster
-  scale          Scale an existing Kubernetes cluster
-  upgrade        Upgrade an existing Kubernetes cluster
-  version        Print the version of AKS Engine
+  rotate-certs   Rotate certificates on an existing AKS Engine-created Kubernetes cluster
+  scale          Scale an existing AKS Engine-created Kubernetes cluster
+  upgrade        Upgrade an existing AKS Engine-created Kubernetes cluster
+  version        Print the version of aks-engine

Flags:
      --debug   enable verbose debug logs
@@ -86,9 +86,9 @@ Use "aks-engine [command] --help" for more information about a command.

### Building on Windows, OSX, and Linux

-If the above docker container conveniences don't work for your developer environment, below is per-platform guidance to help you set up your local dev environment manually to build AKS Engine from source.
+If the above docker container conveniences don't work for your developer environment, below is per-platform guidance to help you set up your local dev environment manually to build an `aks-engine` binary from source.

-Building AKS Engine from source has a few requirements for each of the platforms. Download and install the prerequisites for your platform: Windows, Linux, or Mac:
+Building an `aks-engine` binary from source has a few requirements for each of the platforms. Download and install the prerequisites for your platform: Windows, Linux, or Mac:

#### Windows

@@ -138,7 +138,7 @@ Build aks-engine:

### Structure of the Code

-The code for the aks-engine project is organized as follows:
+The code for the AKS Engine project is organized as follows:

- The individual programs are located in `cmd/`. Code inside of `cmd/`
  is not designed for library re-use.
@@ -212,12 +212,12 @@ Thorough guidance around effectively running E2E tests to validate source code c

### Debugging

-To debug `aks-engine` code directly, use the [Go extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.Go)
+To debug AKS Engine code directly, use the [Go extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.Go)
for Visual Studio Code or use [Delve](https://github.com/go-delve/delve) at the command line.

#### Visual Studio Code

-To debug `aks-engine` with [VS Code](https://code.visualstudio.com/), first ensure that you have the
+To debug AKS Engine with [VS Code](https://code.visualstudio.com/), first ensure that you have the
[Go extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.Go) installed. Click
the "Extensions" icon in the Activity Bar (on the far left), search for "go", then install the
official Microsoft extension titled "Rich Go language support for Visual Studio Code."
@@ -228,8 +228,7 @@ Once installed, the Go extension will `go get` several helper applications, incl
debugging support. You can read more about VS Code integration with Delve
[here](https://github.com/Microsoft/vscode-go/wiki/Debugging-Go-code-using-VS-Code).

-Make sure you have the `aks-engine` code checked out to the appropriate location in your `$GOPATH`
-and open that directory in VS Code.
+Open the directory that you checked out the `aks-engine` repo to in VS Code.

##### Debugging Tests

@@ -243,7 +242,7 @@ To the right of "run test" appears a link saying "debug test": click it!

##### Debugging AKS Engine

-To debug `aks-engine` itself, the default Go debugging configuration in `.vscode/launch.json` needs
+To debug changes to AKS Engine source during active development, the default Go debugging configuration in `.vscode/launch.json` needs
to be edited. Open that file (or just click the gear-shaped "Open launch.json" icon if you have the
Debug panel open).
@@ -333,7 +332,7 @@ The following steps constitute the AKS Engine CI pipeline:

## Pull Requests and Generated Code

-To make it easier use AKS Engine as a library and to `go get github.com/Azure/aks-engine`, some
+To make it easier to use AKS Engine source code as a library and to `go get github.com/Azure/aks-engine`, some
generated Go code is committed to the repository. Your pull request may need to regenerate those
files before it will pass the required `make ensure-generated` step.
@@ -1,10 +1,10 @@
# Planning Process

-aks-engine features a lightweight process that emphasizes openness and ensures every community member can see project goals for the future.
+AKS Engine features a lightweight process that emphasizes openness and ensures every community member can see project goals for the future.

## The Role of Maintainers

-[Maintainers][] lead the aks-engine project. Their duties include proposing the Roadmap, reviewing and integrating contributions and maintaining the vision of the project.
+[Maintainers][] lead the AKS Engine project. Their duties include proposing the Roadmap, reviewing and integrating contributions and maintaining the vision of the project.

## Open Roadmap

@@ -1,6 +1,6 @@
# Releases

-aks-engine uses a [continuous delivery][] approach for creating releases. Every merged commit that passes
+AKS Engine uses a [continuous delivery][] approach for creating releases. Every merged commit that passes
testing results in a deliverable that can be given a [semantic version][] tag and shipped.

## Master Is Always Releasable
@@ -8,13 +8,13 @@ testing results in a deliverable that can be given a [semantic version][] tag an
The master `git` branch of a project should always work. Only changes considered ready to be
released publicly are merged.

-aks-engine depends on components that release new versions as often as needed. Fixing
+AKS Engine depends on components that release new versions as often as needed. Fixing
a high priority bug requires the project maintainer to create a new patch release.
Merging a backward-compatible feature implies a minor release.

By releasing often, each release becomes a safe and routine event. This makes it faster
and easier for users to obtain specific fixes. Continuous delivery also reduces the work
-necessary to release a product such as aks-engine, which depends on several external projects.
+necessary to release a product such as AKS Engine, which depends on several external projects.

"Components" applies not just to AKS projects, but also to development and release
tools, to orchestrator versions, to Docker base images, and to other Azure
@@ -35,7 +35,7 @@ See "[Creating a New Release](#creating-a-new-release)" for more detail.

## Semantic Versioning

-aks-engine releases comply with [semantic versioning][semantic version], with the "public API" broadly
+Releases of the `aks-engine` binary comply with [semantic versioning][semantic version], with the "public API" broadly
defined as:

- REST, gRPC, or other API that is network-accessible
@@ -45,12 +45,12 @@ defined as:
- Integration with Azure public APIs such as ARM

In general, changes to anything a user might reasonably link to, customize, or integrate with should
-be backward-compatible, or else require a major release. aks-engine users can be confident that upgrading
+be backward-compatible, or else require a major release. `aks-engine` users can be confident that upgrading
to a patch or to a minor release will not break anything.

## Creating a New Release

-Let's go through the process of creating a new release of [aks-engine][].
+Let's go through the process of creating a new release of the [aks-engine][] binary.

We will use **v0.32.3** as an example herein. You should replace this with the new version you're releasing.

@@ -8,7 +8,7 @@ As mentioned briefly in the [developer guide](developer-guide.md), a `make` targ
$ ORCHESTRATOR_RELEASE=1.18 CLUSTER_DEFINITION=examples/kubernetes.json SUBSCRIPTION_ID=$TEST_AZURE_SUB_ID CLIENT_ID=$TEST_AZURE_SP_ID CLIENT_SECRET=$TEST_AZURE_SP_PW TENANT_ID=$TEST_AZURE_TENANT_ID LOCATION=$AZURE_REGION CLEANUP_ON_EXIT=false make test-kubernetes
```

-The above, simple example describes an E2E test invocation against a base cluster configuration defined by the api model at `examples/kubernetes.json`, overriding any specific Kubernetes version therein to validate the most recent, supported v1.18 release; using Azure service principal authentication defined in the various `$TEST_AZURE_`* environment variables; deployed to the region defined by the environment variable `$AZURE_REGION`; and finally, we tell the E2E test runner not to delete the cluster resources (i.e., the resource group) following the completion of the tests.
+The above, simple example describes an E2E test invocation against a base cluster configuration defined by the API model at `examples/kubernetes.json`, overriding any specific Kubernetes version therein to validate the most recent, supported v1.18 release; using Azure service principal authentication defined in the various `$TEST_AZURE_`* environment variables; deployed to the region defined by the environment variable `$AZURE_REGION`; and finally, we tell the E2E test runner not to delete the cluster resources (i.e., the resource group) following the completion of the tests.

Example output from such an invocation is [here](e2e-output-example.log). If your test run succeeded, you'll see this in your console stdout at the conclusion of the test run:
@@ -30,7 +30,7 @@ The E2E test runner is designed to be flexible across a wide range of cluster co
| `NAME` | no | Allows you to re-run E2E tests on an existing cluster. Assumes the cluster has been created via a prior E2E test run, and that its generated artifacts still exist in the relative `_output/` directory. The value of `NAME` should be equal to the resource group created by the E2E test runner, and that value will also map to a directory under `_output/`. E.g., a value of `kubernetes-westus2-13811` will map to a resource group in the configured subscription, using the configured service principal credentials, and a directory under `_output/kubernetes-westus2-13811/` will exist with all cluster configuration artifacts. |
| `LOCATION` | yes | The Azure region to build your cluster in. E.g., `LOCATION=westus2`. Required if `REGIONS` is empty. Not required if `NAME` is provided, i.e., if you are re-testing an existing E2E-created cluster. |
| `REGIONS` | no | When you want to deploy to a randomly selected region from a known-working set of regions. E.g., `REGIONS=westus2,westeurope,canadacentral`. Required if `LOCATION` is empty; not required if `NAME` is provided. |
-| `CLUSTER_DEFINITION` | yes | The api model to use as cluster configuration input for creating a new cluster. E.g., `CLUSTER_DEFINITION=examples/kubernetes.json`. Not required if `NAME` is provided. |
+| `CLUSTER_DEFINITION` | yes | The API model to use as cluster configuration input for creating a new cluster. E.g., `CLUSTER_DEFINITION=examples/kubernetes.json`. Not required if `NAME` is provided. |
| `CLEANUP_ON_EXIT` | no | Delete cluster after running E2E. E.g., `CLEANUP_ON_EXIT=true`. Default is false. |
| `CLEANUP_IF_FAIL` | no | Delete cluster only if E2E failed. E.g., `CLEANUP_IF_FAIL=false`. Default is false. |
| `STABILITY_ITERATIONS` | no | How many basic functional cluster tests to run in rapid succession as a part of E2E validation. This is useful for simulating continual usage of basic cluster reconciliation functionality (schedule/delete a pod, resolve a DNS lookup, etc). E.g., `STABILITY_ITERATIONS=100`. Default is 3. |
@@ -30,7 +30,7 @@ In the "Proximate Problem Statements" above, we observe that one of the three ex

- quickly testing/validating specific container images across the set of Kubernetes components in a working cluster

-More specifically, the "addons" interface summarized above will allow for the required container image reference configuration across a large set of the Kubernetes components that either aren’t configurable, or which require non-generic, distinct flat properties. That would look like this in the api model:
+More specifically, the "addons" interface summarized above will allow for the required container image reference configuration across a large set of the Kubernetes components that either aren’t configurable, or which require non-generic, distinct flat properties. That would look like this in the API model:

```json
{
@@ -6,13 +6,13 @@ This page provides help with the most common questions about AKS Engine.

Azure Kubernetes Service ([AKS][]) is a Microsoft Azure service that supports fully managed Kubernetes clusters. [AKS Engine][] is an Azure open source project that creates Kubernetes clusters with your custom requirements. AKS uses AKS Engine internally, but they are not the same.

-AKS clusters can be created in the Azure portal or with `az aks create` in the [Azure command-line tool][]. AKS Engine clusters can be created with `aks-engine deploy` in the AKS Engine command-line tool, or by generating the ARM templates with `aks-engine generate` and deploying them as a separate step.
+AKS clusters can be created in the Azure portal or with `az aks create` in the [Azure command-line tool][]. AKS Engine clusters can be created with `aks-engine deploy` (`aks-engine` is the AKS Engine command-line tool), or by generating ARM templates with `aks-engine generate` and deploying them as a separate step using the `az` command-line tool (e.g., `az group deployment create`).

### What's the Difference Between `acs-engine` and `aks-engine`?

AKS Engine is the next version of the ACS-Engine project. AKS Engine supports current and future versions of [Kubernetes][], while ACS-Engine also supported the Docker Swarm and Mesos DC/OS container orchestrators.

-### Can I Scale or Upgrade an `acs-engine` Cluster with `aks-engine`?
+### Can I Scale or Upgrade an `acs-engine`-created Kubernetes Cluster with `aks-engine`?

Yes.

@@ -22,11 +22,11 @@ No further development or releases in ACS-Engine are planned. AKS Engine is a ba

### Can I Build an AKS Cluster with `aks-engine`?

-No, Azure Kubernetes Service itself is the way to create a supported, managed AKS cluster. AKS Engine shares some code with AKS, but does not create managed clusters.
+No, using the Azure Kubernetes Service itself is the way to create a supported, managed AKS cluster. AKS Engine shares some code with AKS, but does not create managed clusters.

### Should I use the latest `aks-engine` release if I was previously using `acs-engine`?

-Yes. `aks-engine` [v0.27.0][] is a continuation of acs-engine [v0.26.2][] with all the Kubernetes fixes and features included in [v0.26.2][] and more.
+Yes. `aks-engine` released [v0.27.0][] as a continuation of the ACS-Engine project ([v0.26.2][] was the final `acs-engine` release) with all the Kubernetes fixes and features included in [v0.26.2][] and more.


[AKS]: https://azure.microsoft.com/en-us/services/kubernetes-service/
@@ -2,7 +2,7 @@

## Background

-Starting from AKS Engine v0.3.0, AKS Engine supports using exponential cloud backoff that is a feature of Kubernetes v1.6.6 and newer. Cloud backoff allows Kubernetes nodes to backoff on HTTP 429 errors that are usually caused by exceeding Azure API limits.
+AKS Engine supports using exponential cloud backoff that is a feature of Kubernetes v1.6.6 and newer. Cloud backoff allows Kubernetes nodes to backoff on HTTP 429 errors that are usually caused by exceeding Azure API limits.

## To Use
@@ -12,7 +12,7 @@ Declare your kubernetes cluster API model config as you normally would, with the

## Backoff configuration options

-The following configuration parameters are available in the `properties.orchestratorProfile.kubernetesConfig` configuration object in the api model specification:
+The following configuration parameters are available in the `properties.orchestratorProfile.kubernetesConfig` configuration object in the API model specification:

```json
"cloudProviderBackoff": {
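The doc's own example above is cut off by the diff context. For orientation, here is a minimal sketch of what opting into backoff might look like in a cluster definition, assuming the flat `kubernetesConfig` field names documented in the cluster definition reference; the values shown are illustrative, not recommended defaults:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "cloudProviderBackoff": true,
        "cloudProviderBackoffRetries": 6,
        "cloudProviderBackoffJitter": 1,
        "cloudProviderBackoffDuration": 5,
        "cloudProviderBackoffExponent": 1.5,
        "cloudProviderRateLimit": true,
        "cloudProviderRateLimitQPS": 3,
        "cloudProviderRateLimitBucket": 10
      }
    }
  }
}
```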
@@ -48,4 +48,4 @@ The following configuration parameters are available in the `properties.orchestr
    "--route-reconciliation-period": "1m" // how often to reconcile cloudprovider-originating node routes
}
```
-The [examples/largeclusters/kubernetes.json](https://github.com/Azure/aks-engine/blob/master/examples/largeclusters/kubernetes.json) api model example suggests how you might opt into these large cluster features following the guidelines above.
+The [examples/largeclusters/kubernetes.json](https://github.com/Azure/aks-engine/blob/master/examples/largeclusters/kubernetes.json) API model example suggests how you might opt into these large cluster features following the guidelines above.
@@ -4,7 +4,7 @@ Common issues or questions that users have run into when using AKS Engine are de

## VMExtensionProvisioningError or VMExtensionProvisioningTimeout

-The two above VMExtensionProvisioning— errors tell us that a VM in the cluster failed installing required application prerequisites after CRP provisioned the VM into the resource group. When aks-engine creates a new Kubernetes cluster, a series of shell scripts runs to install prereqs like docker, etcd, Kubernetes runtime, and various other host OS packages that support the Kubernetes application layer. *Usually* this indicates one of the following:
+The two above VMExtensionProvisioning— errors tell us that a VM in the cluster failed installing required application prerequisites after CRP provisioned the VM into the resource group. When `aks-engine deploy` creates a new Kubernetes cluster, a series of shell scripts runs to install prereqs like docker, etcd, Kubernetes runtime, and various other host OS packages that support the Kubernetes application layer. *Usually* this indicates one of the following:

1. Something about the cluster configuration is pathological. For example, perhaps the cluster config includes a custom version of a particular software dependency that doesn't exist. Or, another example, for a cluster created inside a custom VNET (i.e., a user-provided, pre-existing VNET), perhaps that custom VNET does not have general outbound internet access, and so apt, docker pull, etc is not able to execute successfully.
2. A transient Azure environmental error caused the shell script operation to timeout, or exceed its retry count. For example, the shell script may attempt to download a required package (e.g., etcd), and if the Azure networking environment for the newly provisioned VM is flaky for a period of time, then the shell script may retry several times, but eventually timeout and fail.
@@ -17,7 +17,7 @@ For classification #2 above, the appropriate strategic response is to retry a fe

CSE stands for CustomScriptExtension, and is just a way of expressing: "a script that executes as part of the VM provisioning process, and that must exit 0 (i.e., successfully) in order for that VM provisioning process to succeed". Basically it's another way of expressing the VMExtensionProvisioning— concept above.

-To summarize, the way that aks-engine implements Kubernetes on Azure is a collection of (1) Azure VM configuration + (2) shell script execution. Both are implemented as a single operational unit, and when #2 fails, we consider the entire VM provisioning operation to be a failure; more importantly, if only one VM in the cluster deployment fails, we consider the entire cluster operation to be a failure.
+To summarize, the way that AKS Engine implements Kubernetes on Azure is a collection of (1) Azure VM configuration + (2) shell script execution. Both are implemented as a single operational unit, and when #2 fails, we consider the entire VM provisioning operation to be a failure; more importantly, if only one VM in the cluster deployment fails, we consider the entire cluster operation to be a failure.

### How To Debug CSE errors (Linux)
@@ -52,7 +52,7 @@ Look for the exit code. In the above example, the exit code is `20`. The list of

If after following the above you are still unable to troubleshoot your deployment error, please open a GitHub issue with title "CSE error: exit code <INSERT_YOUR_EXIT_CODE>" and include the following in the description:

-1. The apimodel json used to deploy the cluster (aka your cluster config). **Please make sure you remove all secrets and keys before posting it on GitHub.**
+1. Relevant data from the cluster definition JSON file (API model) used to deploy the cluster. **Please make sure you remove all secrets and keys before posting it on GitHub.**

2. The output of `kubectl get nodes`
@@ -154,8 +154,8 @@ read and **write** permissions to the target Subscription.

`Nov 10 16:35:22 k8s-master-43D6F832-0 docker[3177]: E1110 16:35:22.840688 3201 kubelet_node_status.go:69] Unable to construct api.Node object for kubelet: failed to get external ID from cloud provider: autorest#WithErrorUnlessStatusCode: POST https://login.microsoftonline.com/72f988bf-86f1-41af-91ab-2d7cd011db47/oauth2/token?api-version=1.0 failed with 400 Bad Request: StatusCode=400`

-[This documentation](../topics/service-principals.md) explains how to create/configure a service principal for an AKS Engine Kubernetes cluster.
+[This documentation](../topics/service-principals.md) explains how to create/configure a service principal for an AKS Engine-created Kubernetes cluster.

## Failed upgrade

-Please review the [upgrade documentation](../topics/upgrade.md) for a guide on upgrading `aks-engine` Kubernetes clusters.
+Please review the [upgrade documentation](../topics/upgrade.md) for a guide on upgrading AKS Engine-created Kubernetes clusters.
@@ -1,6 +1,6 @@
# AAD integration Walkthrough

-This walkthrough is to help you get start with Azure Active Directory(AAD) integeration with an AKS Engine Kubernetes cluster.
+This walkthrough is to help you get started with Azure Active Directory (AAD) integration with an AKS Engine-created Kubernetes cluster.

[OpenID Connect](http://openid.net/connect/) is a simple identity layer built on top of the OAuth 2.0 protocol, and it is supported by both AAD and Kubernetes. Here we're going to use OpenID Connect as the communication protocol.
@@ -2,7 +2,7 @@

AKS Engine is a command line tool that generates ARM (Azure Resource Manager) templates to deploy Kubernetes clusters on the Azure platform.

-This design document provides a brief and high-level overview of what aks-engine does internally to achieve deployment of containerized clusters. The scope of this document will be limited to the execution of aks-engine when creating Kubernetes clusters.
+This design document provides a brief and high-level overview of what AKS Engine does internally to achieve deployment of Kubernetes clusters.

## Architecture Diagram
@@ -10,9 +10,9 @@ This design document provides a brief and high-level overview of what aks-engine

## Components

-### Cluster api model
+### Cluster definition, or API model

-AKS Engine accepts JSONs of cluster api models as inputs. These api models allow the user to specify cluster configuration items such as
+AKS Engine accepts a cluster definition JSON file (or API model) as input. An API model allows the user to specify cluster configuration items such as:

- Master and worker nodes configuration
- Kubernetes version
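To make the expected input concrete, here is a minimal sketch of a cluster definition in the style of `examples/kubernetes.json`; the DNS prefix, VM sizes, and key data are placeholders, not recommendations:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "my-cluster",
      "vmSize": "Standard_D2_v3"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v3"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "ssh-rsa AAAA... (placeholder public key)"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "",
      "secret": ""
    }
  }
}
```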
@@ -24,7 +24,7 @@ The input validator checks for bad/missing input in the user-provided api models

### Template Generator

-Once the input is validated, the template generator is invoked which will convert the apimodel JSON into another JSON which has a format that is well-understood by ARM (Azure Resource Manager). The template generator achieves this through templating where existing skeleton json files are converted into the actual ARM JSONs using the values present in the input api model. These skeleton templates are written in the schema recognized by ARM and they contain placeholders which can be substituted with the values provided in the apimodel JSONs. These templates also nest other template files inside of it. Given below is an example of a template file with placeholders.
+Once the input is validated, the template generator is invoked, which converts the API model JSON into another JSON document in a format that is well understood by ARM (Azure Resource Manager). The template generator achieves this through templating, where existing skeleton JSON files are converted into the actual ARM JSON files using the values present in the input API model. These skeleton templates are written in the schema recognized by ARM, and they contain placeholders which can be substituted with the values provided in the API model JSON file. These templates also nest other template files inside themselves. Given below is an example of a template file with placeholders.

```js
{
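For readers unfamiliar with ARM templates, a generic, hedged illustration of the placeholder substitution described above; this is an ARM-style fragment written for this document, not an actual aks-engine skeleton file:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "masterVMSize": {
      "type": "string"
    }
  },
  "variables": {
    "masterVMNamePrefix": "k8s-master-"
  },
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2019-07-01",
      "name": "[concat(variables('masterVMNamePrefix'), '0')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "hardwareProfile": {
          "vmSize": "[parameters('masterVMSize')]"
        }
      }
    }
  ]
}
```

Here `[parameters('masterVMSize')]` is the kind of placeholder that the template generator fills from values in the input API model (e.g., the master profile's VM size).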
@@ -219,7 +219,7 @@ Once the input is validated, the template generator is invoked which will conver
```
The template generator then creates the following artifacts:

-- ARM Templates (Deploy and Paramater JSONs). These artifacts are used by ARM to effect the actual deployment of the kubernetes clusters.
+- ARM Templates (Deploy and Parameter JSON files). These artifacts are used by ARM to effect the actual deployment of the Kubernetes clusters.

- KubeConfigs. These are Kubernetes config files which can be used by the user or the Kubernetes API clients to perform kubectl operations against the deployed Kubernetes cluster directly.
@@ -231,7 +231,7 @@ AKS Engine interfaces with Azure Resource Manager (ARM) through the Azure Go SDK

### Kubernetes Client API

-AKS Engine also performs kubernetes cluster management operations (kubectl) through the imported Kubernetes API libraries. The Client API calls are made during the scale and upgrade commands of aks-engine.
+AKS Engine also performs Kubernetes cluster management operations through the imported Kubernetes API libraries. The Client API calls are made during `aks-engine scale` and `aks-engine upgrade` command operations.


Design challenges and proposals
@@ -242,9 +242,9 @@ Design challenges and proposals
We find that the current implementation of templating leads to challenges in terms of code readability and maintainability.


-- There is no direct and intuitive mapping between the input apimodels and the ARM templates. The placeholder substitutions are performed at very specific areas in the template skeletons. It's hard to draw any generality from it and this makes it difficult to create the template JSONs purely through code as opposed to performing the placeholder substitutions.
+- There is no direct and intuitive mapping between the input apimodels and the ARM templates. The placeholder substitutions are performed at very specific areas in the template skeletons. It's hard to draw any generality from it and this makes it difficult to create the template JSON files purely through code as opposed to performing the placeholder substitutions.

-- This also limits the capabilities of aks-engine as far as extensibility is concerned. If we were to introduce more changes and customizations, it would potentially entail modifying the template skeleton layouts. This would just add more complexity.
+- This also limits the capabilities of AKS Engine as far as extensibility is concerned. If we were to introduce more changes and customizations, it would potentially entail modifying the template skeleton layouts. This would just add more complexity.

#### Possible Solutions
@@ -254,11 +254,11 @@ As of now, we have no standard/formal representation of the ARM templates. They

_**Pros**_

-- A formal representation would help us create a more direct mapping between the api model inputs and their corresponding ARM template files.
+- A formal representation would help us create a more direct mapping between the API model inputs and their corresponding ARM template files.

- This will allow us to accommodate future ARM template customization more effectively, because we can express and maintain the variety of inter-dependent outputs natively, as first class data representations.

-- Template validation can be done within the aks-engine layer itself. Currently, template validation can only be performed via the Azure GO SDK and this entails a network call.
+- Template validation can be done within the AKS Engine layer itself. Currently, template validation can only be performed via the Azure Go SDK and this entails a network call.

_**Cons/Challenges**_
@@ -5,7 +5,7 @@ Instructions on rotating TLS CA and certificates for an AKS Engine cluster.
## Prerequisites

- The etcd members MUST be in a healthy state before rotating the CA and certs (i.e., `etcdctl cluster-health` shows all peers are healthy and cluster is healthy).
-- The apimodel file reflecting the current cluster configuration and a working ssh private key that has root access to all nodes. The apimodel file is persisted at AKS Engine template generation time, by default to the _output/ child directory from the working parent directory at the time of the aks-engine invocation.
+- The API model file reflecting the current cluster configuration and a working ssh private key that has root access to all nodes. The API model file is persisted at AKS Engine template generation time, by default to the _output/ child directory from the working parent directory at the time of the `aks-engine` invocation.

<a name="preparation"></a>
@@ -46,7 +46,7 @@ $ aks-engine get-versions
| containerRuntime | no | The container runtime to use as a backend. The default is `docker`. Also supported is `containerd`. Windows support for `containerd` is **Experimental** - see [Windows ContainerD](features.md#windows-containerd) |
| containerRuntimeConfig | no | A map of key-value pairs to drive configuration of the container runtime. Currently accepts a single key, "dataDir", which configures the root data directory for the container runtime. dataDir must be an absolute path. This is only implemented on Linux. See an [example](../../examples/kubernetes-config/kubernetes-docker-tmpdir.json) which places docker on the tmp disk of a Linux VM. |
| controllerManagerConfig | no | Configure various runtime configuration for controller-manager. See `controllerManagerConfig` [below](#feat-controller-manager-config) |
-| customWindowsPackageURL | no | Configure custom windows Kubernetes release package URL for deployment on Windows. The format of this file is a zip file with multiple items (binaries, cni, infra container) in it. This setting will be deprecated in a future release of aks-engine where the binaries will be pulled in the format of Kubernetes releases that only contain the kubernetes binaries. |
+| customWindowsPackageURL | no | Configure custom windows Kubernetes release package URL for deployment on Windows. The format of this file is a zip file with multiple items (binaries, cni, infra container) in it. This setting will be deprecated in a future release of `aks-engine` where the binaries will be pulled in the format of Kubernetes releases that only contain the kubernetes binaries. |
| WindowsNodeBinariesURL | no | Windows Kubernetes Node binaries can be provided in the format of Kubernetes release (example: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#node-binaries-1). This setting allows overriding the binaries for custom builds. |
| WindowsContainerdURL | no (for development only) | **Experimental** - see [Windows ContainerD](features.md#windows-containerd) |
| WindowsSdnPluginURL | no (for development only) | **Experimental** - see [Windows ContainerD](features.md#windows-containerd) |
@@ -57,11 +57,11 @@ $ aks-engine get-versions
| enableAggregatedAPIs | no | Enable [Kubernetes Aggregated APIs](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/). enableRbac must be set to true to use aggregated APIs. Aggregated API functionality is required by [Service Catalog](https://github.com/kubernetes-incubator/service-catalog/blob/master/README.md). (boolean - default is true) |
| enableDataEncryptionAtRest | no | Enable [kubernetes data encryption at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/). This is currently an alpha feature. (boolean - default == false) |
| enableEncryptionWithExternalKms | no | Enable [kubernetes data encryption at rest with external KMS](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/). This is currently an alpha feature. (boolean - default == false) |
-| enablePodSecurityPolicy | no | Deprecated, see the pod-security-policy addon for a description of the aks-engine-configured PodSecurityPolicy spec that is bootstrapped as a Kubernetes addon |
+| enablePodSecurityPolicy | no | Deprecated, see the pod-security-policy addon for a description of the AKS Engine-configured PodSecurityPolicy spec that is bootstrapped as a Kubernetes addon |
| enableRbac | no | Enable [Kubernetes RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) (boolean - default == true) RBAC support is required for Kubernetes 1.15.0 and greater, so enableRbac=false is not an allowed configuration for clusters >= 1.15.0. If you upgrade a cluster to 1.15.0 or greater from a version less than 1.15, and RBAC is disabled, the cluster configuration will be statically modified to enable RBAC as a result of running `aks-engine upgrade`. |
| etcdDiskSizeGB | no | Size in GB to assign to etcd data volume. Defaults (if no user value provided) are: 256 GB for clusters up to 3 nodes; 512 GB for clusters with between 4 and 10 nodes; 1024 GB for clusters with between 11 and 20 nodes; and 2048 GB for clusters with more than 20 nodes |
| etcdEncryptionKey | no | Encryption key to be used if enableDataEncryptionAtRest is enabled. Defaults to a random, generated, key |
-| etcdVersion | no (for development only) | Enables an explicit etcd version, e.g. `3.2.23`. Default is `3.3.19`. This `kubernetesConfig` property is for development only, and recommended only for ephemeral clusters. However, you may use `aks-engine upgrade` on a cluster with an api model that includes a user-modified `etcdVersion` value. If `aks-engine upgrade` determines that the user-modified version is greater than the current AKS Engine default, `aks-engine upgrade` will _not_ replace the newer version with an older version. However, if `aks-engine upgrade` determines that the user-modified version is older than the current AKS Engine default, it will build the newly upgraded master node VMs with the newer, AKS Engine default version of etcd. |
+| etcdVersion | no (for development only) | Enables an explicit etcd version, e.g. `3.2.23`. Default is `3.3.19`. This `kubernetesConfig` property is for development only, and recommended only for ephemeral clusters. However, you may use `aks-engine upgrade` on a cluster with an API model that includes a user-modified `etcdVersion` value. If `aks-engine upgrade` determines that the user-modified version is greater than the current AKS Engine default, `aks-engine upgrade` will _not_ replace the newer version with an older version. However, if `aks-engine upgrade` determines that the user-modified version is older than the current AKS Engine default, it will build the newly upgraded master node VMs with the newer, AKS Engine default version of etcd. |
| gcHighThreshold | no | Sets the --image-gc-high-threshold value on the kubelet configuration. Default is 85. [See kubelet Garbage Collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/) |
| gcLowThreshold | no | Sets the --image-gc-low-threshold value on the kubelet configuration. Default is 80. [See kubelet Garbage Collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/) |
| kubeletConfig | no | Configure various runtime configuration for kubelet. See `kubeletConfig` [below](#feat-kubelet-config) |
@@ -228,7 +228,7 @@ Above you see custom configuration for both tiller and kubernetes-dashboard. Bot

See https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ for more on Kubernetes resource limits.

-Additionally above, we specified a custom docker image for tiller, let's say we want to build a cluster and test an alpha version of tiller in it. **Important note!** customizing the image is not sticky across upgrade/scale, to ensure that aks-engine always delivers a version-curated, known-working addon when moving a cluster to a new version. Considering all that, providing a custom image reference for an addon configuration should be considered for testing/development, but not for a production cluster. If you'd like to entirely customize one of the addons available, including across scale/upgrade operations, you may include in an addon's spec a base64-encoded string of a Kubernetes yaml manifest. E.g.,
+Additionally above, we specified a custom docker image for tiller; let's say we want to build a cluster and test an alpha version of tiller in it. **Important note!** Customizing the image is not sticky across upgrade/scale, to ensure that AKS Engine always delivers a version-curated, known-working addon when moving a cluster to a new version. Considering all that, providing a custom image reference for an addon configuration should be considered for testing/development, but not for a production cluster. If you'd like to entirely customize one of the addons available, including across scale/upgrade operations, you may include in an addon's spec a base64-encoded string of a Kubernetes yaml manifest. E.g.,

```json
"kubernetesConfig": {
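The doc's own example is truncated by the diff context. For orientation, the image-override pattern being discussed looks roughly like the following sketch; the addon and container names follow the tiller example above, while the image reference is a placeholder rather than a published tag:

```json
"kubernetesConfig": {
  "addons": [
    {
      "name": "tiller",
      "enabled": true,
      "containers": [
        {
          "name": "tiller",
          "image": "myregistry.example.com/tiller:alpha"
        }
      ]
    }
  ]
}
```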
@@ -399,7 +399,7 @@ The above is the pattern we use to pass in a `cluster-init` spec for loading at

See [here](https://kubernetes.io/docs/reference/generated/kubelet/) for a reference of supported kubelet options.

-Below is a list of kubelet options that aks-engine will configure by default:
+Below is a list of kubelet options that AKS Engine will configure by default:

| kubelet option | default value |
| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
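To connect these defaults to the configuration surface discussed earlier, a minimal sketch of overriding a single kubelet flag via `kubernetesConfig.kubeletConfig`; the value shown is illustrative, and the same key/value map pattern applies to the `controllerManagerConfig`, `apiServerConfig`, and `schedulerConfig` sections below:

```json
"kubernetesConfig": {
  "kubeletConfig": {
    "--node-status-update-frequency": "10s"
  }
}
```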
@@ -470,7 +470,7 @@ Below is a list of kubelet options that are _not_ currently user-configurable, e

See [here](https://kubernetes.io/docs/reference/generated/kube-controller-manager/) for a reference of supported controller-manager options.

-Below is a list of controller-manager options that aks-engine will configure by default:
+Below is a list of controller-manager options that AKS Engine will configure by default:

| controller-manager option | default value |
| ------------------------------- | ------------------------------------------ |
@@ -487,7 +487,7 @@ Below is a list of controller-manager options that are _not_ currently user-conf
| "--kubeconfig" | "/var/lib/kubelet/kubeconfig" |
| "--allocate-node-cidrs" | "false" |
| "--cluster-cidr" | _uses clusterSubnet value_ |
-| "--cluster-name" | _auto-generated using api model properties_ |
+| "--cluster-name" | _auto-generated using API model properties_ |
| "--root-ca-file" | "/etc/kubernetes/certs/ca.crt" |
| "--cluster-signing-cert-file" | "/etc/kubernetes/certs/ca.crt" |
| "--cluster-signing-key-file" | "/etc/kubernetes/certs/ca.key" |
@@ -513,7 +513,7 @@ Below is a list of controller-manager options that are _not_ currently user-conf

See [here](https://kubernetes.io/docs/reference/generated/cloud-controller-manager/) for a reference of supported cloud-controller-manager options.

-Below is a list of cloud-controller-manager options that aks-engine will configure by default:
+Below is a list of cloud-controller-manager options that AKS Engine will configure by default:

| controller-manager option | default value |
| ------------------------------- | ------------- |
@@ -526,7 +526,7 @@ Below is a list of cloud-controller-manager options that are _not_ currently use
| "--kubeconfig" | "/var/lib/kubelet/kubeconfig" |
| "--allocate-node-cidrs" | "false" |
| "--cluster-cidr" | _uses clusterSubnet value_ |
-| "--cluster-name" | _auto-generated using api model properties_ |
+| "--cluster-name" | _auto-generated using API model properties_ |
| "--cloud-provider" | "azure" |
| "--cloud-config" | "/etc/kubernetes/azure.json" |
| "--leader-elect" | "true" |
@@ -562,7 +562,7 @@ Or perhaps you want to customize/override the set of admission-control flags pas

See [here](https://kubernetes.io/docs/reference/generated/kube-apiserver/) for a reference of supported apiserver options.

-Below is a list of apiserver options that aks-engine will configure by default:
+Below is a list of apiserver options that AKS Engine will configure by default:

| apiserver option | default value |
| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -635,7 +635,7 @@ Below is a list of apiserver options that are _not_ currently user-configurable,

See [here](https://kubernetes.io/docs/reference/generated/kube-scheduler/) for a reference of supported kube-scheduler options.

-Below is a list of scheduler options that aks-engine will configure by default:
+Below is a list of scheduler options that AKS Engine will configure by default:

| kube-scheduler option | default value |
| --------------------- | ------------------------------------------ |
@@ -684,7 +684,7 @@ The `sysctldConfig` configuration interface allows generic Linux kernel runtime

Kubernetes kernel configuration varies by distro, so please validate that the kernel parameter and value works for the Linux flavor you are using in your cluster.

-Below is a list of sysctl configuration that aks-engine will configure by default for both Ubuntu 16.04-LTS and 18.04-LTS, for both master and node pool VMs:
+Below is a list of sysctl configuration that AKS Engine will configure by default for both Ubuntu 16.04-LTS and 18.04-LTS, for both master and node pool VMs:

| kernel parameter | default value |
| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
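As a hedged sketch of the `sysctldConfig` interface named above, assuming it is expressed as a simple key/value map of kernel parameters on a master or node pool profile (the parameters and values here are illustrative only):

```json
"masterProfile": {
  "sysctldConfig": {
    "net.core.somaxconn": "16384",
    "net.ipv4.tcp_retries2": "8"
  }
}
```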
@@ -710,7 +710,7 @@ Below is a list of sysctl configuration that aks-engine will configure by defaul

#### jumpboxProfile

-`jumpboxProfile` describes the settings for a jumpbox deployed via aks-engine to access a private cluster. It is a child property of `privateCluster`.
+`jumpboxProfile` describes the settings for a jumpbox deployed via `aks-engine` to access a private cluster. It is a child property of `privateCluster`.

| Name | Required | Description |
| -------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@ -744,7 +744,7 @@ Below is a list of sysctl configuration that aks-engine will configure by defaul
|
|||
| imageReference.gallery | no | Name of Shared Image Gallery containing the Linux OS image. Applies only to Shared Image Galleries. All of name, resourceGroup, subscription, gallery, image name, and version must be specified for this scenario. |
|
||||
| imageReference.version | no | Version containing the Linux OS image. Applies only to Shared Image Galleries. All of name, resourceGroup, subscription, gallery, image name, and version must be specified for this scenario. |
|
||||
| distro | no | Specifies the masters' Linux distribution. Currently supported values are: `ubuntu`, `ubuntu-18.04`, `ubuntu-18.04-gen2` (Ubuntu 18.04-LTS running on a [Generation 2 VM](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/generation-2)), `aks-ubuntu-16.04` (previously `aks`), and `aks-ubuntu-18.04`. For Azure Public Cloud, Azure US Government Cloud and Azure China Cloud, defaults to `aks-ubuntu-16.04`. For other Sovereign Clouds, the default is `ubuntu-16.04` (There is a [known issue](https://github.com/Azure/aks-engine/issues/761) with `ubuntu-18.04` + Azure CNI). `aks-ubuntu-16.04` is a custom image based on `ubuntu-16.04` that comes with pre-installed software necessary for Kubernetes deployments. |
|
||||
| customFiles | no | The custom files to be provisioned to the master nodes. Defined as an array of json objects with each defined as `"source":"absolute-local-path", "dest":"absolute-path-on-masternodes"`.[See examples](../../examples/customfiles) |
|
||||
| customFiles | no | The custom files to be provisioned to the master nodes. Defined as an array of JSON objects with each defined as `"source":"absolute-local-path", "dest":"absolute-path-on-masternodes"`. [See examples](../../examples/customfiles) |
|
||||
| availabilityProfile | no | Supported values are `AvailabilitySet` (default) and `VirtualMachineScaleSets` (still under development: upgrade not supported; requires Kubernetes version 1.10+, and the agent pool availabilityProfile must also be `VirtualMachineScaleSets`). When MasterProfile is using `VirtualMachineScaleSets`, to SSH into a master node, you need to use `ssh -p 50001` instead of port 22. |
|
||||
| agentVnetSubnetId | only required when using custom VNET and when MasterProfile is using `VirtualMachineScaleSets` | Specifies the ID of an alternate VNET subnet for all the agent pool nodes. The subnet ID must reference a valid subnet within a VNET owned by the same subscription ([bring your own VNET examples](../../examples/vnet)). When MasterProfile is using `VirtualMachineScaleSets`, this value should be the subnet ID of the subnet for all agent pool nodes. |
|
||||
| [availabilityZones](../../examples/kubernetes-zones/README.md) | no | To protect your cluster from datacenter-level failures, you can enable the Availability Zones feature for your master VMs. Check out [Availability Zones README](../../examples/kubernetes-zones/README.md) for more details. |
|
||||
|
@ -787,7 +787,7 @@ A cluster can have 0 to 12 agent pool profiles. Agent Pool Profiles are used for
|
|||
| acceleratedNetworkingEnabledWindows | no | Currently unstable, and disabled for new clusters! |
|
||||
| vmssOverProvisioningEnabled | no | Use [Overprovisioning](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-design-overview#overprovisioning) with VMSS. This configuration is only valid on an agent pool with an `"availabilityProfile"` value of `"VirtualMachineScaleSets"`. Defaults to `false` |
|
||||
| enableVMSSNodePublicIP | no | Enable creation of public IP on VMSS nodes. This configuration is only valid on an agent pool with an `"availabilityProfile"` value of `"VirtualMachineScaleSets"`. Defaults to `false` |
|
||||
| LoadBalancerBackendAddressPoolIDs | no | Enables automatic placement of the agent pool nodes into existing load balancer's backend address pools. Each element value of this string array is the corresponding load balancer backend address pool's Azure Resource Manager(ARM) resource ID. By default this property is not included in the api model, which is equivalent to an empty string array. |
|
||||
| LoadBalancerBackendAddressPoolIDs | no | Enables automatic placement of the agent pool nodes into existing load balancer's backend address pools. Each element value of this string array is the corresponding load balancer backend address pool's Azure Resource Manager(ARM) resource ID. By default this property is not included in the API model, which is equivalent to an empty string array. |
|
||||
| auditDEnabled | no | Enable auditd enforcement at the OS layer for each node VM. This configuration is only valid on an agent pool with an Ubuntu-backed distro, i.e., the default "aks-ubuntu-16.04" distro, or the "aks-ubuntu-18.04", "ubuntu", "ubuntu-18.04", or "acc-16.04" distro values. Defaults to `false` |
|
||||
| customVMTags | no | Specifies a list of custom tags to be added to the agent VMs or Scale Sets. Each tag is a key/value pair (e.g., `"myTagKey": "myTagValue"`). |
|
||||
| diskEncryptionSetID | no | Specifies the ResourceId of the disk encryption set to use for enabling encryption at rest (e.g., `"/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}"`). More details about [Server side encryption of Azure managed disks](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disk-encryption). |
|
||||
|
@ -901,8 +901,7 @@ You can configure the image used for all Windows nodes one of the following ways
|
|||
|
||||
##### Defaults
|
||||
|
||||
The AKS Engine team produces images that are optimized for and validated with aks-engine.
|
||||
The latest version of these images are used as the default images for Windows nodes.
|
||||
The AKS Engine team produces images that are optimized for and validated against `aks-engine`-created Kubernetes clusters during the regular development and release process. The latest version of these images at the time of a new release of the `aks-engine` binary is used as the default image for Windows nodes.
|
||||
|
||||
These images are published to the Azure Marketplace under the `microsoft-aks` publisher and `aks-windows` offer.
|
||||
Release notes for these images can be found under [releases/vhd-notes/aks-windows](../../releases/vhd-notes/aks-windows).
|
||||
|
@ -931,7 +930,7 @@ If you want to use a specific image then `windowsPublisher`, `windowsOffer`, `wi
|
|||
|
||||
##### Custom Images
|
||||
|
||||
Listed in order of precedence based on what is specified in the api model:
|
||||
Listed in order of precedence based on what is specified in the API model:
|
||||
|
||||
###### VHD
|
||||
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
# CoreDNS customization
|
||||
|
||||
The configuration provided by aks-engine handles most setups by forwarding to
|
||||
The configuration provided by AKS Engine handles most setups by forwarding to
|
||||
the DNS server configured on the nodes.
|
||||
|
||||
To customize CoreDNS ([kubernetes docs][Customizing DNS Service]) you can create
|
||||
|
|
|
@ -13,7 +13,7 @@ More info can be found in the following places:
|
|||
|
||||
## Usage
|
||||
|
||||
### Enable in aks-engine
|
||||
### Enable in the cluster definition JSON (or API model)
|
||||
|
||||
Add the following fields to `windowsProfile`:
|
||||
|
||||
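Assuming the csi-proxy knobs are the `enableCSIProxy` and `csiProxyURL` fields (check the surrounding docs for the exact names), the addition would look roughly like:

```json
"windowsProfile": {
  "enableCSIProxy": true,
  "csiProxyURL": "https://k8scsi.blob.core.windows.net/csi-proxy/master/binaries/csi-proxy.tar.gz"
}
```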
|
@ -30,6 +30,6 @@ For testing purposes the following csi-proxy binary may be used:
|
|||
|
||||
- https://k8scsi.blob.core.windows.net/csi-proxy/master/binaries/csi-proxy.tar.gz
|
||||
|
||||
If you want to use another version, replace `master` field to the concrete version number.
|
||||
If you want to use another version, replace the `master` field with the concrete version number.
|
||||
|
||||
For example, https://k8scsi.blob.core.windows.net/csi-proxy/v0.1.0/binaries/csi-proxy.tar.gz
|
||||
|
|
|
@ -33,7 +33,7 @@ Enable Managed Identity by adding `useManagedIdentity` in `kubernetesConfig`.
|
|||
|
||||
## Optional: Disable Kubernetes Role-Based Access Control (RBAC) (for clusters running Kubernetes versions before 1.15.0)
|
||||
|
||||
By default, the cluster will be provisioned with [Role-Based Access Control](https://kubernetes.io/docs/admin/authorization/rbac/) enabled. Disable RBAC by adding `enableRbac` in `kubernetesConfig` in the api model:
|
||||
By default, the cluster will be provisioned with [Role-Based Access Control](https://kubernetes.io/docs/admin/authorization/rbac/) enabled. Disable RBAC by adding `enableRbac` in `kubernetesConfig` in the API model:
|
||||
|
||||
```json
|
||||
"kubernetesConfig": {
|
||||
|
@ -78,11 +78,6 @@ storagetier=<Standard_LRS|Premium_LRS>
|
|||
|
||||
They are `managed-premium` and `managed-standard`, and they map to the `Premium_LRS` and `Standard_LRS` managed disk types, respectively.
|
||||
|
||||
In order to use these storage classes the following conditions must be met.
|
||||
|
||||
- The cluster must be running Kubernetes release 1.7 or greater. Refer to this [example](../../examples/kubernetes-releases/kubernetes1.7.json) for how to provision a Kubernetes cluster of a specific version.
|
||||
- The node must support managed disks. See this [example](../../examples/disks-managed/kubernetes-vmas.json) to provision nodes with managed disks. You can also confirm if a node has managed disks using kubectl.
|
||||
|
||||
```console
|
||||
kubectl get nodes -l storageprofile=managed
|
||||
NAME STATUS AGE VERSION
|
||||
|
@ -110,7 +105,7 @@ spec:
|
|||
|
||||
## Using Azure integrated networking (CNI)
|
||||
|
||||
Kubernetes clusters are configured by default to use the [Azure CNI plugin](https://github.com/Azure/azure-container-networking) which provides an Azure native networking experience. Pods will receive IP addresses directly from the vnet subnet on which they're hosted. If the api model doesn't specify explicitly, aks-engine will automatically provide the following `networkPlugin` configuration in `kubernetesConfig`:
|
||||
Kubernetes clusters are configured by default to use the [Azure CNI plugin](https://github.com/Azure/azure-container-networking) which provides an Azure native networking experience. Pods will receive IP addresses directly from the vnet subnet on which they're hosted. If the API model doesn't specify explicitly, aks-engine will automatically provide the following `networkPlugin` configuration in `kubernetesConfig`:
|
||||
|
||||
```json
|
||||
"kubernetesConfig": {
|
||||
|
@ -270,7 +265,7 @@ Depending upon the size of the VNET address space, during deployment, it is poss
|
|||
First, the detail:
|
||||
|
||||
- Azure CNI assigns dynamic IP addresses from the "beginning" of the subnet IP address space (specifically, it looks for available addresses starting at ".4" ["10.0.0.4" in a "10.0.0.0/24" network])
|
||||
- aks-engine will require a range of up to 16 unused IP addresses in multi-master scenarios (1 per master for up to 5 masters, and then the next 10 IP addresses immediately following the "last" master for headroom reservation, and finally 1 more for the load balancer immediately adjacent to the afore-described _n_ masters+10 sequence) to successfully scaffold the network stack for your cluster
|
||||
- AKS Engine will require a range of up to 16 unused IP addresses in multi-master scenarios (1 per master for up to 5 masters, and then the next 10 IP addresses immediately following the "last" master for headroom reservation, and finally 1 more for the load balancer immediately adjacent to the afore-described _n_ masters+10 sequence) to successfully scaffold the network stack for your cluster
|
||||
|
||||
A guideline that will remove the danger of IP address allocation collision during deployment:
|
||||
|
||||
|
@ -302,7 +297,7 @@ Before provisioning, modify the `masterProfile` and `agentPoolProfiles` to match
|
|||
### VirtualMachineScaleSets Masters Custom VNET
|
||||
|
||||
When using custom VNET with `VirtualMachineScaleSets` MasterProfile, make sure to create two subnets within the vnet: `master` and `agent`.
|
||||
Modify `masterProfile` in the api model, `vnetSubnetId`, `agentVnetSubnetId` should be set to the values of the `master` subnet and the `agent` subnet in the existing vnet respectively.
|
||||
Modify `masterProfile` in the API model: `vnetSubnetId` and `agentVnetSubnetId` should be set to the values of the `master` subnet and the `agent` subnet in the existing vnet, respectively.
|
||||
Modify `agentPoolProfiles`: `vnetSubnetId` should be set to the value of the `agent` subnet in the existing vnet.
|
||||
|
||||
*NOTE: The `firstConsecutiveStaticIP` configuration should be empty and will be derived from an offset and the first IP in the vnetCidr.*
|
||||
|
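A sketch of the relevant `masterProfile` fields (subscription, resource group, and VNET names are placeholders, and `firstConsecutiveStaticIP` is omitted per the note above):

```json
"masterProfile": {
  "count": 3,
  "dnsPrefix": "mycluster",
  "vmSize": "Standard_D2_v3",
  "availabilityProfile": "VirtualMachineScaleSets",
  "vnetSubnetId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME_VNET/providers/Microsoft.Network/virtualNetworks/KUBERNETES_CUSTOM_VNET/subnets/master",
  "agentVnetSubnetId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME_VNET/providers/Microsoft.Network/virtualNetworks/KUBERNETES_CUSTOM_VNET/subnets/agent"
}
```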
@ -328,7 +323,7 @@ For example, if `vnetCidr` is `10.239.0.0/16`, `master` subnet is `10.239.0.0/17
|
|||
|
||||
### Kubenet Networking Custom VNET
|
||||
|
||||
If you're *not- using Azure CNI (e.g., `"networkPlugin": "kubenet"` in the `kubernetesConfig` api model configuration object): After a custom VNET-configured cluster finishes provisioning, fetch the id of the Route Table resource from `Microsoft.Network` provider in your new cluster's Resource Group.
|
||||
If you're *not* using Azure CNI (e.g., `"networkPlugin": "kubenet"` in the `kubernetesConfig` API model configuration object): After a custom VNET-configured cluster finishes provisioning, fetch the id of the Route Table resource from the `Microsoft.Network` provider in your new cluster's Resource Group.
|
||||
|
||||
The route table resource id is of the format: `/subscriptions/SUBSCRIPTIONID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.Network/routeTables/ROUTETABLENAME`
|
||||
|
||||
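With that resource id in hand, the subnet can be associated with the route table using the Azure CLI; a sketch (the subnet, VNET, and resource group names are placeholders):

```console
$ az network vnet subnet update \
    --name KUBERNETES_SUBNET \
    --resource-group RESOURCE_GROUP_NAME_VNET \
    --vnet-name KUBERNETES_CUSTOM_VNET \
    --route-table "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCEGROUPNAME/providers/Microsoft.Network/routeTables/ROUTETABLENAME"
```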
|
@ -541,4 +536,4 @@ These parameters are all required.
|
|||
|
||||
As of March 3, 2020, the ContainerD and network plugin repos don't have public builds available. This repo has a script that will build them from source and create two ZIP files: [build-windows-containerd.sh](../../scripts/build-windows-containerd.sh)
|
||||
|
||||
Upload these ZIP files to a location that your cluster will be able to reach, then put those URLs in `windowsContainerdURL` and `windowsSdnPluginURL` in the AKS-Engine apimodel shown above.
|
||||
Upload these ZIP files to a location that your cluster will be able to reach, then put those URLs in `windowsContainerdURL` and `windowsSdnPluginURL` in the AKS-Engine API model shown above.
|
||||
|
|
|
@ -14,7 +14,7 @@ At a high level, it works by establishing a SSH session into each node, executin
|
|||
|
||||
### SSH Authentication
|
||||
|
||||
A valid SSH private key is always required to stablish a SSH session to the cluster Linux nodes. Windows credentials are stored in the apimodel and will be loaded from there. Make sure `windowsprofile.sshEnabled` is set to `true` to enable SSH in your Windows nodes.
|
||||
A valid SSH private key is always required to establish an SSH session to the cluster Linux nodes. Windows credentials are stored in the API model and will be loaded from there. Make sure `windowsprofile.sshEnabled` is set to `true` to enable SSH in your Windows nodes.
|
||||
|
||||
### Log Collection Scripts
|
||||
|
||||
|
@ -26,7 +26,7 @@ The default OS distro for Windows node pools already includes a [log collection
|
|||
|
||||
## Usage
|
||||
|
||||
Assuming that you have a cluster deployed and the apimodel originally used to deploy that cluster is stored at `_output/<dnsPrefix>/apimodel.json`, then you can collect logs running a command like:
|
||||
Assuming that you have a cluster deployed and the API model originally used to deploy that cluster is stored at `_output/<dnsPrefix>/apimodel.json`, then you can collect logs running a command like:
|
||||
|
||||
```console
|
||||
$ aks-engine get-logs \
|
||||
|
@ -42,7 +42,7 @@ $ aks-engine get-logs \
|
|||
|Parameter|Required|Description|
|
||||
|---|---|---|
|
||||
|--location|yes|Azure location of the cluster's resource group.|
|
||||
|--api-model|yes|Path to the generated api model for the cluster.|
|
||||
|--api-model|yes|Path to the generated API model for the cluster.|
|
||||
|--ssh-host|yes|FQDN, or IP address, of an SSH listener that can reach all nodes in the cluster.|
|
||||
|--linux-ssh-private-key|yes|Path to an SSH private key that can be used to create a remote session on the cluster Linux nodes.|
|
||||
|--linux-script|yes|Custom log collection script. It should produce file `/tmp/logs.zip`.|
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
# Using GPUs with Kubernetes
|
||||
|
||||
If you created a Kubernetes cluster with one or multiple agent pool(s) whose VM size is `Standard_NC*` or `Standard_NV*` you can schedule GPU workload on your cluster.
|
||||
If you created a Kubernetes cluster with one or multiple node pools whose VM size is `Standard_NC*` or `Standard_NV*` you can schedule GPU workloads on your cluster.
|
||||
The NVIDIA drivers are automatically installed on every GPU-enabled node in your cluster, so you don't need to install them manually, unless you require a specific version of the drivers. Currently, the installed driver is version 418.40.04.
|
||||
|
||||
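To consume a GPU, a workload requests the `nvidia.com/gpu` resource in its pod spec; a minimal smoke-test sketch (the image and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1  # schedule onto a node with a free GPU
```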
To make sure everything is fine, run `kubectl describe node <name-of-a-gpu-node>`. You should see the correct number of GPUs reported (this example shows 2 GPUs for an NC12 VM):
|
||||
|
|
|
@ -14,7 +14,7 @@ As illustrated on the figure above, we recommand to deploy the Kubernetes cluste
|
|||
|
||||
This document assumes that you are familiar with:
|
||||
|
||||
- Deploying Kubernetes cluster in a [custom VNET using AKS Engine](../../examples/vnet/README.md)
|
||||
- Deploying into a [custom VNET using AKS Engine](../../examples/vnet/README.md)
|
||||
- Azure [VPN Gateway](https://azure.microsoft.com/en-us/services/vpn-gateway/) and/or [Azure Express Route](https://azure.microsoft.com/en-us/services/expressroute/)
|
||||
- Azure [Virtual Network Peering](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview)
|
||||
|
||||
|
@ -27,13 +27,13 @@ The network topology must be well defined beforehand to enable peering between t
|
|||
### DNS
|
||||
|
||||
In a hybrid environment, you usually want to integrate with your on-premises DNS. There are two aspects to this. The first is registering the VMs that form the cluster, and using your local search domain when resolving other services. The second is getting the services running on Kubernetes to use the external DNS.
|
||||
To benefit the scaling capabilities of the cluster and to ensure resiliency to machine failure, every node configuration needs to be scripted and part of the initial template that aks-engine will deploy. To register the nodes in your DNS at startup, you need to define [an aks-engine extension](extensions.md) that will run your [DNS registration script](https://github.com/Azure/aks-engine/blob/master/extensions/dnsupdate/v1/register-dns.sh).
|
||||
To benefit from the scaling capabilities of the cluster and to ensure resiliency to machine failure, every node's configuration needs to be scripted and made part of the initial template that `aks-engine generate` creates. To register the nodes in your DNS at startup, you need to define [an aks-engine extension](extensions.md) that will run your [DNS registration script](https://github.com/Azure/aks-engine/blob/master/extensions/dnsupdate/v1/register-dns.sh).
|
||||
|
||||
In addition, you might want cluster services to address URLs outside the cluster using your on-premises DNS. To achieve this, you need to configure KubeDNS to use your existing nameserver as upstream. [This setup is well documented on the Kubernetes blog](https://kubernetes.io/blog/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes)
|
||||
|
||||
### Private Cluster
|
||||
|
||||
By default, Kubernetes deployment with aks-engine expose the the admin api publicly (and securely). This can be avoided. Using peering with private/on-premise virtual network with AKS Engine also allows you to create cloud-hosted [private cluster](features.md#private-cluster), with no endpoint exposed over the Internet.
|
||||
By default, Kubernetes deployments with AKS Engine expose the admin API publicly (and securely). This can be avoided. Using peering with a private/on-premises virtual network with AKS Engine also allows you to create a cloud-hosted [private cluster](features.md#private-cluster), with no endpoint exposed over the Internet.
|
||||
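Enabling this is a small switch in `kubernetesConfig` (see the private cluster docs linked above for the full set of options, including the jumpbox configuration):

```json
"kubernetesConfig": {
  "privateCluster": {
    "enabled": true
  }
}
```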
|
||||
## Kubernetes Network
|
||||
|
||||
|
@ -46,7 +46,7 @@ Deploying AKS Engine on Azure, you have 3 options of network policy. Azure CNI,
|
|||
#### Azure CNI
|
||||
|
||||
By default, AKS Engine uses the [**azure cni** network policy](../../examples/networkpolicy/README.md#azure-container-networking-default) plugin. This has some advantages and some consequences that must be considered when defining the network where we deploy the cluster. CNI provides integration with Azure subnet IP addressing, so that every pod created by Kubernetes is assigned an IP address from the corresponding subnet.
|
||||
All IP addresses are pre-allocated at provisionning time. By default, [aks-engine will pre-allocate 128 IPs per node](https://github.com/Azure/azure-container-networking/blob/master/docs/acs.md#enabling-azure-vnet-plugins-for-an-acs-kubernetes-cluster) on the subnet.
|
||||
All IP addresses are pre-allocated at provisioning time. By default, [AKS Engine will pre-allocate 128 IPs per node](https://github.com/Azure/azure-container-networking/blob/master/docs/acs.md#enabling-azure-vnet-plugins-for-an-acs-kubernetes-cluster) on the subnet.
|
||||
While this can be configured, new addresses will not be allocated dynamically. That means that you need to anticipate and plan for the maximum number of IP addresses you will need for the maximum scale.
|
||||
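A sketch of tuning the per-node pre-allocation, assuming the `ipAddressCount` property on a node pool profile (the count shown is illustrative):

```json
"agentPoolProfiles": [
  {
    "name": "agentpool1",
    "count": 3,
    "vmSize": "Standard_D2_v3",
    "ipAddressCount": 31
  }
]
```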
|
||||
Consequences:
|
||||
|
|
|
@ -6,7 +6,7 @@ AKS Engine enables you to source the following cluster configuration from Micros
|
|||
|
||||
For official Azure Key Vault documentation go [here](https://docs.microsoft.com/en-us/azure/key-vault/basic-concepts).
|
||||
|
||||
In order to use Key Vault as the source of cluster configuration secrets, you pass in a reference to the secret URI in your api model:
|
||||
In order to use Key Vault as the source of cluster configuration secrets, you pass in a reference to the secret URI in your API model:
|
||||
|
||||
|
||||
```json
|
||||
|
@ -50,7 +50,7 @@ In order to use Key Vault as the source of cluster configuration secrets, you pa
|
|||
|
||||
## Certificate Profile
|
||||
|
||||
For parameters referenced in the `properties.certificateProfile` section of the api model file, the value of each field should be formatted as:
|
||||
For parameters referenced in the `properties.certificateProfile` section of the API model file, the value of each field should be formatted as:
|
||||
|
||||
```json
|
||||
{
|
||||
|
|
|
@ -22,9 +22,9 @@ All VMs are in the same private VNET and are fully accessible to each other.
|
|||
|
||||
After completing this walkthrough you will know how to:
|
||||
|
||||
* Access Kubernetes cluster via SSH,
|
||||
* Access the Azure VM(s) running the Kubernetes control plane via SSH,
|
||||
* Deploy a simple Docker application and expose it to the world,
|
||||
* The location of the Kube config file and how to access the Kubernetes cluster remotely,
|
||||
* The location of the kubeconfig file and how to access the Kubernetes cluster remotely,
|
||||
* Use `kubectl exec` to run commands in a container,
|
||||
* And finally access the Kubernetes Dashboard.
|
||||
|
||||
|
|
|
@ -1,5 +1,7 @@
|
|||
# Monitoring Kubernetes Clusters
|
||||
|
||||
**NOTE:** These docs are stale! See https://github.com/Azure/aks-engine/issues/3176.
|
||||
|
||||
Monitoring your Kubernetes cluster is important to be able to see your cluster's health. By monitoring your cluster, you can see stats such as CPU, memory, and disk usage. Monitoring is supported for both Linux and Windows nodes in your cluster.
|
||||
|
||||
There are five main options to monitor your cluster:
|
||||
|
|
|
@ -2,15 +2,15 @@
|
|||
|
||||
## Prerequisites
|
||||
|
||||
All the commands in this guide require both the Azure CLI and `aks-engine`. Follow the [quickstart guide](../tutorials/quickstart.md) before continuing.
|
||||
All the commands in this guide require both the Azure `az` CLI tool and the `aks-engine` binary tool. Follow the [quickstart guide](../tutorials/quickstart.md) before continuing.
|
||||
|
||||
This guide assumes you already have deployed a cluster using aks-engine. For more details on how to do that see [deploy](../tutorials/deploy.md).
|
||||
This guide assumes you already have deployed a cluster using `aks-engine`. For more details on how to do that see [deploy](../tutorials/deploy.md).
|
||||
|
||||
## Scale
|
||||
|
||||
The `aks-engine scale` command can increase or decrease the number of nodes in an existing agent pool in an `aks-engine` Kubernetes cluster. Nodes will always be added or removed from the end of the agent pool. Nodes will be cordoned and drained before deletion.
|
||||
The `aks-engine scale` command can increase or decrease the number of nodes in an existing agent pool in an AKS Engine-created Kubernetes cluster. Nodes will always be added or removed from the end of the agent pool. Nodes will be cordoned and drained before deletion.
|
||||
|
||||
This guide will assume you have a cluster deployed and the apimodel originally used to deploy that cluster is stored at `_output/<dnsPrefix>/apimodel.json`. It will also assume there is a node pool named "agentpool1" in your cluster.
|
||||
This guide will assume you have a cluster deployed and the API model originally used to deploy that cluster is stored at `_output/<dnsPrefix>/apimodel.json`. It will also assume there is a node pool named "agentpool1" in your cluster.
|
||||
|
||||
To scale the cluster you will run a command like:
|
||||
|
||||
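A sketch of such a command, assuming the `--new-node-count`, `--node-pool`, and `--apiserver` flags (check `aks-engine scale --help` for the exact set; IDs and names are placeholders):

```console
$ aks-engine scale \
    --subscription-id 51ac25de-afdg-9201-d923-8d8e8e8e8e8e \
    --resource-group mycluster-rg \
    --location westus2 \
    --api-model _output/<dnsPrefix>/apimodel.json \
    --node-pool agentpool1 \
    --new-node-count 5 \
    --apiserver <dnsPrefix>.westus2.cloudapp.azure.com
```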
|
@ -32,7 +32,7 @@ This command will re-use the `apimodel.json` file inside the output directory as
|
|||
|--subscription-id|yes|The subscription id the cluster is deployed in.|
|
||||
|--resource-group|yes|The resource group the cluster is deployed in.|
|
||||
|--location|yes|The location the resource group is in.|
|
||||
|--api-model|yes|Relative path to the generated api model for the cluster.|
|
||||
|--api-model|yes|Relative path to the generated API model for the cluster.|
|
||||
|--client-id|depends| The Service Principal Client ID. This is required if the auth-method is set to service_principal/client_certificate|
|
||||
|--client-secret|depends| The Service Principal Client secret. This is required if the auth-method is set to service_principal|
|
||||
|--certificate-path|depends| The path to the file which contains the client certificate. This is required if the auth-method is set to client_certificate|
|
||||
|
|
|
@ -125,7 +125,7 @@ The upgrade operation is a long-running, successive set of ARM deployments, and
|
|||
|
||||
### Cluster-autoscaler + Availability Set
|
||||
|
||||
We don't recommend using `aks-engine upgrade` on clusters that have Availability Set (non-VMSS) agent pools `cluster-autoscaler` at this time.
|
||||
At this time, we don't recommend using `aks-engine upgrade` on clusters running the `cluster-autoscaler` addon that have Availability Set (non-VMSS) node pools.
|
||||
|
||||
<a name="force-upgrade"></a>
|
||||
## Forcing an upgrade
|
||||
|
@ -137,7 +137,7 @@ The upgrade operation takes an optional `--force` argument:
|
|||
force upgrading the cluster to desired version. Allows same version upgrades and downgrades.
|
||||
```
|
||||
|
||||
In some situations, you might want to bypass the AKS-Engine validation of your apimodel versions and cluster nodes versions. This is at your own risk and you should assess the potential harm of using this flag.
|
||||
In some situations, you might want to bypass the AKS Engine validation of your API model version and cluster node versions. This is at your own risk and you should assess the potential harm of using this flag.
|
||||
|
||||
The `--force` parameter instructs the upgrade process to:
|
||||
|
||||
|
|
|
@ -4,7 +4,7 @@ If you're trying to deploy Kubernetes with Windows the first time, be sure to ch
|
|||
|
||||
## Customizing Windows deployments
|
||||
|
||||
AKS Engine allows a lot more customizations available in the [docs](../), but here are a few important ones you should know for Windows deployments. Each of these are extra parameters you can add into the AKS Engine apimodel file (such as `kubernetes-windows.json` from the quick start) before running `aks-engine generate`.
|
||||
AKS Engine supports many more customizations, documented in the [docs](../), but here are a few important ones you should know for Windows deployments. Each of these is an extra parameter you can add into the AKS Engine API model file (such as `kubernetes-windows.json` from the quick start) before running `aks-engine generate`.
|
||||
|
||||
### Changing the OS disk size
|
||||
|
||||
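A sketch, assuming the `osDiskSizeGB` property applies to a Windows node pool (the pool name and size are illustrative):

```json
"agentPoolProfiles": [
  {
    "name": "windowspool1",
    "osType": "Windows",
    "osDiskSizeGB": 256
  }
]
```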
|
@ -345,7 +345,7 @@ Workaround:
|
|||
|
||||
#### Pods cannot ping default route or internet IPs
|
||||
|
||||
Affects: All clusters deployed by aks-engine
|
||||
Affects: All clusters created by AKS Engine
|
||||
|
||||
ICMP traffic is not routed between private Azure vNETs or to the internet.
|
||||
|
||||
|
|
|
@ -2,15 +2,15 @@
|
|||
|
||||
## Motivation
|
||||
|
||||
The primary motivation for producing AKS specific Windows VHDs is to improve the reliability and deployment time for the configuration phase of Windows nodes configured by aks-engine.
|
||||
The primary motivation for producing AKS specific Windows VHDs is to improve the reliability and deployment time for the configuration phase of Windows nodes configured by AKS Engine.
|
||||
By performing expensive configuration operations and pre-downloading installation artifacts as part of the VHD build process, we can accomplish both of these goals.
|
||||
|
||||
Producing and publishing AKS specific Windows VHDs to the Azure marketplace also allow us to get targeted windows patches to customers sooner.
|
||||
Producing and publishing AKS specific Windows VHDs to the Azure marketplace also allows us to get targeted Windows patches to customers sooner.
|
||||
Today the Windows/Windows Server team only publishes new images containing the latest B week patches. Several times recently, C week patches contained fixes for issues customers were facing running Kubernetes on Windows nodes. This means customers would either need to wait an additional 3-4 weeks to get these fixes (until the fixes are released in next month's cumulative B week update and new marketplace images are published), or some mechanism to deliver these patches to Windows nodes would need to be developed.
|
||||
|
||||
Lastly, publishing AKS specific Windows VHDs allows us to perform adequate testing on new patches before allowing customers to upgrade their Windows nodes.
|
||||
Given the challenging nature of validating private Windows fixes, we will work with the Windows team to incorporate Kubernetes testing before patches get released publicly.
|
||||
(see [Testing Private Fixes](#Testing-Private-Fixes) section below for more details)
|
||||
(see [Testing Private Fixes](#Testing-Private-Fixes) section below for more details)
|
||||
|
||||
## Build Process
|
||||
|
||||
|
@ -23,7 +23,7 @@ The build pipeline:
|
|||
- Downloads commonly used container images
|
||||
- Configures system settings (Windows Update, page file sizes, etc.)
|
||||
- Creates release notes detailing what is installed (features/QFEs/services) and what artifacts are cached on the image
|
||||
- Runs aks-engine E2E tests using the VHD produced in the same pipeline
|
||||
- Runs AKS Engine E2E tests using the VHD produced in the same pipeline
|
||||
- Note: A full upstream E2E test pass of Kubernetes will come later
|
||||
- Optionally copies the VHD to another Azure storage account for extra validation and/or publishing
|
||||
|
||||
|
@ -48,10 +48,10 @@ Occasionally it may be necessary to validate private fixes provided by the Windo
|
|||
## Usage in aks-engine
|
||||
|
||||
Both Windows and Linux nodes are configured by executing Custom Script Extensions as part of VM deployment operations.
|
||||
The scripts are generated by aks-engine which populates instance specific information from the [cluster definition API model](clusterdefinitions.md) and get embedded into the ARM templates produced by AKS engine. These scripts are enlightened to understand the work in conjunction with Windows AKS marketplace images to not duplicate configuration steps and/or utilize files cached during the VHD build process.
|
||||
The scripts are generated by `aks-engine generate` or `aks-engine deploy`, which populate instance-specific information from the [cluster definition API model](clusterdefinitions.md); the scripts are then embedded into the ARM templates produced by AKS Engine. These scripts are enlightened to work in conjunction with Windows AKS marketplace images so as not to duplicate configuration steps and/or to utilize files cached during the VHD build process.
|
||||
|
||||
[Parts/k8s/kuberneteswindowssetup.ps1](../../parts/k8s/kuberneteswindowssetup.ps1) and associated ps1 files are used as templates for the extension scripts.
|
||||
|
||||
It is an explicit goal to maintain aks-engine compatibility with Windows Server marketplace images published by MicrosoftWindowsServer (for most scenarios) and to treat the Windows AKS marketplace images as an optimization for the reasons stated above.
|
||||
It is an explicit goal to maintain AKS Engine compatibility with Windows Server marketplace images published by MicrosoftWindowsServer (for most scenarios) and to treat the Windows AKS marketplace images as an optimization for the reasons stated above.
|
||||
|
||||
It is a non-goal to produce Windows Server marketplace images with only patches installed.
|
||||
It is a non-goal to produce Windows Server marketplace images with only patches installed.
|
||||
|
|
|
@ -10,9 +10,9 @@
|
|||
- [Create a Resource Group and Service Principal](#create-a-resource-group-and-service-principal)
|
||||
- [Create a Resource Group and Service Principal (Windows)](#create-a-resource-group-and-service-principal-windows)
|
||||
- [Create a Resource Group and Service Principal (Mac+Linux)](#create-a-resource-group-and-service-principal-maclinux)
|
||||
- [Create an aks-engine apimodel](#create-an-aks-engine-apimodel)
|
||||
- [Filling out apimodel (Windows)](#filling-out-apimodel-windows)
|
||||
- [Filling out apimodel (Mac & Linux)](#filling-out-apimodel-mac--linux)
|
||||
- [Create an AKS Engine apimodel](#create-an-aks-engine-apimodel)
|
||||
- [Filling out API model (Windows)](#filling-out-apimodel-windows)
|
||||
- [Filling out API model (Mac & Linux)](#filling-out-apimodel-mac--linux)
|
||||
- [Generate Azure Resource Manager template](#generate-azure-resource-manager-template)
|
||||
- [Deploy the cluster](#deploy-the-cluster)
|
||||
- [Check that the cluster is up](#check-that-the-cluster-is-up)
|
||||
|
@ -27,7 +27,7 @@
|
|||
This guide will step through everything needed to build your first Kubernetes cluster and deploy a Windows web server on it. The steps include:
|
||||
|
||||
- Getting the right tools
|
||||
- Completing an AKS Engine apimodel which describes what you want to deploy
|
||||
- Completing an AKS Engine API model which describes what you want to deploy
|
||||
- Running AKS Engine to generate Azure Resource Model templates
|
||||
- Deploying your first Kubernetes cluster with Windows Server 2019 nodes
|
||||
- Managing the cluster from your Windows machine
|
||||
|
@ -315,7 +315,7 @@ After downloading that file, you will need to
|
|||
1. Set the ssh public key that will be used to log into the Linux VM
|
||||
1. Set the Azure service principal for the deployments
|
||||
|
||||
#### Filling out apimodel (Windows)
|
||||
#### Filling out API model (Windows)
|
||||
|
||||
You can use the same PowerShell window from earlier to run this next script to do all that for you. Be sure to replace `$dnsPrefix` with something unique and descriptive, and set `$windowsUser` and `$windowsPassword` to meet the requirements.
|
||||
|
||||
|
@ -349,7 +349,7 @@ $inJson.properties.servicePrincipalProfile.secret = $sp.password
|
|||
$inJson | ConvertTo-Json -Depth 5 | Out-File -Encoding ascii -FilePath "kubernetes-windows-complete.json"
|
||||
```
|
||||
|
||||
#### Filling out apimodel (Mac & Linux)
|
||||
#### Filling out API model (Mac & Linux)
|
||||
|
||||
Using the same terminal as before, you can use this script to download the template and fill it out. Be sure to set DNSPREFIX, WINDOWSUSER, and WINDOWSPASSWORD to meet the requirements.
|
||||
|
||||
|
|
|
@ -222,7 +222,7 @@ az network vnet subnet update \
|
|||
--ids "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME_VNET/providers/Microsoft.Network/VirtualNetworks/KUBERNETES_CUSTOM_VNET/subnets/KUBERNETES_SUBNET"
|
||||
```
|
||||
|
||||
... where `RESOURCE_GROUP_NAME_KUBE` is the name of the Resource Group that contains the Kubernetes cluster, `SUBSCRIPTION_ID` is the id of the Azure subscription that both the VNET & Cluster are in, `RESOURCE_GROUP_NAME_VNET` is the name of the Resource Group that the VNET is in, `KUBERNETES_SUBNET` is the name of the vnet subnet, and `KUBERNETES_CUSTOM_VNET` is the name of the custom VNET itself.
|
||||
... where `RESOURCE_GROUP_NAME_KUBE` is the name of the Resource Group that contains the AKS Engine-created Kubernetes cluster, `SUBSCRIPTION_ID` is the id of the Azure subscription that both the VNET & Cluster are in, `RESOURCE_GROUP_NAME_VNET` is the name of the Resource Group that the VNET is in, `KUBERNETES_SUBNET` is the name of the vnet subnet, and `KUBERNETES_CUSTOM_VNET` is the name of the custom VNET itself.
|
||||
|
||||
## Connect to your new cluster
|
||||
|
||||
|
|
|
@ -45,7 +45,7 @@ $ aks-engine deploy --subscription-id 51ac25de-afdg-9201-d923-8d8e8e8e8e8e \
|
|||
--location westus2 \
|
||||
--api-model examples/kubernetes.json
|
||||
|
||||
INFO[0000] new api model file has been generated during merge: /tmp/mergedApiModel619868596
|
||||
INFO[0000] new API model file has been generated during merge: /tmp/mergedApiModel619868596
|
||||
WARN[0002] apimodel: missing masterProfile.dnsPrefix will use "contoso-apple"
|
||||
INFO[0025] Starting ARM Deployment contoso-apple-1423145182 in resource group contoso-apple. This will take some time...
|
||||
INFO[0256] Finished ARM Deployment (contoso-apple-1423145182). Succeeded
|
||||
|
@ -56,7 +56,7 @@ INFO[0256] Finished ARM Deployment (contoso-apple-1423145182). Succeeded
|
|||
* `_output/contoso-apple-59769a59/azureuser_rsa`
|
||||
* `_output/contoso-apple-59769a59/kubeconfig/kubeconfig.westus2.json`
|
||||
|
||||
aks-engine generates kubeconfig files for each possible region. Access the new cluster by using the kubeconfig generated for the cluster's location. This example used `westus2`, so the kubeconfig is `_output/<clustername>/kubeconfig/kubeconfig.westus2.json`:
|
||||
`aks-engine` generates kubeconfig files for each possible region. Access the new cluster by using the kubeconfig generated for the cluster's location. This example used `westus2`, so the kubeconfig is `_output/<clustername>/kubeconfig/kubeconfig.westus2.json`:
|
||||
|
||||
```sh
|
||||
$ KUBECONFIG=_output/contoso-apple-59769a59/kubeconfig/kubeconfig.westus2.json kubectl cluster-info
|
||||
|
@ -67,7 +67,7 @@ Metrics-server is running at https://contoso-apple-59769a59.westus2.cloudapp.azu
|
|||
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
|
||||
```
|
||||
|
||||
Administrative note: By default, the directory where aks-engine stores cluster configuration (`_output/contoso-apple` above) won't be overwritten as a result of subsequent attempts to deploy a cluster using the same `--dns-prefix`) To re-use the same resource group name repeatedly, include the `--force-overwrite` command line option with your `aks-engine deploy` command. On a related note, include an `--auto-suffix` option to append a randomly generated suffix to the dns-prefix to form the resource group name, for example if your workflow requires a common prefix across multiple cluster deployments. Using the `--auto-suffix` pattern appends a compressed timestamp to ensure a unique cluster name (and thus ensure that each deployment's configuration artifacts will be stored locally under a discrete `_output/<resource-group-name>/` directory).
|
||||
Administrative note: By default, the directory where `aks-engine` stores cluster configuration (`_output/contoso-apple` above) won't be overwritten as a result of subsequent attempts to deploy a cluster using the same `--dns-prefix`. To re-use the same resource group name repeatedly, include the `--force-overwrite` command line option with your `aks-engine deploy` command. On a related note, include an `--auto-suffix` option to append a randomly generated suffix to the dns-prefix to form the resource group name, for example if your workflow requires a common prefix across multiple cluster deployments. Using the `--auto-suffix` pattern appends a compressed timestamp to ensure a unique cluster name (and thus ensure that each deployment's configuration artifacts will be stored locally under a discrete `_output/<resource-group-name>/` directory).
|
||||
|
||||
**Note**: If the cluster is using an existing VNET please see the [Custom VNET](custom-vnet.md) feature documentation for additional steps that must be completed after cluster provisioning.
|
||||
|
||||
|
@ -100,7 +100,7 @@ If you don't have an SSH key [cluster operators may generate a new one](https://
|
|||
|
||||
### Step 2: Create a Service Principal
|
||||
|
||||
Kubernetes clusters have integrated support for various cloud providers as core functionality. On Azure, aks-engine uses a Service Principal to interact with Azure Resource Manager (ARM). Follow the [instructions](../topics/service-principals.md) to create a new service principal and grant it the necessary IAM role to create Azure resources.
|
||||
Kubernetes clusters have integrated support for various cloud providers as core functionality. On Azure, `aks-engine` uses a Service Principal to interact with Azure Resource Manager (ARM). Follow the [instructions](../topics/service-principals.md) to create a new service principal and grant it the necessary IAM role to create Azure resources.
|
||||
|
||||
### Step 3: Edit your Cluster Definition
|
||||
|
||||
|
@ -163,7 +163,7 @@ k8s-master-22116803-0 XXXXXXXXXXXX southeastasia
|
|||
az vm show -g <resource group of cluster> -n <name of Master or agent VM> --query tags
|
||||
```
|
||||
|
||||
Sample JSON out of this command is shown below. This command can also be used to check the aks-engine version which was used to create the cluster
|
||||
Sample JSON output of this command is shown below. This command can also be used to check the `aks-engine` version that was used to create the cluster.
|
||||
|
||||
```json
|
||||
{
|
||||
|
|
|
@ -1,38 +1,38 @@
|
|||
# Quickstart Guide
|
||||
|
||||
The Azure Kubernetes Engine (`aks-engine`) generates ARM (Azure Resource Manager) templates for Kubernetes clusters on Microsoft Azure. The input to aks-engine is a cluster definition file which describes the desired cluster, including orchestrator, features, and agents. The structure of the input files is very similar to the public API for Azure Kubernetes Service.
|
||||
AKS Engine (`aks-engine`) generates ARM (Azure Resource Manager) templates for Kubernetes clusters on Microsoft Azure. The input to the `aks-engine` binary is a cluster definition JSON file (referred to throughout the docs interchangeably as either "cluster config", "cluster definition", or "API model") which describes the desired cluster configuration, including enabled or disabled features, for both the control plane running on "master" VMs and one or more node pools (referred to throughout the docs interchangeably as either "node pools" or "agent pools").
|
||||
|
||||
## Prerequisites
|
||||
|
||||
The following prerequisites are required for a successful use of AKS Engine.
|
||||
The following prerequisites are required:
|
||||
|
||||
1. An [Azure Subscription][azure]
|
||||
1. The [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)
|
||||
|
||||
<a href="#install-aks-engine"></a>
|
||||
|
||||
## Install AKS Engine
|
||||
## Install the `aks-engine` binary
|
||||
|
||||
Binary downloads for the latest version of aks-engine are available [on Github](https://github.com/Azure/aks-engine/releases/latest). Download AKS Engine for your operating system, extract the binary and copy it to your `$PATH`.
|
||||
Binary downloads for the latest version of AKS Engine are available [on Github](https://github.com/Azure/aks-engine/releases/latest). Download the package for your operating system, and extract the `aks-engine` binary (and optionally integrate it to your `$PATH` for more convenient CLI usage).
|
||||
|
||||
You can also choose to install aks-engine using [gofish][gofish-about]. To do so, execute the command `gofish install aks-engine`. You can install gofish following the [instructions][gofish-install] for your OS.
|
||||
You can also choose to install the `aks-engine` binary using [gofish][gofish-about]. To do so, execute the command `gofish install aks-engine`. You can install gofish following the [instructions][gofish-install] for your OS.
|
||||
|
||||
On macOS, you can install aks-engine with [Homebrew][homebrew]. Run the command `brew install Azure/aks-engine/aks-engine` to do so. You can install Homebrew following these [instructions][homebrew-install].
|
||||
On macOS, you can install the `aks-engine` binary with [Homebrew][homebrew]. Run the command `brew install Azure/aks-engine/aks-engine` to do so. You can install Homebrew following these [instructions][homebrew-install].
|
||||
|
||||
On Windows, you can install aks-engine via [Chocolatey][choco] by executing the command `choco install aks-engine`. You can install Chocolatey following these [instructions][choco-install].
|
||||
On Windows, you can install `aks-engine.exe` via [Chocolatey][choco] by executing the command `choco install aks-engine`. You can install Chocolatey following these [instructions][choco-install].
|
||||
|
||||
On Linux, if you prefer, you can install aks-engine via install script doing:
|
||||
On Linux, if you prefer, you can install the `aks-engine` binary via the install script:
|
||||
```bash
|
||||
$ curl -o get-akse.sh https://raw.githubusercontent.com/Azure/aks-engine/master/scripts/get-akse.sh
|
||||
$ chmod 700 get-akse.sh
|
||||
$ ./get-akse.sh
|
||||
```
|
||||
|
||||
If you would prefer to build AKS Engine from source, or you are interested in contributing to AKS Engine, see [the developer guide][developer-guide] for more information.
|
||||
If you would prefer to build the `aks-engine` binary from source, or if you're interested in contributing to AKS Engine, see [the developer guide][developer-guide] for more information.
|
||||
|
||||
## Completion
|
||||
|
||||
AKS Engine supports bash completion. To enable this, add the following to your `.bashrc` or `~/.profile`
|
||||
`aks-engine` supports bash completion. To enable this, add the following to your `.bashrc` or `~/.profile`
|
||||
|
||||
```bash
|
||||
source <(aks-engine completion)
|
||||
|
@ -40,20 +40,21 @@ source <(aks-engine completion)
|
|||
|
||||
## Deploy your First Cluster
|
||||
|
||||
`aks-engine` reads a cluster definition which describes the size, shape, and configuration of your cluster. This guide takes the default configuration of one master and two Linux agents. If you would like to change the configuration, edit `examples/kubernetes.json` before continuing.
|
||||
`aks-engine` reads a cluster definition which describes the size, shape, and configuration of your cluster. This guide uses the default configuration: a control plane with one master VM, and a single node pool with two Linux nodes. If you would like to change the configuration, edit `examples/kubernetes.json` before continuing.
|
||||
|
||||
The `aks-engine deploy` command automates the creation of a Service Principal, Resource Group, and SSH key for your cluster. If operators need more control or are interested in the individual steps, see the ["Long Way" section below](#aks-engine-the-long-way).
|
||||
|
||||
**NOTE:** AKS Engine creates a _cluster_; it _doesn't_ create an Azure Kubernetes Service (AKS) resource. Clusters that you create using the `aks-engine` command (or ARM templates generated by the `aks-engine` command) won't show up as AKS resources, for example when you run `az aks list`. Think of `aks-engine` as the, err, engine which AKS uses to create clusters: you can use the same engine yourself, but AKS won't know about the results.
|
||||
**NOTE:** AKS Engine creates a _cluster_; it _doesn't_ create an Azure Kubernetes Service (AKS) resource. Clusters that you create using the `aks-engine` command (or ARM templates generated by the `aks-engine` command) won't show up as AKS resources, for example when you run `az aks list`. The resultant resource group + IaaS will be entirely under your own control and management, and unknown to AKS or any other Azure service.
|
||||
|
||||
After the cluster is deployed, the upgrade and [scale][] commands can be used to make updates to your cluster.
|
||||
After the cluster is deployed, the [upgrade][] and [scale][] commands may be used to make updates to your cluster, with some conditions (the [upgrade][] and [scale][] docs enumerate these conditions).
|
||||
|
||||
### Gather Information
|
||||
|
||||
* The subscription in which you would like to provision the cluster. This is a UUID which can be found with `az account list -o table`.
|
||||
* Proper access rights within the subscription; especially the right to create and assign [service principals][sp] to applications
|
||||
* A `dnsPrefix` which forms part of the hostname for your cluster (e.g. staging, prodwest, blueberry). The DNS prefix must be unique so pick a random name.
|
||||
* A location to provision the cluster e.g. `westus2`.
|
||||
* A `dnsPrefix` which forms part of the hostname for your cluster (e.g. staging, prodwest, blueberry). In the [example](/examples/kubernetes.json) we're using, we are not building a private cluster (a `true` value of `properties.orchestratorProfile.kubernetesConfig.privateCluster.enabled` indicates a private cluster configuration; see [this example](/examples/kubernetes-config/kubernetes-private-cluster.json)), and so we have to consider that the value of `dnsPrefix` *must* produce a unique fully-qualified domain name DNS record composed of `<value of dnsPrefix>.<value of location>.cloudapp.azure.com`. Depending on the uniqueness of your `dnsPrefix`, it may be a good idea to pre-check the availability of the resultant DNS record prior to deployment. (Also see the `--auto-suffix` option below if this is onerous.)
|
||||
* **NOTE:** The `location` value may be omitted in your cluster definition JSON file if you are deploying to Azure Public Cloud; it will be automatically inferred during ARM template deployment as equal to the location of the resource group at the time of resource group creation. Also **NOTE:** the ".cloudapp.azure.com" FQDN suffix example above assumes an Azure Public Cloud deployment. When you provide a `location` value that maps to a non-public cloud, the FQDN suffix will be concatenated appropriately for that supported cloud environment, e.g., ".cloudapp.chinacloudapi.cn" for mooncake (Azure China Cloud), or ".cloudapp.usgovcloudapi.net" for usgov (Azure Government Cloud).
|
||||
* Choose a location to provision the cluster e.g. `westus2`.
|
||||
|
||||
```sh
|
||||
$ az account list -o table
|
||||
|
@ -90,6 +91,16 @@ $ az group create --name contoso-apple --location westus2
|
|||
}
|
||||
```
|
||||
|
||||
Again, because in this example we are deploying to Azure Public Cloud, we may omit the `location` property from our cluster configuration JSON; although strictly speaking we could add this to our [example](/examples/kubernetes.json) and it would be equivalent:
|
||||
|
||||
```
|
||||
{
|
||||
"apiVersion": "vlabs",
|
||||
"location": "westus2",
|
||||
"properties": {
|
||||
(etc ...)
|
||||
```
|
||||
|
||||
Once that's done, we need to create a [service principal][sp] for the Kubernetes cluster so it can talk to any resources that are a part of the same resource group.
|
||||
|
||||
```console
|
||||
|
@ -118,18 +129,18 @@ $ aks-engine deploy --subscription-id 51ac25de-afdg-9201-d923-8d8e8e8e8e8e \
|
|||
--set servicePrincipalProfile.clientId="47a62f0b-917c-4def-aa85-9b010455e591" \
|
||||
--set servicePrincipalProfile.secret="26054d2b-799b-448e-962a-783d0d6f976b"
|
||||
|
||||
INFO[0000] new api model file has been generated during merge: /tmp/mergedApiModel619868596
|
||||
INFO[0000] new API model file has been generated during merge: /tmp/mergedApiModel619868596
|
||||
WARN[0002] apimodel: missing masterProfile.dnsPrefix will use "contoso-apple"
|
||||
INFO[0025] Starting ARM Deployment contoso-apple-1423145182 in resource group contoso-apple. This will take some time...
|
||||
INFO[0256] Finished ARM Deployment (contoso-apple-1423145182). Succeeded
|
||||
```
|
||||
|
||||
`aks-engine` will output Azure Resource Manager (ARM) templates, SSH keys, and a kubeconfig file in `_output/contoso-apple-59769a59` directory:
|
||||
`aks-engine` will output ARM templates, SSH keys, and a kubeconfig file (a specification that may be used as input to the `kubectl` command to establish a privileged connection to the Kubernetes apiserver; see [here](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) for more documentation) in the `_output/contoso-apple-59769a59` directory:
|
||||
|
||||
* `_output/contoso-apple-59769a59/azureuser_rsa`
|
||||
* `_output/contoso-apple-59769a59/kubeconfig/kubeconfig.westus2.json`
|
||||
|
||||
aks-engine generates kubeconfig files for each possible region. Access the new cluster by using the kubeconfig generated for the cluster's location. This example used `westus2`, so the kubeconfig is `_output/<clustername>/kubeconfig/kubeconfig.westus2.json`:
|
||||
`aks-engine` generates kubeconfig files for each possible region. Access the new cluster by using the kubeconfig generated for the cluster's location. This example used `westus2`, so the kubeconfig is `_output/<clustername>/kubeconfig/kubeconfig.westus2.json`:
|
||||
|
||||
```sh
|
||||
$ KUBECONFIG=_output/contoso-apple-59769a59/kubeconfig/kubeconfig.westus2.json kubectl cluster-info
|
||||
|
@ -150,30 +161,22 @@ Administrative note: By default, the directory where aks-engine stores cluster c
|
|||
|
||||
This example uses the more traditional method of generating raw ARM templates, which are submitted to Azure using the `az group deployment create` command.
|
||||
|
||||
For this example, we will use the same information as before: the subscription id is `51ac25de-afdg-9201-d923-8d8e8e8e8e8e`, the DNS prefix is `contoso-apple`, and the location is `westus2`.
|
||||
|
||||
Before we do anything, we need to log in to Azure:
|
||||
|
||||
```console
|
||||
$ az login
|
||||
Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
|
||||
You have logged in. Now let us find all the subscriptions to which you have access...
|
||||
```
|
||||
For this example, we will use the same information as before: the subscription id is `51ac25de-afdg-9201-d923-8d8e8e8e8e8e`, the DNS prefix is `contoso-apple-5eac6ed8` (note the manual use of a unique string suffix to better ensure uniqueness), and the location is `westus2`.
|
||||
|
||||
We will also need to generate an SSH key. When creating VMs, you will need an SSH RSA key for SSH access. Use the following articles to create your SSH RSA key:
|
||||
|
||||
1. Windows - https://www.digitalocean.com/community/tutorials/how-to-create-ssh-keys-with-putty-to-connect-to-a-vps
|
||||
1. Mac and Linux - https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
|
||||
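On Mac, Linux, or a recent Windows build with OpenSSH available, generating a key is a one-liner (the key type and size shown are common defaults, not requirements):

```console
$ ssh-keygen -t rsa -b 2048
```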
|
||||
Next, we'll create a resource group. A resource group is a container that holds related resources for an Azure solution. In Azure, you logically group related resources such as storage accounts, virtual networks, and virtual machines (VMs) to deploy, manage, and maintain them as a single entity. In this case, we want to deploy, manage and maintain the whole Kubernetes cluster as a single entity.
|
||||
Next, we'll create a resource group as we did in the "deploy" method above.
|
||||
|
||||

```console
$ az group create --name contoso-apple --location westus2
$ az group create --name contoso-apple-5eac6ed8 --location westus2
{
  "id": "/subscriptions/51ac25de-afdg-9201-d923-8d8e8e8e8e8e/resourceGroups/contoso-apple",
  "id": "/subscriptions/51ac25de-afdg-9201-d923-8d8e8e8e8e8e/resourceGroups/contoso-apple-5eac6ed8",
  "location": "westus2",
  "managedBy": null,
  "name": "contoso-apple",
  "name": "contoso-apple-5eac6ed8",
  "properties": {
    "provisioningState": "Succeeded"
  },
@@ -181,10 +184,10 @@ $ az group create --name contoso-apple --location westus2
}
```

Once that's done, we need to create a [service principal][sp] for the Kubernetes cluster so it can talk to any resources that are a part of the same resource group.
Again, we need to create a [service principal][sp].

```console
$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/51ac25de-afdg-9201-d923-8d8e8e8e8e8e/resourceGroups/contoso-apple"
$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/51ac25de-afdg-9201-d923-8d8e8e8e8e8e/resourceGroups/contoso-apple-5eac6ed8"
{
  "appId": "47a62f0b-917c-4def-aa85-9b010455e591",
  "displayName": "azure-cli-2019-01-11-22-22-06",
@@ -194,13 +197,11 @@ $ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/51ac25d
}
```

Make a note of the `appId` and the `password` fields, as we will be providing them in the next step.

AKS Engine consumes a cluster definition which outlines the desired shape, size, and configuration of Kubernetes. There are a number of features that can be enabled through the cluster definition: check the `examples` directory for a number of... examples.
We again make a note of the `appId` and the `password` fields, as we will be providing them in the next step.
Edit the [simple Kubernetes cluster definition](/examples/kubernetes.json) and fill out the required values (a filled-in sketch follows this list):

* `dnsPrefix`: must be a region-unique name and will form part of the hostname (e.g. myprod1, staging, leapingllama) - be unique!
* `dnsPrefix`: in this example we're using "contoso-apple-5eac6ed8"
* `keyData`: must contain the public portion of the SSH key we generated - this will be associated with the `adminUsername` value found in the same section of the cluster definition (e.g. 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABA....')
* `clientId`: this is the service principal's appId UUID or name from earlier
* `secret`: this is the service principal's password or randomly-generated password from earlier
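
A sketch of the relevant fragments once filled in (the VM sizes, pool name, and admin username below are illustrative defaults, and the key and secret values are abbreviated placeholders):

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "contoso-apple-5eac6ed8",
      "vmSize": "Standard_D2_v3"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v3"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABA..."
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "47a62f0b-917c-4def-aa85-9b010455e591",
      "secret": "<the service principal password from earlier>"
    }
  }
}
```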
@@ -209,7 +210,7 @@ Optional: attach to an existing virtual network (VNET). Details [here][custom-vn
### Generate the Templates

The generate command takes a cluster definition and outputs a number of templates which describe your Kubernetes cluster. By default, `generate` will create a new directory named after your cluster nested in the `_output` directory. If your dnsPrefix was `contoso-apple`, your cluster templates would be found in `_output/contoso-apple-`.
The generate command takes a cluster definition and outputs a number of templates which describe your Kubernetes cluster. By default, `generate` will create a new directory named after your cluster nested in the `_output` directory. If your dnsPrefix was `contoso-apple-5eac6ed8`, your cluster templates would be found in `_output/contoso-apple-5eac6ed8-`.

Run `aks-engine generate examples/kubernetes.json`
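
If generation succeeds, the output directory should contain the ARM templates and supporting artifacts, roughly along these lines (a sketch; exact contents may vary by aks-engine version):

```console
$ aks-engine generate examples/kubernetes.json
$ ls _output/contoso-apple-5eac6ed8/
apimodel.json  azuredeploy.json  azuredeploy.parameters.json  azureuser_rsa  kubeconfig
```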
@@ -236,12 +237,18 @@ Using the CLI:

```console
$ az group deployment create \
  --name "contoso-apple-k8s" \
  --resource-group "contoso-apple" \
  --template-file "./_output/contoso-apple-abc123/azuredeploy.json" \
  --parameters "./_output/contoso-apple-abc123/azuredeploy.parameters.json"
  --resource-group "contoso-apple-5eac6ed8" \
  --template-file "./_output/contoso-apple-5eac6ed8/azuredeploy.json" \
  --parameters "./_output/contoso-apple-5eac6ed8/azuredeploy.parameters.json"
```

**Note**: If the cluster is using an existing VNET, please see the [Custom VNET][custom-vnet] feature documentation for additional steps that must be completed after cluster provisioning.

When your ARM template deployment is complete, it should return some JSON output and a `0` exit code. You now have a Kubernetes cluster with the (mostly complete) set of default configurations.

```sh
export KUBECONFIG=_output/contoso-apple-5eac6ed8/kubeconfig/kubeconfig.westus2.json
```

Now you're ready to start using your Kubernetes cluster with `kubectl`!
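
For example, with `KUBECONFIG` exported as above, a quick sanity check might look like this:

```sh
kubectl cluster-info
kubectl get nodes
```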

[azure]: https://azure.microsoft.com/
[choco]: https://chocolatey.org/
@@ -37,7 +37,7 @@ Available Commands:
  help        Help about any command
  scale       Scale an existing Kubernetes cluster
  upgrade     Upgrade an existing Kubernetes cluster
  version     Print the version of AKS Engine
  version     Print the version of aks-engine

Flags:
      --debug   enable verbose debug logs
@@ -13,7 +13,7 @@ The remaining documentation below will assume all node pools are VMSS.
# Example

Here's a simple example of a cluster configuration (api model) that includes the cluster-autoscaler addon:
Here's a simple example of a cluster configuration (API model) that includes the cluster-autoscaler addon:

```json
{
@@ -227,7 +227,7 @@ By default we set the mode to `"EnsureExists"` so that you are able to continous

If you are already running a cluster built via `aks-engine` v0.43._n_ or earlier with the AKS Engine-provided `cluster-autoscaler` addon enabled, you have been running a cluster-autoscaler configuration that is only aware of the first VMSS node pool in your cluster. If you run `aks-engine upgrade` against the cluster using `aks-engine` v0.44._n_ or later, the `cluster-autoscaler` addon configuration will be automatically updated to the current addon spec as outlined above, including per-pool configuration, and with all the documented cluster-autoscaler runtime configuration options (default values will be assigned). The per-pool addon spec update will adhere to the following logic:

- For each additional pool in the cluster, cluster-autoscaler will be configured with a `min-nodes` and `max-nodes` value equal to the pool's `count` value in the api model (i.e., the number of current nodes in the pool)
- For each additional pool in the cluster, cluster-autoscaler will be configured with a `min-nodes` and `max-nodes` value equal to the pool's `count` value in the API model (i.e., the number of current nodes in the pool)

The above logic essentially engages cluster-autoscaler against these node pools, but configures the scaling mechanism not to scale up or down, assuming the number of nodes in the pool stays static over time. To maintain the `cluster-autoscaler` configuration over time, you may administer its configuration via `kubectl edit deployment cluster-autoscaler -n kube-system`. For per-pool configuration, look for the `--nodes=` lines that correlate with the specific pool. To remove cluster-autoscaler enforcement entirely from those pools, simply remove the line with the `--nodes=` reference to the pool you wish to no longer use with cluster-autoscaler. To modify the min and max values, simply change the integer values in that line that correlate to min/max. E.g.:
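
Upstream cluster-autoscaler encodes each pool as a `--nodes=<min>:<max>:<scaling group>` argument on the deployment's container command, so the edited section might look roughly like this (the VMSS names below are illustrative):

```yaml
spec:
  containers:
  - command:
    - ./cluster-autoscaler
    # one --nodes entry per pool, in min:max:<VMSS name> form
    - --nodes=1:3:k8s-pool1-49584119-vmss
    - --nodes=1:10:k8s-pool2-49584119-vmss
```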
@@ -1,10 +1,10 @@
# Azure Key Vault FlexVolume Add-on

[The Azure Key Vault FlexVolume](https://github.com/Azure/kubernetes-keyvault-flexvol) integrates Azure Key Vault with Kubernetes via a FlexVolume.
[The Azure Key Vault FlexVolume](https://github.com/Azure/kubernetes-keyvault-flexvol) integrates Azure Key Vault with Kubernetes via a FlexVolume.

With the Azure Key Vault FlexVolume, developers can access application-specific secrets, keys, and certs stored in Azure Key Vault directly from their pods.

Add this add-on to your apimodel as shown below to automatically enable Key Vault FlexVolume in your new Kubernetes cluster.
Add this add-on to your API model as shown below to automatically enable Key Vault FlexVolume in your new Kubernetes cluster.

```json
{
@@ -67,7 +67,7 @@ keyvault-flexvolume-z6jm6 1/1 Running 0 3m

Follow the README at https://github.com/Azure/kubernetes-keyvault-flexvol for getting-started steps.
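
As a rough sketch of how a pod consumes a Key Vault secret through the FlexVolume driver (option names follow the upstream README and may have changed since; the vault name, object name, and credentials secret below are placeholders):

```yaml
volumes:
- name: keyvault
  flexVolume:
    driver: "azure/kv"
    secretRef:
      name: kvcreds                      # Kubernetes secret holding service principal credentials
    options:
      keyvaultname: "contosokeyvault"    # placeholder vault name
      keyvaultobjectnames: "mysecret"    # placeholder object name
      keyvaultobjecttypes: "secret"
      resourcegroup: "contoso-apple-5eac6ed8"
      subscriptionid: "51ac25de-afdg-9201-d923-8d8e8e8e8e8e"
      tenantid: "<tenant id>"
```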

##
##

To update resources:

```json
@@ -93,4 +93,4 @@ To update resources:
## Supported Orchestrators

Kubernetes
Kubernetes
@@ -4,6 +4,6 @@
AKS Engine enables you to create a customized Kubernetes cluster on Microsoft Azure with attached disks.

The example JSON apimodel file in this directory shows you how to configure up to 4 attached disks. Disks can range from 1 to 1024 GB in size.
The example JSON API model file in this directory shows you how to configure up to 4 attached disks. Disks can range from 1 to 1024 GB in size.

1. **kubernetes.json** - deploying and using [Kubernetes](../../docs/tutorials/deploy.md)
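
For reference, attached disks are declared per pool via the `diskSizesGB` array in the API model; a minimal sketch (the pool name, VM size, and disk sizes below are illustrative):

```json
"agentPoolProfiles": [
  {
    "name": "agentpool1",
    "count": 3,
    "vmSize": "Standard_D2_v3",
    "diskSizesGB": [128, 128, 256, 512]
  }
]
```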
@@ -4,7 +4,7 @@ aks-engine supports creating a Kubernetes cluster with more than one node pool.
A cluster with multiple node pools can help you schedule CPU-intensive jobs to VM nodes with high processing power, or I/O intensive jobs to VMs with the fastest storage. Use [nodeSelectors][] or [resource requests][] to ensure that Pods are scheduled to nodes in the appropriate pool.
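
For example, a minimal `nodeSelector` sketch, assuming the pool name is surfaced as a node label (the label key and pool name below are illustrative; run `kubectl get nodes --show-labels` on your cluster to find the real key):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-heavy-job
spec:
  nodeSelector:
    agentpool: cpupool    # illustrative label; verify against your nodes
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```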

A complete example is contained in the `multipool.json` apimodel in this directory. To add a node pool to an existing apimodel, just add another entry to the `agentPoolProfile` section:
A complete example is contained in the `multipool.json` API model in this directory. To add a node pool to an existing apimodel, just add another entry to the `agentPoolProfile` section:

```json
"agentPoolProfiles": [
|
@ -28,7 +28,7 @@ This template will deploy the [Kubernetes Datastore backed version of Calico](ht
|
|||
|
||||
If deploying on a K8s 1.8 or later cluster, then egress policies are also supported!
|
||||
|
||||
To understand how to deploy this template, please read the baseline [Kubernetes](../../docs/tutorials/deploy.md) document, and use the appropriate **kubernetes-calico-[azure|kubenet].json** example file in this folder as an api model reference.
|
||||
To understand how to deploy this template, please read the baseline [Kubernetes](../../docs/tutorials/deploy.md) document, and use the appropriate **kubernetes-calico-[azure|kubenet].json** example file in this folder as an API model reference.
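
In the API model, Calico is selected via the `networkPolicy` setting under `kubernetesConfig`; a minimal sketch of the relevant fragment:

```json
"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "kubernetesConfig": {
    "networkPolicy": "calico"
  }
}
```
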
### Post installation
@@ -1,7 +1,7 @@
# prometheus-grafana Extension

This is the prometheus-grafana extension. Add this extension to the api model you pass as input into aks-engine as shown below to automatically enable prometheus and grafana in your new Kubernetes cluster.
This is the prometheus-grafana extension. Add this extension to the API model you pass as input into aks-engine as shown below to automatically enable prometheus and grafana in your new Kubernetes cluster.

```
{