* add Helm Tiller to Kubernetes addon modules
This implements feature request #854 by adding the Tiller manifest and
supporting configuration to the Kubernetes addon modules.
Closes #854
* remove defaults from manifests. Add addonmanager labels
* add service account and more reformatting
* add resource request and limits to the tiller deployment
* add TillerBase container repo and manifest fixes
* fixed typo in path for tiller
* fix indents
* Fix for 1.5 rbac
* further updates for 1.5 rbac
* use default tiller const
* Enable k8s 1.7.2 release
* remove version-specific pieces of the heapster deployment. Roll back resizer to 1.7
* rollback resizer to 1.7
* fixed typo in version
* Add new storageclasses for managed disk types
* add storagetier label to nodes
* add newlines
* Add error handling to getStorageAccountType function
Remove toLower from storage tier name
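A minimal Go sketch of what the hardened function might look like, returning an error instead of silently defaulting when the VM size name is malformed; the split-on-underscore heuristic is an assumption for illustration, not necessarily the exact acs-engine logic:

```go
package main

import (
	"fmt"
	"strings"
)

// getStorageAccountType maps an Azure VM size (e.g. "Standard_DS2_v2") to a
// managed disk storage account type. Sizes whose family segment contains an
// "s" support premium storage. A malformed size name is surfaced as an error
// rather than being silently mapped to a default.
func getStorageAccountType(sizeName string) (string, error) {
	spl := strings.Split(sizeName, "_")
	if len(spl) < 2 {
		return "", fmt.Errorf("invalid VM size name: %q", sizeName)
	}
	if strings.Contains(strings.ToLower(spl[1]), "s") {
		return "Premium_LRS", nil
	}
	return "Standard_LRS", nil
}

func main() {
	for _, size := range []string{"Standard_DS2_v2", "Standard_D2_v2", "bogus"} {
		t, err := getStorageAccountType(size)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s -> %s\n", size, t)
	}
}
```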
* fixed indentation
* add StorageClass resources to 1.5
* added retries to apt-get in runcmd for kubernetes cloudinit
* Added retries for systemctl enable commands as part of kubernetes custom script. (#853)
* Change exit code to a different number.
Introducing managed disks for Kubernetes master VMs, for both the OS disk and the etcd disk, broke the upgrade operation:
The upgrade operation was supported on unmanaged disks only, so making managed disks the default StorageProfile broke upgrades out of the box.
The switch to managed disks also started using a default disk name assigned by the Disk RP (vs. one assigned by ACS Engine, as for other resources). This is problematic in several ways:
The main one is having to rely on and understand the Disk RP's naming convention to discover the right etcd disk for each master VM. This might not be a big issue in the RP, because disk names/IDs can be saved in the database, but from the ACS Engine standpoint disk names need to be deterministic for operations to be idempotent.
It also adds the unnecessary complexity of loading and editing the template during upgrade with the disk name generated by the Disk RP.
This PR adds support for upgrading clusters whose master VMs use managed disks.
The code has been updated to use a deterministic name for etcd disks. However, any cluster created between June 22nd and the merge of this PR still has non-deterministic etcd disk names.
Change the upgrade template to include a managedDisk section when attaching an existing etcd disk.
Pending fixes:
Support attaching auto-generated etcd disk names during upgrade.
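As a rough illustration of the deterministic-naming point above, a minimal Go sketch; buildEtcdDiskName and the exact "-etcddisk" suffix are hypothetical, not the actual acs-engine helper:

```go
package main

import "fmt"

// buildEtcdDiskName illustrates the deterministic naming scheme: derive the
// etcd data disk name from the master VM name instead of accepting whatever
// default name the Disk RP assigns. A rerun of the upgrade then resolves the
// same disk for the same VM without consulting the Disk RP's convention.
func buildEtcdDiskName(masterVMName string) string {
	return fmt.Sprintf("%s-etcddisk", masterVMName)
}

func main() {
	fmt.Println(buildEtcdDiskName("k8s-master-12345678-0"))
	// k8s-master-12345678-0-etcddisk
}
```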
- add support for 1.7.0 on both linux and windows
- rename KubernetesLatest to KubernetesDefaultVersion
- fix typo in Kubernetes157 comment
- set Kubernetes166 as the default while 1.7.0 bakes in
- reordered provision.sh variables to accommodate agent usage
- added backoff vars to agent resources template
- re-ordered backoff vars in master resources template to accommodate changes
* acs-engine configs for backoff (an illustrative config struct is sketched below)
* errata
* large cluster support in 1.6.6 only at this point
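For reference, an illustrative Go struct showing the kind of backoff and rate-limit knobs these items surface for large clusters; the field and JSON names mirror the Kubernetes Azure cloud provider's settings and are assumptions about the API model, not its exact definition:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// BackoffConfig is an illustrative subset of kubernetesConfig knobs for
// large clusters. Treat the names as assumptions, not the exact acs-engine
// API model.
type BackoffConfig struct {
	CloudProviderBackoff         bool    `json:"cloudProviderBackoff"`
	CloudProviderBackoffRetries  int     `json:"cloudProviderBackoffRetries"`
	CloudProviderBackoffExponent float64 `json:"cloudProviderBackoffExponent"`
	CloudProviderBackoffDuration int     `json:"cloudProviderBackoffDuration"`
	CloudProviderBackoffJitter   float64 `json:"cloudProviderBackoffJitter"`
	CloudProviderRateLimit       bool    `json:"cloudProviderRatelimit"`
	CloudProviderRateLimitQPS    float64 `json:"cloudProviderRatelimitQPS"`
	CloudProviderRateLimitBucket int     `json:"cloudProviderRatelimitBucket"`
}

func main() {
	// Example values only; tune per environment.
	cfg := BackoffConfig{
		CloudProviderBackoff:         true,
		CloudProviderBackoffRetries:  6,
		CloudProviderBackoffExponent: 1.5,
		CloudProviderBackoffDuration: 5,
		CloudProviderBackoffJitter:   1,
		CloudProviderRateLimit:       true,
		CloudProviderRateLimitQPS:    3,
		CloudProviderRateLimitBucket: 10,
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```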
* add custom data vars to master as well
* updated templates.go
* kube-controller-manager var substitution is unique
* updated generated template
* moving large cluster to examples/largeclusters
* working pool names for large cluster example
* Add multi-GPU support for k8s
* Remove kubernetes gpu example
* Update kubernetes gpu README and templates
* Add Accelerator feature gate only for k8s > 1.6
* Parametrize kubernetes version checking function
* remove --feature-gates flag from kuberneteskubelet.service
* add test for VersionOrdinal
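A minimal sketch of the string-ordinal trick behind VersionOrdinal, plus the kind of version-gated check used for the Accelerators feature gate; the fixed-width padding scheme here is an assumption and may differ from acs-engine's implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// versionOrdinal left-pads each numeric component of a dotted version so
// that plain string comparison matches numeric version order
// ("1.10.0" sorts after "1.6.6").
func versionOrdinal(version string) string {
	const width = 8
	parts := strings.Split(version, ".")
	for i, p := range parts {
		if len(p) < width {
			parts[i] = strings.Repeat("0", width-len(p)) + p
		}
	}
	return strings.Join(parts, ".")
}

func main() {
	// Gate the Accelerators feature on versions newer than 1.6, the kind of
	// check the parametrized version function enables.
	k8sVersion := "1.7.0"
	if versionOrdinal(k8sVersion) >= versionOrdinal("1.6.0") {
		fmt.Println("--feature-gates=Accelerators=true")
	}

	// Test-style sanity check for the ordering property.
	if versionOrdinal("1.10.0") <= versionOrdinal("1.6.6") {
		panic("ordering broken: 1.10.0 must sort after 1.6.6")
	}
	fmt.Println("ok")
}
```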
Enable cluster upgrade from Kubernetes 1.5.3 -> 1.6.2
------------------------------------------------------------
This PR sets up the upgrade workflow that ACS Engine and the service will follow to upgrade Kubernetes clusters created from ACS Engine templates and running in Azure.
This feature is required as a step towards enabling customer-initiated cluster lifecycle management and making ACS RP a managed service.
This PR supports the Kubernetes 1.5.3 -> 1.6.2 upgrade only. Support for more upgrade versions will come later.
Design assumption: the upgrade will work as designed only for clusters where etcd is on a separate data disk.
This PR enables the following upgrade scenarios for Kubernetes 1.5.3 -> 1.6.2 clusters:
1. Multi master upgrade
2. Multi agent upgrade
3. Multi agent pool (Linux & Windows pools) upgrade
4. Idempotent upgrade operation (with one pending change to enable full idempotency), i.e. the operation can be rerun if it fails and the command will pick up where it left off (see the sketch after the note below)
5. The upgrade operation only upgrades nodes in the resource group that belong to this cluster and skips the rest
Note:
This is by no means a complete upgrade implementation; other enhancements will follow later (not all of the following are guaranteed; they are examples): drain and cordon a node before upgrading it, etcd upgrade, rollback & downgrade, post-upgrade validation & health checks, maintaining HA of pools during the upgrade operation, etc.
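As referenced in scenarios 4 and 5 above, a minimal Go sketch of the idempotent, cluster-scoped walk over nodes; the node type and upgradeOne callback are hypothetical stand-ins for the real delete-and-redeploy step:

```go
package main

import "fmt"

// node is a simplified view of a VM in the resource group for this sketch.
type node struct {
	name       string
	inCluster  bool   // belongs to the cluster being upgraded
	k8sVersion string // currently running Kubernetes version
}

// upgradeNodes skips VMs that are not part of this cluster, skips nodes
// already at the target version (so a rerun picks up where a failed run
// left off), and upgrades the rest one at a time.
func upgradeNodes(nodes []node, target string, upgradeOne func(node) error) error {
	for _, n := range nodes {
		if !n.inCluster {
			continue // the resource group may hold unrelated VMs
		}
		if n.k8sVersion == target {
			continue // already upgraded by a previous run
		}
		if err := upgradeOne(n); err != nil {
			return fmt.Errorf("upgrading %s: %v", n.name, err)
		}
	}
	return nil
}

func main() {
	nodes := []node{
		{"k8s-master-0", true, "1.5.3"},
		{"k8s-agent-0", true, "1.6.2"}, // finished by an earlier run
		{"jumpbox", false, ""},
	}
	_ = upgradeNodes(nodes, "1.6.2", func(n node) error {
		fmt.Println("upgrading", n.name)
		return nil
	})
}
```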