terraform-azurerm-aks
Deploys a Kubernetes cluster (AKS) on Azure with monitoring support through Azure Log Analytics
This Terraform module deploys a Kubernetes cluster on Azure using AKS (Azure Kubernetes Service) and adds support for monitoring with Log Analytics.
-> NOTE: If you have not assigned client_id or client_secret, a SystemAssigned identity will be created.
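As a minimal sketch of what that looks like (assumptions: the registry address Azure/aks/azurerm, an existing azurerm_resource_group.example, and resource_group_name being a required input of this module):

```hcl
module "aks" {
  source = "Azure/aks/azurerm" # assumed registry address; use whatever source/version you normally consume

  # Placeholder values for inputs your configuration already sets.
  resource_group_name = azurerm_resource_group.example.name
  prefix              = "demo"

  # With client_id and client_secret left at their defaults (""), the module relies on
  # identity_type, which defaults to "SystemAssigned", so a SystemAssigned identity is created.
  # To bring your own identity instead:
  # identity_type = "UserAssigned"
  # identity_ids  = [azurerm_user_assigned_identity.example.id]
}
```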
Notice on breaking changes
Please be aware that a major version update (e.g., from 6.8.0 to 7.0.0) contains breaking changes that may impact your infrastructure. It is crucial to review these changes carefully before proceeding with the upgrade.
In most cases, you will need to adjust your Terraform code to accommodate the changes introduced in the new major version. We strongly recommend reviewing the changelog and migration guide to understand the modifications and ensure a smooth transition.
To help you in this process, we have provided detailed documentation on the breaking changes, new features, and any deprecated functionalities. Please take the time to read through these resources to avoid any potential issues or disruptions to your infrastructure.
- Notice on Upgrade to v9.x
- Notice on Upgrade to v8.x
- Notice on Upgrade to v7.x
- Notice on Upgrade to v6.x
- Notice on Upgrade to v5.x
Remember, upgrading to a major version with breaking changes should be done carefully and thoroughly tested in your environment. If you have any questions or concerns, please don't hesitate to reach out to our support team for assistance.
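One practical safeguard is to pin the module to a single major version range so that a new major release is only adopted deliberately, after reading the corresponding upgrade notice. A sketch, assuming you consume the module from the Terraform Registry as Azure/aks/azurerm:

```hcl
module "aks" {
  source  = "Azure/aks/azurerm"
  version = "~> 8.0" # stays within 8.x; widen only after reviewing NoticeOnUpgradeTov9.0.md

  # ... your existing module inputs ...
}
```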
Usage in Terraform 1.2.0
Please view the folders in examples.
The module provides outputs that can be used to configure the kubernetes provider after deploying an AKS cluster.
provider "kubernetes" {
host = module.aks.host
client_certificate = base64decode(module.aks.client_certificate)
client_key = base64decode(module.aks.client_key)
cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
}
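With the provider wired up from the module outputs like this, in-cluster objects can be managed from the same configuration; for example (a minimal sketch using the hashicorp/kubernetes provider, with an illustrative namespace name):

```hcl
resource "kubernetes_namespace" "example" {
  metadata {
    name = "aks-module-demo"
  }
}
```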
There are some examples in the examples folder. You can execute the terraform apply command in an example's subfolder to try the module. These examples are tested against every PR with the E2E Test.
Enable or disable tracing tags
We're using BridgeCrew Yor and yorbox to help manage tags consistently across infrastructure as code (IaC) frameworks. In this module you might see tags like:
resource "azurerm_resource_group" "rg" {
location = "eastus"
name = random_pet.name
tags = merge(var.tags, (/*<box>*/ (var.tracing_tags_enabled ? { for k, v in /*</box>*/ {
avm_git_commit = "3077cc6d0b70e29b6e106b3ab98cee6740c916f6"
avm_git_file = "main.tf"
avm_git_last_modified_at = "2023-05-05 08:57:54"
avm_git_org = "lonegunmanb"
avm_git_repo = "terraform-yor-tag-test-module"
avm_yor_trace = "a0425718-c57d-401c-a7d5-f3d88b2551a4"
} /*<box>*/ : replace(k, "avm_", var.tracing_tags_prefix) => v } : {}) /*</box>*/))
}
To enable tracing tags, set the variable to true:
module "example" {
source = "{module_source}"
...
tracing_tags_enabled = true
}
The tracing_tags_enabled variable defaults to false.
To customize the prefix for your tracing tags, set the tracing_tags_prefix variable in your Terraform configuration:
module "example" {
source = "{module_source}"
...
tracing_tags_prefix = "custom_prefix_"
}
The actual applied tags would be:
{
custom_prefix_git_commit = "3077cc6d0b70e29b6e106b3ab98cee6740c916f6"
custom_prefix_git_file = "main.tf"
custom_prefix_git_last_modified_at = "2023-05-05 08:57:54"
custom_prefix_git_org = "lonegunmanb"
custom_prefix_git_repo = "terraform-yor-tag-test-module"
custom_prefix_yor_trace = "a0425718-c57d-401c-a7d5-f3d88b2551a4"
}
Pre-Commit & PR-Check & Test
Configurations
We assume that you have set up a service principal's credentials in your environment variables as below:
export ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
export ARM_TENANT_ID="<azure_subscription_tenant_id>"
export ARM_CLIENT_ID="<service_principal_appid>"
export ARM_CLIENT_SECRET="<service_principal_password>"
On Windows PowerShell:
$env:ARM_SUBSCRIPTION_ID="<azure_subscription_id>"
$env:ARM_TENANT_ID="<azure_subscription_tenant_id>"
$env:ARM_CLIENT_ID="<service_principal_appid>"
$env:ARM_CLIENT_SECRET="<service_principal_password>"
We provide a Docker image to run the pre-commit checks and tests for you: mcr.microsoft.com/azterraform:latest
To run the pre-commit task, we can run the following command:
$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
On Windows PowerShell:
$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pre-commit
In the pre-commit task, we will:

- Run the terraform fmt -recursive command for your Terraform code.
- Run the terrafmt fmt -f command for markdown files and Go code files to ensure that the Terraform code embedded in these files is well formatted.
- Run go mod tidy and go mod vendor in the test folder to ensure that all the dependencies have been synced.
- Run gofmt for all Go code files.
- Run gofumpt for all Go code files.
- Run terraform-docs on the README.md file, then run markdown-table-formatter to format the markdown tables in README.md.
Then we can run the pr-check task to check whether our code meets our pipeline's requirements (we strongly recommend running the following command before you commit):
$ docker run --rm -v $(pwd):/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
On Windows PowerShell:
$ docker run --rm -v ${pwd}:/src -w /src mcr.microsoft.com/azterraform:latest make pr-check
To run the e2e-test, we can run the following command:
docker run --rm -v $(pwd):/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
On Windows PowerShell:
docker run --rm -v ${pwd}:/src -w /src -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
To follow the "Ensure AKS uses disk encryption set" policy, we've used azurerm_key_vault in the example code, and to follow the "Key vault does not allow firewall rules settings" policy, we've limited the IP CIDR on its network_acls. By default we use the IP returned by the https://api.ipify.org?format=json API as your public IP, but if you need to use another CIDR, you can set an environment variable as below:
docker run --rm -v $(pwd):/src -w /src -e TF_VAR_key_vault_firewall_bypass_ip_cidr="<your_cidr>" -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
On Windows PowerShell:
docker run --rm -v ${pwd}:/src -w /src -e TF_VAR_key_vault_firewall_bypass_ip_cidr="<your_cidr>" -e ARM_SUBSCRIPTION_ID -e ARM_TENANT_ID -e ARM_CLIENT_ID -e ARM_CLIENT_SECRET mcr.microsoft.com/azterraform:latest make e2e-test
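For reference, this variable ends up in the example key vault's network_acls roughly as follows. This is an illustrative sketch, not the exact example code: names are hypothetical, and it assumes the hashicorp/http provider and an existing azurerm_resource_group.example.

```hcl
variable "key_vault_firewall_bypass_ip_cidr" {
  type    = string
  default = null
}

data "azurerm_client_config" "current" {}

# Public IP lookup used as the default when no CIDR is supplied.
data "http" "public_ip" {
  url = "https://api.ipify.org?format=json"
}

resource "azurerm_key_vault" "example" {
  name                = "example-aks-kv"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"

  network_acls {
    bypass         = "AzureServices"
    default_action = "Deny"
    ip_rules = [
      coalesce(
        var.key_vault_firewall_bypass_ip_cidr,
        "${jsondecode(data.http.public_ip.response_body).ip}/32"
      )
    ]
  }
}
```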
Prerequisites
Authors
Originally created by Damien Caro and Malte Lantin
License
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Module Spec
The following sections are generated by terraform-docs and markdown-table-formatter; please DO NOT MODIFY THEM MANUALLY!
Requirements
Name | Version |
---|---|
terraform | >= 1.3 |
azapi | >= 1.4.0, < 2.0 |
azurerm | >= 3.106.1, < 4.0 |
null | >= 3.0 |
tls | >= 3.1 |
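For reference, a root module that satisfies these constraints might declare provider requirements like the following. This is an illustrative sketch mirroring the table above, not generated content; keep in mind this section is regenerated by terraform-docs.

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    azapi = {
      source  = "Azure/azapi"
      version = ">= 1.4.0, < 2.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.106.1, < 4.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = ">= 3.1"
    }
  }
}
```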
Providers
Name | Version |
---|---|
azapi | >= 1.4.0, < 2.0 |
azurerm | >= 3.106.1, < 4.0 |
null | >= 3.0 |
tls | >= 3.1 |
Modules
No modules.
Resources
Inputs
Name | Description | Type | Default | Required |
---|---|---|---|---|
aci_connector_linux_enabled | Enable Virtual Node pool | bool |
false |
no |
aci_connector_linux_subnet_name | (Optional) aci_connector_linux subnet name | string |
null |
no |
admin_username | The username of the local administrator to be created on the Kubernetes cluster. Set this variable to null to turn off the cluster's linux_profile . Changing this forces a new resource to be created. |
string |
null |
no |
agents_availability_zones | (Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created. | list(string) |
null |
no |
agents_count | The number of Agents that should exist in the Agent Pool. Please set agents_count null while enable_auto_scaling is true to avoid possible agents_count changes. |
number |
2 |
no |
agents_labels | (Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created. | map(string) |
{} |
no |
agents_max_count | Maximum number of nodes in a pool | number |
null |
no |
agents_max_pods | (Optional) The maximum number of pods that can run on each agent. Changing this forces a new resource to be created. | number |
null |
no |
agents_min_count | Minimum number of nodes in a pool | number |
null |
no |
agents_pool_drain_timeout_in_minutes | (Optional) The amount of time in minutes to wait on eviction of pods and graceful termination per node. This eviction wait time honors waiting on pod disruption budgets. If this time is exceeded, the upgrade fails. Unsetting this after configuring it will force a new resource to be created. | number |
null |
no |
agents_pool_kubelet_configs | list(object({ cpu_manager_policy = (Optional) Specifies the CPU Manager policy to use. Possible values are none and static , Changing this forces a new resource to be created.cpu_cfs_quota_enabled = (Optional) Is CPU CFS quota enforcement for containers enabled? Changing this forces a new resource to be created. cpu_cfs_quota_period = (Optional) Specifies the CPU CFS quota period value. Changing this forces a new resource to be created. image_gc_high_threshold = (Optional) Specifies the percent of disk usage above which image garbage collection is always run. Must be between 0 and 100 . Changing this forces a new resource to be created.image_gc_low_threshold = (Optional) Specifies the percent of disk usage lower than which image garbage collection is never run. Must be between 0 and 100 . Changing this forces a new resource to be created.topology_manager_policy = (Optional) Specifies the Topology Manager policy to use. Possible values are none , best-effort , restricted or single-numa-node . Changing this forces a new resource to be created.allowed_unsafe_sysctls = (Optional) Specifies the allow list of unsafe sysctls command or patterns (ending in * ). Changing this forces a new resource to be created.container_log_max_size_mb = (Optional) Specifies the maximum size (e.g. 10MB) of container log file before it is rotated. Changing this forces a new resource to be created. container_log_max_line = (Optional) Specifies the maximum number of container log files that can be present for a container. must be at least 2. Changing this forces a new resource to be created. pod_max_pid = (Optional) Specifies the maximum number of processes per pod. Changing this forces a new resource to be created. })) |
list(object({ |
[] |
no |
agents_pool_linux_os_configs | list(object({ sysctl_configs = optional(list(object({ fs_aio_max_nr = (Optional) The sysctl setting fs.aio-max-nr. Must be between 65536 and 6553500 . Changing this forces a new resource to be created.fs_file_max = (Optional) The sysctl setting fs.file-max. Must be between 8192 and 12000500 . Changing this forces a new resource to be created.fs_inotify_max_user_watches = (Optional) The sysctl setting fs.inotify.max_user_watches. Must be between 781250 and 2097152 . Changing this forces a new resource to be created.fs_nr_open = (Optional) The sysctl setting fs.nr_open. Must be between 8192 and 20000500 . Changing this forces a new resource to be created.kernel_threads_max = (Optional) The sysctl setting kernel.threads-max. Must be between 20 and 513785 . Changing this forces a new resource to be created.net_core_netdev_max_backlog = (Optional) The sysctl setting net.core.netdev_max_backlog. Must be between 1000 and 3240000 . Changing this forces a new resource to be created.net_core_optmem_max = (Optional) The sysctl setting net.core.optmem_max. Must be between 20480 and 4194304 . Changing this forces a new resource to be created.net_core_rmem_default = (Optional) The sysctl setting net.core.rmem_default. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_core_rmem_max = (Optional) The sysctl setting net.core.rmem_max. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_core_somaxconn = (Optional) The sysctl setting net.core.somaxconn. Must be between 4096 and 3240000 . Changing this forces a new resource to be created.net_core_wmem_default = (Optional) The sysctl setting net.core.wmem_default. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_core_wmem_max = (Optional) The sysctl setting net.core.wmem_max. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_ipv4_ip_local_port_range_min = (Optional) The sysctl setting net.ipv4.ip_local_port_range max value. Must be between 1024 and 60999 . Changing this forces a new resource to be created.net_ipv4_ip_local_port_range_max = (Optional) The sysctl setting net.ipv4.ip_local_port_range min value. Must be between 1024 and 60999 . Changing this forces a new resource to be created.net_ipv4_neigh_default_gc_thresh1 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh1. Must be between 128 and 80000 . Changing this forces a new resource to be created.net_ipv4_neigh_default_gc_thresh2 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh2. Must be between 512 and 90000 . Changing this forces a new resource to be created.net_ipv4_neigh_default_gc_thresh3 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh3. Must be between 1024 and 100000 . Changing this forces a new resource to be created.net_ipv4_tcp_fin_timeout = (Optional) The sysctl setting net.ipv4.tcp_fin_timeout. Must be between 5 and 120 . Changing this forces a new resource to be created.net_ipv4_tcp_keepalive_intvl = (Optional) The sysctl setting net.ipv4.tcp_keepalive_intvl. Must be between 10 and 75 . Changing this forces a new resource to be created.net_ipv4_tcp_keepalive_probes = (Optional) The sysctl setting net.ipv4.tcp_keepalive_probes. Must be between 1 and 15 . Changing this forces a new resource to be created.net_ipv4_tcp_keepalive_time = (Optional) The sysctl setting net.ipv4.tcp_keepalive_time. Must be between 30 and 432000 . 
Changing this forces a new resource to be created.net_ipv4_tcp_max_syn_backlog = (Optional) The sysctl setting net.ipv4.tcp_max_syn_backlog. Must be between 128 and 3240000 . Changing this forces a new resource to be created.net_ipv4_tcp_max_tw_buckets = (Optional) The sysctl setting net.ipv4.tcp_max_tw_buckets. Must be between 8000 and 1440000 . Changing this forces a new resource to be created.net_ipv4_tcp_tw_reuse = (Optional) The sysctl setting net.ipv4.tcp_tw_reuse. Changing this forces a new resource to be created. net_netfilter_nf_conntrack_buckets = (Optional) The sysctl setting net.netfilter.nf_conntrack_buckets. Must be between 65536 and 147456 . Changing this forces a new resource to be created.net_netfilter_nf_conntrack_max = (Optional) The sysctl setting net.netfilter.nf_conntrack_max. Must be between 131072 and 1048576 . Changing this forces a new resource to be created.vm_max_map_count = (Optional) The sysctl setting vm.max_map_count. Must be between 65530 and 262144 . Changing this forces a new resource to be created.vm_swappiness = (Optional) The sysctl setting vm.swappiness. Must be between 0 and 100 . Changing this forces a new resource to be created.vm_vfs_cache_pressure = (Optional) The sysctl setting vm.vfs_cache_pressure. Must be between 0 and 100 . Changing this forces a new resource to be created.})), []) transparent_huge_page_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are always , madvise and never . Changing this forces a new resource to be created.transparent_huge_page_defrag = (Optional) specifies the defrag configuration for Transparent Huge Page. Possible values are always , defer , defer+madvise , madvise and never . Changing this forces a new resource to be created.swap_file_size_mb = (Optional) Specifies the size of the swap file on each node in MB. Changing this forces a new resource to be created. })) |
list(object({ |
[] |
no |
agents_pool_max_surge | The maximum number or percentage of nodes which will be added to the Default Node Pool size during an upgrade. | string |
"10%" |
no |
agents_pool_name | The default Azure AKS agentpool (nodepool) name. | string |
"nodepool" |
no |
agents_pool_node_soak_duration_in_minutes | (Optional) The amount of time in minutes to wait after draining a node and before reimaging and moving on to next node. Defaults to 0. | number |
0 |
no |
agents_proximity_placement_group_id | (Optional) The ID of the Proximity Placement Group of the default Azure AKS agentpool (nodepool). Changing this forces a new resource to be created. | string |
null |
no |
agents_size | The default virtual machine size for the Kubernetes agents. Changing this without specifying var.temporary_name_for_rotation forces a new resource to be created. |
string |
"Standard_D2s_v3" |
no |
agents_tags | (Optional) A mapping of tags to assign to the Node Pool. | map(string) |
{} |
no |
agents_taints | (Optional) A list of the taints added to new nodes during node pool create and scale. Changing this forces a new resource to be created. | list(string) |
null |
no |
agents_type | (Optional) The type of Node Pool which should be created. Possible values are AvailabilitySet and VirtualMachineScaleSets. Defaults to VirtualMachineScaleSets. | string |
"VirtualMachineScaleSets" |
no |
api_server_authorized_ip_ranges | (Optional) The IP ranges to allow for incoming traffic to the server nodes. | set(string) |
null |
no |
api_server_subnet_id | (Optional) The ID of the Subnet where the API server endpoint is delegated to. | string |
null |
no |
attached_acr_id_map | Azure Container Registry ids that need an authentication mechanism with Azure Kubernetes Service (AKS). Map key must be static string as acr's name, the value is acr's resource id. Changing this forces some new resources to be created. | map(string) |
{} |
no |
auto_scaler_profile_balance_similar_node_groups | Detect similar node groups and balance the number of nodes between them. Defaults to false . |
bool |
false |
no |
auto_scaler_profile_empty_bulk_delete_max | Maximum number of empty nodes that can be deleted at the same time. Defaults to 10 . |
number |
10 |
no |
auto_scaler_profile_enabled | Enable configuring the auto scaler profile | bool |
false |
no |
auto_scaler_profile_expander | Expander to use. Possible values are least-waste , priority , most-pods and random . Defaults to random . |
string |
"random" |
no |
auto_scaler_profile_max_graceful_termination_sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node. Defaults to 600 . |
string |
"600" |
no |
auto_scaler_profile_max_node_provisioning_time | Maximum time the autoscaler waits for a node to be provisioned. Defaults to 15m . |
string |
"15m" |
no |
auto_scaler_profile_max_unready_nodes | Maximum Number of allowed unready nodes. Defaults to 3 . |
number |
3 |
no |
auto_scaler_profile_max_unready_percentage | Maximum percentage of unready nodes the cluster autoscaler will stop if the percentage is exceeded. Defaults to 45 . |
number |
45 |
no |
auto_scaler_profile_new_pod_scale_up_delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. Defaults to 10s . |
string |
"10s" |
no |
auto_scaler_profile_scale_down_delay_after_add | How long after the scale up of AKS nodes the scale down evaluation resumes. Defaults to 10m . |
string |
"10m" |
no |
auto_scaler_profile_scale_down_delay_after_delete | How long after node deletion that scale down evaluation resumes. Defaults to the value used for scan_interval . |
string |
null |
no |
auto_scaler_profile_scale_down_delay_after_failure | How long after scale down failure that scale down evaluation resumes. Defaults to 3m . |
string |
"3m" |
no |
auto_scaler_profile_scale_down_unneeded | How long a node should be unneeded before it is eligible for scale down. Defaults to 10m . |
string |
"10m" |
no |
auto_scaler_profile_scale_down_unready | How long an unready node should be unneeded before it is eligible for scale down. Defaults to 20m . |
string |
"20m" |
no |
auto_scaler_profile_scale_down_utilization_threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down. Defaults to 0.5 . |
string |
"0.5" |
no |
auto_scaler_profile_scan_interval | How often the AKS Cluster should be re-evaluated for scale up/down. Defaults to 10s . |
string |
"10s" |
no |
auto_scaler_profile_skip_nodes_with_local_storage | If true cluster autoscaler will never delete nodes with pods with local storage, for example, EmptyDir or HostPath. Defaults to true . |
bool |
true |
no |
auto_scaler_profile_skip_nodes_with_system_pods | If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods). Defaults to true . |
bool |
true |
no |
automatic_channel_upgrade | (Optional) The upgrade channel for this Kubernetes Cluster. Possible values are patch , rapid , node-image and stable . By default automatic-upgrades are turned off. Note that you cannot specify the patch version using kubernetes_version or orchestrator_version when using the patch upgrade channel. See the documentation for more information |
string |
null |
no |
azure_policy_enabled | Enable Azure Policy Addon. | bool |
false |
no |
brown_field_application_gateway_for_ingress | Definition of brown_field * id - (Required) The ID of the Application Gateway that be used as cluster ingress.* subnet_id - (Required) The ID of the Subnet which the Application Gateway is connected to. Must be set when create_role_assignments is true . |
object({ |
null |
no |
client_id | (Optional) The Client ID (appId) for the Service Principal used for the AKS deployment | string |
"" |
no |
client_secret | (Optional) The Client Secret (password) for the Service Principal used for the AKS deployment | string |
"" |
no |
cluster_log_analytics_workspace_name | (Optional) The name of the Analytics workspace | string |
null |
no |
cluster_name | (Optional) The name for the AKS resources created in the specified Azure Resource Group. This variable overwrites the 'prefix' var (The 'prefix' var will still be applied to the dns_prefix if it is set) | string |
null |
no |
cluster_name_random_suffix | Whether to add a random suffix on Aks cluster's name or not. azurerm_kubernetes_cluster resource defined in this module is create_before_destroy = true implicity now(described here), without this random suffix we'll not be able to recreate this cluster directly due to the naming conflict. |
bool |
false |
no |
confidential_computing | (Optional) Enable Confidential Computing. | object({ |
null |
no |
cost_analysis_enabled | (Optional) Enable Cost Analysis. | bool |
false |
no |
create_role_assignment_network_contributor | (Deprecated) Create a role assignment for the AKS Service Principal to be a Network Contributor on the subnets used for the AKS Cluster | bool |
false |
no |
create_role_assignments_for_application_gateway | (Optional) Whether to create the corresponding role assignments for application gateway or not. Defaults to true . |
bool |
true |
no |
default_node_pool_fips_enabled | (Optional) Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created. | bool |
null |
no |
disk_encryption_set_id | (Optional) The ID of the Disk Encryption Set which should be used for the Nodes and Volumes. More information can be found in the documentation. Changing this forces a new resource to be created. | string |
null |
no |
ebpf_data_plane | (Optional) Specifies the eBPF data plane used for building the Kubernetes network. Possible value is cilium . Changing this forces a new resource to be created. |
string |
null |
no |
enable_auto_scaling | Enable node pool autoscaling | bool |
false |
no |
enable_host_encryption | Enable Host Encryption for default node pool. Encryption at host feature must be enabled on the subscription: https://docs.microsoft.com/azure/virtual-machines/linux/disks-enable-host-based-encryption-cli | bool |
false |
no |
enable_node_public_ip | (Optional) Should nodes in this Node Pool have a Public IP Address? Defaults to false. | bool |
false |
no |
green_field_application_gateway_for_ingress | Definition of green_field * name - (Optional) The name of the Application Gateway to be used or created in the Nodepool Resource Group, which in turn will be integrated with the ingress controller of this Kubernetes Cluster.* subnet_cidr - (Optional) The subnet CIDR to be used to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster.* subnet_id - (Optional) The ID of the subnet on which to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. |
object({ |
null |
no |
http_proxy_config | optional(object({ http_proxy = (Optional) The proxy address to be used when communicating over HTTP. https_proxy = (Optional) The proxy address to be used when communicating over HTTPS. no_proxy = (Optional) The list of domains that will not use the proxy for communication. Note: If you specify the default_node_pool.0.vnet_subnet_id , be sure to include the Subnet CIDR in the no_proxy list. Note: You may wish to use Terraform's ignore_changes functionality to ignore the changes to this field.trusted_ca = (Optional) The base64 encoded alternative CA certificate content in PEM format. })) Once you have set only one of http_proxy and https_proxy , this config would be used for both http_proxy and https_proxy to avoid a configuration drift. |
object({ |
null |
no |
identity_ids | (Optional) Specifies a list of User Assigned Managed Identity IDs to be assigned to this Kubernetes Cluster. | list(string) |
null |
no |
identity_type | (Optional) The type of identity used for the managed cluster. Conflicts with client_id and client_secret . Possible values are SystemAssigned and UserAssigned . If UserAssigned is set, an identity_ids must be set as well. |
string |
"SystemAssigned" |
no |
image_cleaner_enabled | (Optional) Specifies whether Image Cleaner is enabled. | bool |
false |
no |
image_cleaner_interval_hours | (Optional) Specifies the interval in hours when images should be cleaned up. Defaults to 48 . |
number |
48 |
no |
key_vault_secrets_provider_enabled | (Optional) Whether to use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster. For more details: https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver | bool |
false |
no |
kms_enabled | (Optional) Enable Azure KeyVault Key Management Service. | bool |
false |
no |
kms_key_vault_key_id | (Optional) Identifier of Azure Key Vault key. When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. | string |
null |
no |
kms_key_vault_network_access | (Optional) Network Access of Azure Key Vault. Possible values are: Private and Public . |
string |
"Public" |
no |
kubelet_identity | - client_id - (Optional) The Client ID of the user-defined Managed Identity to be assigned to the Kubelets. If not specified a Managed Identity is created automatically. Changing this forces a new resource to be created.- object_id - (Optional) The Object ID of the user-defined Managed Identity assigned to the Kubelets.If not specified a Managed Identity is created automatically. Changing this forces a new resource to be created.- user_assigned_identity_id - (Optional) The ID of the User Assigned Identity assigned to the Kubelets. If not specified a Managed Identity is created automatically. Changing this forces a new resource to be created. |
object({ |
null |
no |
kubernetes_version | Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region | string |
null |
no |
load_balancer_profile_enabled | (Optional) Enable a load_balancer_profile block. This can only be used when load_balancer_sku is set to standard . |
bool |
false |
no |
load_balancer_profile_idle_timeout_in_minutes | (Optional) Desired outbound flow idle timeout in minutes for the cluster load balancer. Must be between 4 and 120 inclusive. |
number |
30 |
no |
load_balancer_profile_managed_outbound_ip_count | (Optional) Count of desired managed outbound IPs for the cluster load balancer. Must be between 1 and 100 inclusive |
number |
null |
no |
load_balancer_profile_managed_outbound_ipv6_count | (Optional) The desired number of IPv6 outbound IPs created and managed by Azure for the cluster load balancer. Must be in the range of 1 to 100 (inclusive). The default value is 0 for single-stack and 1 for dual-stack. Note: managed_outbound_ipv6_count requires dual-stack networking. To enable dual-stack networking the Preview Feature Microsoft.ContainerService/AKS-EnableDualStack needs to be enabled and the Resource Provider re-registered, see the documentation for more information. https://learn.microsoft.com/en-us/azure/aks/configure-kubenet-dual-stack?tabs=azure-cli%2Ckubectl#register-the-aks-enabledualstack-preview-feature |
number |
null |
no |
load_balancer_profile_outbound_ip_address_ids | (Optional) The ID of the Public IP Addresses which should be used for outbound communication for the cluster load balancer. | set(string) |
null |
no |
load_balancer_profile_outbound_ip_prefix_ids | (Optional) The ID of the outbound Public IP Address Prefixes which should be used for the cluster load balancer. | set(string) |
null |
no |
load_balancer_profile_outbound_ports_allocated | (Optional) Number of desired SNAT port for each VM in the clusters load balancer. Must be between 0 and 64000 inclusive. Defaults to 0 |
number |
0 |
no |
load_balancer_sku | (Optional) Specifies the SKU of the Load Balancer used for this Kubernetes Cluster. Possible values are basic and standard . Defaults to standard . Changing this forces a new kubernetes cluster to be created. |
string |
"standard" |
no |
local_account_disabled | (Optional) - If true local accounts will be disabled. Defaults to false . See the documentation for more information. |
bool |
null |
no |
location | Location of cluster, if not defined it will be read from the resource-group | string |
null |
no |
log_analytics_solution | (Optional) Object which contains existing azurerm_log_analytics_solution ID. Providing ID disables creation of azurerm_log_analytics_solution. | object({ |
null |
no |
log_analytics_workspace | (Optional) Existing azurerm_log_analytics_workspace to attach azurerm_log_analytics_solution. Providing the config disables creation of azurerm_log_analytics_workspace. | object({ |
null |
no |
log_analytics_workspace_allow_resource_only_permissions | (Optional) Specifies if the log Analytics Workspace allow users accessing to data associated with resources they have permission to view, without permission to workspace. Defaults to true . |
bool |
null |
no |
log_analytics_workspace_cmk_for_query_forced | (Optional) Is Customer Managed Storage mandatory for query management? | bool |
null |
no |
log_analytics_workspace_daily_quota_gb | (Optional) The workspace daily quota for ingestion in GB. Defaults to -1 (unlimited) if omitted. | number |
null |
no |
log_analytics_workspace_data_collection_rule_id | (Optional) The ID of the Data Collection Rule to use for this workspace. | string |
null |
no |
log_analytics_workspace_enabled | Enable the integration of azurerm_log_analytics_workspace and azurerm_log_analytics_solution: https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-onboard | bool |
true |
no |
log_analytics_workspace_identity | - identity_ids - (Optional) Specifies a list of user managed identity ids to be assigned. Required if type is UserAssigned .- type - (Required) Specifies the identity type of the Log Analytics Workspace. Possible values are SystemAssigned (where Azure will generate a Service Principal for you) and UserAssigned where you can specify the Service Principal IDs in the identity_ids field. |
object({ |
null |
no |
log_analytics_workspace_immediate_data_purge_on_30_days_enabled | (Optional) Whether to remove the data in the Log Analytics Workspace immediately after 30 days. | bool |
null |
no |
log_analytics_workspace_internet_ingestion_enabled | (Optional) Should the Log Analytics Workspace support ingestion over the Public Internet? Defaults to true . |
bool |
null |
no |
log_analytics_workspace_internet_query_enabled | (Optional) Should the Log Analytics Workspace support querying over the Public Internet? Defaults to true . |
bool |
null |
no |
log_analytics_workspace_local_authentication_disabled | (Optional) Specifies if the log Analytics workspace should enforce authentication using Azure AD. Defaults to false . |
bool |
null |
no |
log_analytics_workspace_reservation_capacity_in_gb_per_day | (Optional) The capacity reservation level in GB for this workspace. Possible values are 100 , 200 , 300 , 400 , 500 , 1000 , 2000 and 5000 . |
number |
null |
no |
log_analytics_workspace_resource_group_name | (Optional) Resource group name to create azurerm_log_analytics_solution. | string |
null |
no |
log_analytics_workspace_sku | The SKU (pricing level) of the Log Analytics workspace. For new subscriptions the SKU should be set to PerGB2018 | string |
"PerGB2018" |
no |
log_retention_in_days | The retention period for the logs in days | number |
30 |
no |
maintenance_window | (Optional) Maintenance configuration of the managed cluster. | object({ |
null |
no |
maintenance_window_auto_upgrade | - day_of_month - (Optional) The day of the month for the maintenance run. Required in combination with RelativeMonthly frequency. Value between 0 and 31 (inclusive).- day_of_week - (Optional) The day of the week for the maintenance run. Options are Monday , Tuesday , Wednesday , Thurday , Friday , Saturday and Sunday . Required in combination with weekly frequency.- duration - (Required) The duration of the window for maintenance to run in hours.- frequency - (Required) Frequency of maintenance. Possible options are Weekly , AbsoluteMonthly and RelativeMonthly .- interval - (Required) The interval for maintenance runs. Depending on the frequency this interval is week or month based.- start_date - (Optional) The date on which the maintenance window begins to take effect.- start_time - (Optional) The time for maintenance to begin, based on the timezone determined by utc_offset . Format is HH:mm .- utc_offset - (Optional) Used to determine the timezone for cluster maintenance.- week_index - (Optional) The week in the month used for the maintenance run. Options are First , Second , Third , Fourth , and Last .--- not_allowed block supports the following:- end - (Required) The end of a time span, formatted as an RFC3339 string.- start - (Required) The start of a time span, formatted as an RFC3339 string. |
object({ |
null |
no |
maintenance_window_node_os | - day_of_month -- day_of_week - (Optional) The day of the week for the maintenance run. Options are Monday , Tuesday , Wednesday , Thurday , Friday , Saturday and Sunday . Required in combination with weekly frequency.- duration - (Required) The duration of the window for maintenance to run in hours.- frequency - (Required) Frequency of maintenance. Possible options are Daily , Weekly , AbsoluteMonthly and RelativeMonthly .- interval - (Required) The interval for maintenance runs. Depending on the frequency this interval is week or month based.- start_date - (Optional) The date on which the maintenance window begins to take effect.- start_time - (Optional) The time for maintenance to begin, based on the timezone determined by utc_offset . Format is HH:mm .- utc_offset - (Optional) Used to determine the timezone for cluster maintenance.- week_index - (Optional) The week in the month used for the maintenance run. Options are First , Second , Third , Fourth , and Last .--- not_allowed block supports the following:- end - (Required) The end of a time span, formatted as an RFC3339 string.- start - (Required) The start of a time span, formatted as an RFC3339 string. |
object({ |
null |
no |
microsoft_defender_enabled | (Optional) Is Microsoft Defender on the cluster enabled? Requires var.log_analytics_workspace_enabled to be true to set this variable to true . |
bool |
false |
no |
monitor_metrics | (Optional) Specifies a Prometheus add-on profile for the Kubernetes Cluster object({ annotations_allowed = "(Optional) Specifies a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric." labels_allowed = "(Optional) Specifies a Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric." }) |
object({ |
null |
no |
msi_auth_for_monitoring_enabled | (Optional) Is managed identity authentication for monitoring enabled? | bool |
null |
no |
net_profile_dns_service_ip | (Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created. | string |
null |
no |
net_profile_outbound_type | (Optional) The outbound (egress) routing method which should be used for this Kubernetes Cluster. Possible values are loadBalancer and userDefinedRouting. Defaults to loadBalancer. | string |
"loadBalancer" |
no |
net_profile_pod_cidr | (Optional) The CIDR to use for pod IP addresses. This field can only be set when network_plugin is set to kubenet or network_plugin is set to azure and network_plugin_mode is set to overlay. Changing this forces a new resource to be created. | string |
null |
no |
net_profile_service_cidr | (Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created. | string |
null |
no |
network_contributor_role_assigned_subnet_ids | Create role assignments for the AKS Service Principal to be a Network Contributor on the subnets used for the AKS Cluster, key should be static string, value should be subnet's id | map(string) |
{} |
no |
network_plugin | Network plugin to use for networking. | string |
"kubenet" |
no |
network_plugin_mode | (Optional) Specifies the network plugin mode used for building the Kubernetes network. Possible value is overlay . Changing this forces a new resource to be created. |
string |
null |
no |
network_policy | (Optional) Sets up network policy to be used with Azure CNI. Network policy allows us to control the traffic flow between pods. Currently supported values are calico and azure. Changing this forces a new resource to be created. | string |
null |
no |
node_network_profile | (Optional) Specifies a mapping of tags to the instance-level public IPs. Changing this forces a new resource to be created. | object({ |
null |
no |
node_os_channel_upgrade | (Optional) The upgrade channel for this Kubernetes Cluster Nodes' OS Image. Possible values are Unmanaged , SecurityPatch , NodeImage and None . |
string |
null |
no |
node_pools | A map of node pools that need to be created and attached on the Kubernetes cluster. The key of the map can be the name of the node pool, and the key must be static string. The value of the map is a node_pool block as defined below:map(object({ name = (Required) The name of the Node Pool which should be created within the Kubernetes Cluster. Changing this forces a new resource to be created. A Windows Node Pool cannot have a name longer than 6 characters. A random suffix of 4 characters is always added to the name to avoid clashes during recreates.node_count = (Optional) The initial number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 (inclusive) for user pools and between 1 and 1000 (inclusive) for system pools and must be a value in the range min_count - max_count .tags = (Optional) A mapping of tags to assign to the resource. At this time there's a bug in the AKS API where Tags for a Node Pool are not stored in the correct case - you may wish to use Terraform's ignore_changes functionality to ignore changes to the casing until this is fixed in the AKS API.vm_size = (Required) The SKU which should be used for the Virtual Machines used in this Node Pool. Changing this forces a new resource to be created. host_group_id = (Optional) The fully qualified resource ID of the Dedicated Host Group to provision virtual machines from. Changing this forces a new resource to be created. capacity_reservation_group_id = (Optional) Specifies the ID of the Capacity Reservation Group where this Node Pool should exist. Changing this forces a new resource to be created. custom_ca_trust_enabled = (Optional) Specifies whether to trust a Custom CA. This requires that the Preview Feature Microsoft.ContainerService/CustomCATrustPreview is enabled and the Resource Provider is re-registered, see the documentation for more information.enable_auto_scaling = (Optional) Whether to enable auto-scaler. enable_host_encryption = (Optional) Should the nodes in this Node Pool have host encryption enabled? Changing this forces a new resource to be created. enable_node_public_ip = (Optional) Should each node have a Public IP Address? Changing this forces a new resource to be created. eviction_policy = (Optional) The Eviction Policy which should be used for Virtual Machines within the Virtual Machine Scale Set powering this Node Pool. Possible values are Deallocate and Delete . Changing this forces a new resource to be created. An Eviction Policy can only be configured when priority is set to Spot and will default to Delete unless otherwise specified.gpu_instance = (Optional) Specifies the GPU MIG instance profile for supported GPU VM SKU. The allowed values are MIG1g , MIG2g , MIG3g , MIG4g and MIG7g . Changing this forces a new resource to be created.kubelet_config = optional(object({ cpu_manager_policy = (Optional) Specifies the CPU Manager policy to use. Possible values are none and static , Changing this forces a new resource to be created.cpu_cfs_quota_enabled = (Optional) Is CPU CFS quota enforcement for containers enabled? Changing this forces a new resource to be created. cpu_cfs_quota_period = (Optional) Specifies the CPU CFS quota period value. Changing this forces a new resource to be created. image_gc_high_threshold = (Optional) Specifies the percent of disk usage above which image garbage collection is always run. Must be between 0 and 100 . 
Changing this forces a new resource to be created.image_gc_low_threshold = (Optional) Specifies the percent of disk usage lower than which image garbage collection is never run. Must be between 0 and 100 . Changing this forces a new resource to be created.topology_manager_policy = (Optional) Specifies the Topology Manager policy to use. Possible values are none , best-effort , restricted or single-numa-node . Changing this forces a new resource to be created.allowed_unsafe_sysctls = (Optional) Specifies the allow list of unsafe sysctls command or patterns (ending in * ). Changing this forces a new resource to be created.container_log_max_size_mb = (Optional) Specifies the maximum size (e.g. 10MB) of container log file before it is rotated. Changing this forces a new resource to be created. container_log_max_files = (Optional) Specifies the maximum number of container log files that can be present for a container. must be at least 2. Changing this forces a new resource to be created. pod_max_pid = (Optional) Specifies the maximum number of processes per pod. Changing this forces a new resource to be created. })) linux_os_config = optional(object({ sysctl_config = optional(object({ fs_aio_max_nr = (Optional) The sysctl setting fs.aio-max-nr. Must be between 65536 and 6553500 . Changing this forces a new resource to be created.fs_file_max = (Optional) The sysctl setting fs.file-max. Must be between 8192 and 12000500 . Changing this forces a new resource to be created.fs_inotify_max_user_watches = (Optional) The sysctl setting fs.inotify.max_user_watches. Must be between 781250 and 2097152 . Changing this forces a new resource to be created.fs_nr_open = (Optional) The sysctl setting fs.nr_open. Must be between 8192 and 20000500 . Changing this forces a new resource to be created.kernel_threads_max = (Optional) The sysctl setting kernel.threads-max. Must be between 20 and 513785 . Changing this forces a new resource to be created.net_core_netdev_max_backlog = (Optional) The sysctl setting net.core.netdev_max_backlog. Must be between 1000 and 3240000 . Changing this forces a new resource to be created.net_core_optmem_max = (Optional) The sysctl setting net.core.optmem_max. Must be between 20480 and 4194304 . Changing this forces a new resource to be created.net_core_rmem_default = (Optional) The sysctl setting net.core.rmem_default. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_core_rmem_max = (Optional) The sysctl setting net.core.rmem_max. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_core_somaxconn = (Optional) The sysctl setting net.core.somaxconn. Must be between 4096 and 3240000 . Changing this forces a new resource to be created.net_core_wmem_default = (Optional) The sysctl setting net.core.wmem_default. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_core_wmem_max = (Optional) The sysctl setting net.core.wmem_max. Must be between 212992 and 134217728 . Changing this forces a new resource to be created.net_ipv4_ip_local_port_range_min = (Optional) The sysctl setting net.ipv4.ip_local_port_range min value. Must be between 1024 and 60999 . Changing this forces a new resource to be created.net_ipv4_ip_local_port_range_max = (Optional) The sysctl setting net.ipv4.ip_local_port_range max value. Must be between 1024 and 60999 . 
Changing this forces a new resource to be created.net_ipv4_neigh_default_gc_thresh1 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh1. Must be between 128 and 80000 . Changing this forces a new resource to be created.net_ipv4_neigh_default_gc_thresh2 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh2. Must be between 512 and 90000 . Changing this forces a new resource to be created.net_ipv4_neigh_default_gc_thresh3 = (Optional) The sysctl setting net.ipv4.neigh.default.gc_thresh3. Must be between 1024 and 100000 . Changing this forces a new resource to be created.net_ipv4_tcp_fin_timeout = (Optional) The sysctl setting net.ipv4.tcp_fin_timeout. Must be between 5 and 120 . Changing this forces a new resource to be created.net_ipv4_tcp_keepalive_intvl = (Optional) The sysctl setting net.ipv4.tcp_keepalive_intvl. Must be between 10 and 75 . Changing this forces a new resource to be created.net_ipv4_tcp_keepalive_probes = (Optional) The sysctl setting net.ipv4.tcp_keepalive_probes. Must be between 1 and 15 . Changing this forces a new resource to be created.net_ipv4_tcp_keepalive_time = (Optional) The sysctl setting net.ipv4.tcp_keepalive_time. Must be between 30 and 432000 . Changing this forces a new resource to be created.net_ipv4_tcp_max_syn_backlog = (Optional) The sysctl setting net.ipv4.tcp_max_syn_backlog. Must be between 128 and 3240000 . Changing this forces a new resource to be created.net_ipv4_tcp_max_tw_buckets = (Optional) The sysctl setting net.ipv4.tcp_max_tw_buckets. Must be between 8000 and 1440000 . Changing this forces a new resource to be created.net_ipv4_tcp_tw_reuse = (Optional) Is sysctl setting net.ipv4.tcp_tw_reuse enabled? Changing this forces a new resource to be created. net_netfilter_nf_conntrack_buckets = (Optional) The sysctl setting net.netfilter.nf_conntrack_buckets. Must be between 65536 and 147456 . Changing this forces a new resource to be created.net_netfilter_nf_conntrack_max = (Optional) The sysctl setting net.netfilter.nf_conntrack_max. Must be between 131072 and 1048576 . Changing this forces a new resource to be created.vm_max_map_count = (Optional) The sysctl setting vm.max_map_count. Must be between 65530 and 262144 . Changing this forces a new resource to be created.vm_swappiness = (Optional) The sysctl setting vm.swappiness. Must be between 0 and 100 . Changing this forces a new resource to be created.vm_vfs_cache_pressure = (Optional) The sysctl setting vm.vfs_cache_pressure. Must be between 0 and 100 . Changing this forces a new resource to be created.})) transparent_huge_page_enabled = (Optional) Specifies the Transparent Huge Page enabled configuration. Possible values are always , madvise and never . Changing this forces a new resource to be created.transparent_huge_page_defrag = (Optional) specifies the defrag configuration for Transparent Huge Page. Possible values are always , defer , defer+madvise , madvise and never . Changing this forces a new resource to be created.swap_file_size_mb = (Optional) Specifies the size of swap file on each node in MB. Changing this forces a new resource to be created. })) fips_enabled = (Optional) Should the nodes in this Node Pool have Federal Information Processing Standard enabled? Changing this forces a new resource to be created. FIPS support is in Public Preview - more information and details on how to opt into the Preview can be found in this article. kubelet_disk_type = (Optional) The type of disk used by kubelet. 
Possible values are `OS` and `Temporary`.
- `max_count` = (Optional) The maximum number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 and must be greater than or equal to `min_count`.
- `max_pods` = (Optional) The maximum number of pods that can run on each agent node in this Node Pool.
- `message_of_the_day` = (Optional) A base64-encoded string which will be written to /etc/motd after decoding. This allows customization of the message of the day for Linux nodes. It cannot be specified for Windows nodes and must be a static string (i.e. it will be printed raw and not executed as a script). Changing this forces a new resource to be created.
- `mode` = (Optional) Should this Node Pool be used for System or User resources? Possible values are `System` and `User`. Defaults to `User`.
- `min_count` = (Optional) The minimum number of nodes which should exist within this Node Pool. Valid values are between 0 and 1000 and must be less than or equal to `max_count`.
- `node_network_profile` = optional(object({ node_public_ip_tags = (Optional) Specifies a mapping of tags to the instance-level public IPs. Changing this forces a new resource to be created. }))
- `node_labels` = (Optional) A map of Kubernetes labels which should be applied to nodes in this Node Pool.
- `node_public_ip_prefix_id` = (Optional) Resource ID for the Public IP Addresses Prefix for the nodes in this Node Pool. `enable_node_public_ip` should be `true`. Changing this forces a new resource to be created.
- `node_taints` = (Optional) A list of Kubernetes taints which should be applied to nodes in the agent pool (e.g. `key=value:NoSchedule`). Changing this forces a new resource to be created.
- `orchestrator_version` = (Optional) Version of Kubernetes used for the Agents. If not specified, the latest recommended version will be used at provisioning time (but won't auto-upgrade). AKS does not require an exact patch version to be specified; minor version aliases such as `1.22` are also supported, in which case the minor version's latest GA patch is automatically chosen. More details can be found in the documentation. This version must be supported by the Kubernetes Cluster - as such the version of Kubernetes used on the Cluster/Control Plane may need to be upgraded first.
- `os_disk_size_gb` = (Optional) The Agent Operating System disk size in GB. Changing this forces a new resource to be created.
- `os_disk_type` = (Optional) The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created.
- `os_sku` = (Optional) Specifies the OS SKU used by the agent pool. Possible values include: `Ubuntu`, `CBLMariner`, `Mariner`, `Windows2019`, `Windows2022`. If not specified, the default is `Ubuntu` if OSType=Linux or `Windows2019` if OSType=Windows. The default Windows OSSKU will be changed to `Windows2022` after `Windows2019` is deprecated. Changing this forces a new resource to be created.
- `os_type` = (Optional) The Operating System which should be used for this Node Pool. Changing this forces a new resource to be created. Possible values are `Linux` and `Windows`. Defaults to `Linux`.
- `pod_subnet_id` = (Optional) The ID of the Subnet where the pods in the Node Pool should exist. Changing this forces a new resource to be created.
- `priority` = (Optional) The Priority for Virtual Machines within the Virtual Machine Scale Set that powers this Node Pool. Possible values are `Regular` and `Spot`. Defaults to `Regular`. Changing this forces a new resource to be created.
- `proximity_placement_group_id` = (Optional) The ID of the Proximity Placement Group where the Virtual Machine Scale Set that powers this Node Pool will be placed. Changing this forces a new resource to be created. When setting `priority` to `Spot` you must configure an `eviction_policy`, `spot_max_price` and add the applicable `node_labels` and `node_taints` as per the Azure Documentation.
- `spot_max_price` = (Optional) The maximum price you're willing to pay in USD per Virtual Machine. Valid values are `-1` (the current on-demand price for a Virtual Machine) or a positive value with up to five decimal places. Changing this forces a new resource to be created. This field can only be configured when `priority` is set to `Spot`.
- `scale_down_mode` = (Optional) Specifies how the node pool should deal with scaled-down nodes. Allowed values are `Delete` and `Deallocate`. Defaults to `Delete`.
- `snapshot_id` = (Optional) The ID of the Snapshot which should be used to create this Node Pool. Changing this forces a new resource to be created.
- `ultra_ssd_enabled` = (Optional) Used to specify whether the UltraSSD is enabled in the Node Pool. Defaults to `false`. See the documentation for more information. Changing this forces a new resource to be created.
- `vnet_subnet_id` = (Optional) The ID of the Subnet where this Node Pool should exist. Changing this forces a new resource to be created. A route table must be configured on this Subnet.
- `upgrade_settings` = optional(object({ drain_timeout_in_minutes = number, node_soak_duration_in_minutes = number, max_surge = string }))
- `windows_profile` = optional(object({ outbound_nat_enabled = optional(bool, true) }))
- `workload_runtime` = (Optional) Used to specify the workload runtime. Allowed values are `OCIContainer` and `WasmWasi`. WebAssembly System Interface node pools are in Public Preview - more information and details on how to opt into the preview can be found in this article.
- `zones` = (Optional) Specifies a list of Availability Zones in which this Kubernetes Cluster Node Pool should be located. Changing this forces a new Kubernetes Cluster Node Pool to be created.
- `create_before_destroy` = (Optional) Create a new node pool before destroying the old one when Terraform must update an argument that cannot be updated in-place. Setting this argument to `true` will add a random suffix to the pool's name to avoid a naming conflict. Defaults to `true`.

A hedged configuration sketch using some of these attributes follows the variables table below.

`map(object({...}))` | `{}` | no |
node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. Changing this forces a new resource to be created. | `string` | `null` | no |
oidc_issuer_enabled | Enable or Disable the OIDC issuer URL. Defaults to `false`. | `bool` | `false` | no |
only_critical_addons_enabled | (Optional) Enabling this option will taint the default node pool with the `CriticalAddonsOnly=true:NoSchedule` taint. Changing this forces a new resource to be created. | `bool` | `null` | no |
open_service_mesh_enabled | Is Open Service Mesh enabled? For more details, please visit Open Service Mesh for AKS. | `bool` | `null` | no |
orchestrator_version | Specify which Kubernetes release to use for the orchestration layer. The default used is the latest Kubernetes version available in the region. | `string` | `null` | no |
os_disk_size_gb | Disk size of nodes in GBs. | `number` | `50` | no |
os_disk_type | The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created. | `string` | `"Managed"` | no |
os_sku | (Optional) Specifies the OS SKU used by the agent pool. Possible values include: `Ubuntu`, `CBLMariner`, `Mariner`, `Windows2019`, `Windows2022`. If not specified, the default is `Ubuntu` if OSType=Linux or `Windows2019` if OSType=Windows. The default Windows OSSKU will be changed to `Windows2022` after `Windows2019` is deprecated. Changing this forces a new resource to be created. | `string` | `null` | no |
pod_subnet_id | (Optional) The ID of the Subnet where the pods in the default Node Pool should exist. Changing this forces a new resource to be created. | `string` | `null` | no |
prefix | (Optional) The prefix for the resources created in the specified Azure Resource Group. Omitting this variable requires both `var.cluster_log_analytics_workspace_name` and `var.cluster_name` to have been set. | `string` | `""` | no |
private_cluster_enabled | If true, the cluster API server will be exposed only on an internal IP address and will be available only within the cluster vnet. | `bool` | `false` | no |
private_cluster_public_fqdn_enabled | (Optional) Specifies whether a Public FQDN for this Private Cluster should be added. Defaults to `false`. | `bool` | `false` | no |
private_dns_zone_id | (Optional) Either the ID of the Private DNS Zone which should be delegated to this Cluster, `System` to have AKS manage this, or `None`. In case of `None` you will need to bring your own DNS server and set up resolving, otherwise the cluster will have issues after provisioning. Changing this forces a new resource to be created. | `string` | `null` | no |
public_ssh_key | A custom ssh key to control access to the AKS cluster. Changing this forces a new resource to be created. | `string` | `""` | no |
rbac_aad | (Optional) Is Azure Active Directory integration enabled? | `bool` | `true` | no |
rbac_aad_admin_group_object_ids | Object ID of groups with admin access. | `list(string)` | `null` | no |
rbac_aad_azure_rbac_enabled | (Optional) Is Role Based Access Control based on Azure AD enabled? | `bool` | `null` | no |
rbac_aad_client_app_id | The Client ID of an Azure Active Directory Application. | `string` | `null` | no |
rbac_aad_managed | Is the Azure Active Directory integration Managed, meaning that Azure will create/manage the Service Principal used for integration. | `bool` | `false` | no |
rbac_aad_server_app_id | The Server ID of an Azure Active Directory Application. | `string` | `null` | no |
rbac_aad_server_app_secret | The Server Secret of an Azure Active Directory Application. | `string` | `null` | no |
rbac_aad_tenant_id | (Optional) The Tenant ID used for Azure Active Directory Application. If this isn't specified the Tenant ID of the current Subscription is used. | `string` | `null` | no |
resource_group_name | The resource group name to be imported | `string` | n/a | yes |
role_based_access_control_enabled | Enable Role Based Access Control. | `bool` | `false` | no |
run_command_enabled | (Optional) Whether to enable run command for the cluster or not. | `bool` | `true` | no |
scale_down_mode | (Optional) Specifies the autoscaling behaviour of the Kubernetes Cluster. If not specified, it defaults to `Delete`. Possible values include `Delete` and `Deallocate`. Changing this forces a new resource to be created. | `string` | `"Delete"` | no |
secret_rotation_enabled | Is secret rotation enabled? This variable is only used when `key_vault_secrets_provider_enabled` is `true` and defaults to `false`. | `bool` | `false` | no |
secret_rotation_interval | The interval to poll for secret rotation. This attribute is only set when `secret_rotation` is `true` and defaults to `2m`. | `string` | `"2m"` | no |
service_mesh_profile | `mode` - (Required) The mode of the service mesh. Possible value is `Istio`. `internal_ingress_gateway_enabled` - (Optional) Is Istio Internal Ingress Gateway enabled? Defaults to `true`. `external_ingress_gateway_enabled` - (Optional) Is Istio External Ingress Gateway enabled? Defaults to `true`. | `object({...})` | `null` | no |
sku_tier | The SKU Tier that should be used for this Kubernetes Cluster. Possible values are `Free`, `Standard` and `Premium`. | `string` | `"Free"` | no |
snapshot_id | (Optional) The ID of the Snapshot which should be used to create this default Node Pool. `temporary_name_for_rotation` must be specified when changing this property. | `string` | `null` | no |
storage_profile_blob_driver_enabled | (Optional) Is the Blob CSI driver enabled? Defaults to `false`. | `bool` | `false` | no |
storage_profile_disk_driver_enabled | (Optional) Is the Disk CSI driver enabled? Defaults to `true`. | `bool` | `true` | no |
storage_profile_disk_driver_version | (Optional) Disk CSI Driver version to be used. Possible values are `v1` and `v2`. Defaults to `v1`. | `string` | `"v1"` | no |
storage_profile_enabled | Enable storage profile | `bool` | `false` | no |
storage_profile_file_driver_enabled | (Optional) Is the File CSI driver enabled? Defaults to `true`. | `bool` | `true` | no |
storage_profile_snapshot_controller_enabled | (Optional) Is the Snapshot Controller enabled? Defaults to `true`. | `bool` | `true` | no |
support_plan | The support plan which should be used for this Kubernetes Cluster. Possible values are `KubernetesOfficial` and `AKSLongTermSupport`. | `string` | `"KubernetesOfficial"` | no |
tags | Any tags that should be present on the AKS cluster resources | `map(string)` | `{}` | no |
temporary_name_for_rotation | (Optional) Specifies the name of the temporary node pool used to cycle the default node pool for VM resizing. Once this is set, `var.agents_size` is no longer ForceNew and can be resized by specifying `temporary_name_for_rotation`. | `string` | `null` | no |
tracing_tags_enabled | Whether to enable the tracing tags generated by BridgeCrew Yor. | `bool` | `false` | no |
tracing_tags_prefix | Default prefix for generated tracing tags. | `string` | `"avm_"` | no |
ultra_ssd_enabled | (Optional) Used to specify whether the UltraSSD is enabled in the Default Node Pool. Defaults to `false`. | `bool` | `false` | no |
vnet_subnet_id | (Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created. | `string` | `null` | no |
web_app_routing | `dns_zone_id` - (Required) Specifies the ID of the DNS Zone in which DNS entries are created for applications deployed to the cluster when Web App Routing is enabled. | `object({...})` | `null` | no |
workload_autoscaler_profile | `keda_enabled` - (Optional) Specifies whether KEDA Autoscaler can be used for workloads. `vertical_pod_autoscaler_enabled` - (Optional) Specifies whether Vertical Pod Autoscaler should be enabled. | `object({...})` | `null` | no |
workload_identity_enabled | Enable or Disable Workload Identity. Defaults to `false`. | `bool` | `false` | no |
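To make the inputs above more concrete, here is a minimal, illustrative module call that sets a few of the documented variables, including an extra Spot node pool configured the way the `node_pools` notes require (an `eviction_policy`, a `spot_max_price`, and matching `node_labels`/`node_taints`). The registry source address, version constraint, resource names, VM size, and the `name`/`vm_size` keys of the pool object are assumptions for illustration only; adjust them to your environment and consult the full variable type before relying on them.

```hcl
# Minimal sketch only. Source address, version, names and sizes are illustrative assumptions.
resource "azurerm_resource_group" "example" {
  name     = "rg-aks-example"
  location = "eastus"
}

module "aks" {
  source  = "Azure/aks/azurerm" # assumed registry address for this module
  version = "~> 9.0"            # assumed; pin to the major version you have validated

  resource_group_name = azurerm_resource_group.example.name # the only required input in the table above
  prefix              = "demoaks"

  sku_tier                    = "Free"
  temporary_name_for_rotation = "tmprotate" # allows agents_size to be resized in place

  # Extra node pool running Spot VMs; per the node_pools notes, Spot requires an
  # eviction_policy, a spot_max_price and the matching node_labels / node_taints.
  node_pools = {
    spot = {
      name            = "spot"            # assumed required key of the pool object
      vm_size         = "Standard_D2s_v3" # assumed required key of the pool object
      priority        = "Spot"
      eviction_policy = "Delete"
      spot_max_price  = -1 # pay up to the current on-demand price
      node_labels     = { "kubernetes.azure.com/scalesetpriority" = "spot" }
      node_taints     = ["kubernetes.azure.com/scalesetpriority=spot:NoSchedule"]
    }
  }

  tags = {
    environment = "demo"
  }
}
```

Treat this purely as a reading aid for the table above; the configurations under `examples` remain the authoritative, end-to-end references.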
Outputs
Name | Description |
---|---|
aci_connector_linux | The `aci_connector_linux` block of the `azurerm_kubernetes_cluster` resource. |
aci_connector_linux_enabled | Has `aci_connector_linux` been enabled on the `azurerm_kubernetes_cluster` resource? |
admin_client_certificate | The `client_certificate` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. Base64 encoded public certificate used by clients to authenticate to the Kubernetes cluster. |
admin_client_key | The `client_key` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. Base64 encoded private key used by clients to authenticate to the Kubernetes cluster. |
admin_cluster_ca_certificate | The `cluster_ca_certificate` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. Base64 encoded public CA certificate used as the root of trust for the Kubernetes cluster. |
admin_host | The `host` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. The Kubernetes cluster server host. |
admin_password | The `password` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. A password or token used to authenticate to the Kubernetes cluster. |
admin_username | The `username` in the `azurerm_kubernetes_cluster`'s `kube_admin_config` block. A username used to authenticate to the Kubernetes cluster. |
aks_id | The `azurerm_kubernetes_cluster`'s id. |
aks_name | The `azurerm_kubernetes_cluster`'s name. |
azure_policy_enabled | The `azurerm_kubernetes_cluster`'s `azure_policy_enabled` argument. Should the Azure Policy Add-On be enabled? For more details please visit Understand Azure Policy for Azure Kubernetes Service. |
azurerm_log_analytics_workspace_id | The id of the created Log Analytics workspace. |
azurerm_log_analytics_workspace_name | The name of the created Log Analytics workspace. |
azurerm_log_analytics_workspace_primary_shared_key | Specifies the workspace key of the Log Analytics workspace. |
client_certificate | The `client_certificate` in the `azurerm_kubernetes_cluster`'s `kube_config` block. Base64 encoded public certificate used by clients to authenticate to the Kubernetes cluster. |
client_key | The `client_key` in the `azurerm_kubernetes_cluster`'s `kube_config` block. Base64 encoded private key used by clients to authenticate to the Kubernetes cluster. |
cluster_ca_certificate | The `cluster_ca_certificate` in the `azurerm_kubernetes_cluster`'s `kube_config` block. Base64 encoded public CA certificate used as the root of trust for the Kubernetes cluster. |
cluster_fqdn | The FQDN of the Azure Kubernetes Managed Cluster. |
cluster_identity | The `azurerm_kubernetes_cluster`'s `identity` block. |
cluster_portal_fqdn | The FQDN for the Azure Portal resources when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
cluster_private_fqdn | The FQDN for the Kubernetes Cluster when private link has been enabled, which is only resolvable inside the Virtual Network used by the Kubernetes Cluster. |
generated_cluster_private_ssh_key | The cluster will use this generated private key as its SSH key when `var.public_ssh_key` is empty or null. Private key data in PEM (RFC 1421) format. |
generated_cluster_public_ssh_key | The cluster will use this generated public key as its SSH key when `var.public_ssh_key` is empty or null. The fingerprint of the public key data in OpenSSH MD5 hash format, e.g. `aa:bb:cc:...`. Only available if the selected private key format is compatible, similarly to `public_key_openssh` and the ECDSA P224 limitations. |
host | The `host` in the `azurerm_kubernetes_cluster`'s `kube_config` block. The Kubernetes cluster server host. |
http_application_routing_zone_name | The `azurerm_kubernetes_cluster`'s `http_application_routing_zone_name` argument. The Zone Name of the HTTP Application Routing. |
ingress_application_gateway | The `azurerm_kubernetes_cluster`'s `ingress_application_gateway` block. |
ingress_application_gateway_enabled | Whether the `ingress_application_gateway` block has been enabled on the `azurerm_kubernetes_cluster` resource. |
key_vault_secrets_provider | The `azurerm_kubernetes_cluster`'s `key_vault_secrets_provider` block. |
key_vault_secrets_provider_enabled | Whether the `key_vault_secrets_provider` block has been enabled on the `azurerm_kubernetes_cluster` resource. |
kube_admin_config_raw | The `azurerm_kubernetes_cluster`'s `kube_admin_config_raw` argument. Raw Kubernetes config for the admin account, to be used by kubectl and other compatible tools. This is only available when Role Based Access Control with Azure Active Directory is enabled and local accounts are enabled. |
kube_config_raw | The `azurerm_kubernetes_cluster`'s `kube_config_raw` argument. Raw Kubernetes config to be used by kubectl and other compatible tools. |
kubelet_identity | The `azurerm_kubernetes_cluster`'s `kubelet_identity` block. |
location | The `azurerm_kubernetes_cluster`'s `location` argument. The location where the Managed Kubernetes Cluster is created. |
network_profile | The `azurerm_kubernetes_cluster`'s `network_profile` block. |
node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. |
oidc_issuer_url | The OIDC issuer URL that is associated with the cluster. |
oms_agent | The `azurerm_kubernetes_cluster`'s `oms_agent` argument. |
oms_agent_enabled | Whether the `oms_agent` block has been enabled on the `azurerm_kubernetes_cluster` resource. |
open_service_mesh_enabled | Is Open Service Mesh enabled? For more details, please visit Open Service Mesh for AKS. |
password | The `password` in the `azurerm_kubernetes_cluster`'s `kube_config` block. A password or token used to authenticate to the Kubernetes cluster. |
username | The `username` in the `azurerm_kubernetes_cluster`'s `kube_config` block. A username used to authenticate to the Kubernetes cluster. |
web_app_routing_identity | The `azurerm_kubernetes_cluster`'s `web_app_routing_identity` block; its type is a list of objects. |
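These outputs can be consumed from a calling configuration like any other module outputs. The snippet below is a minimal, non-authoritative sketch: the use of the hashicorp/local provider and the kubeconfig file path are assumptions made purely for illustration, and the output names referenced are the ones documented in the table above.

```hcl
# Sketch: persist the raw kubeconfig and surface two frequently used outputs.
# The hashicorp/local provider and the file path below are illustrative assumptions.
resource "local_sensitive_file" "kubeconfig" {
  content  = module.aks.kube_config_raw
  filename = "${path.module}/kubeconfig-${module.aks.aks_name}"
}

output "aks_oidc_issuer_url" {
  description = "OIDC issuer URL of the cluster (useful for workload identity federation)."
  value       = module.aks.oidc_issuer_url
}

output "aks_node_resource_group" {
  description = "Auto-generated resource group that holds the cluster's node resources."
  value       = module.aks.node_resource_group
}
```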