Microsoft Azure Container Service Engine - Network Plugin
There are four different network plugin options:
- Azure Container Networking (default)
- Kubenet
- Flannel (docs are //TODO)
- Cilium (docs are //TODO)
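
Although the Flannel and Cilium docs are still TODO, the configuration shape is presumably analogous to the other plugins. A minimal sketch, assuming these plugins are selected via the same `networkPlugin` field (the exact accepted values should be confirmed against the current acs-engine cluster definition schema):

```json
"properties": {
  "orchestratorProfile": {
    "orchestratorType": "Kubernetes",
    "kubernetesConfig": {
      "networkPlugin": "flannel"
    }
  }
}
```

Substituting `"cilium"` for `"flannel"` would follow the same pattern.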
Azure Container Networking (default)
By default (currently for Linux clusters only), the `azure` network plugin is applied. It is an open source implementation of both the CNI network plugin interface and the CNI IPAM plugin interface.
CNI brings the containers to a single flat L3 Azure subnet. This enables full integration with other SDN features such as network security groups and VNET peering. The plugin creates a bridge for each underlying Azure VNET. The bridge functions in L2 mode and is connected to the host network interface.
If the container host VM has multiple network interfaces, the primary network interface is reserved for management traffic. A secondary interface is used for container traffic whenever possible.
More detailed documentation can be found in the Azure Container Networking repository.
Example of a template enabling CNI:

```json
"properties": {
  "orchestratorProfile": {
    "orchestratorType": "Kubernetes",
    "kubernetesConfig": {
      "networkPlugin": "azure"
    }
  }
  ...
}
```
Or by not specifying any network plugin, leaving the default:

```json
"properties": {
  "orchestratorProfile": {
    "orchestratorType": "Kubernetes"
  }
  ...
}
```
Kubenet
Also available is the Kubernetes-native kubenet implementation, which is configured as follows:

```json
"properties": {
  "orchestratorProfile": {
    "orchestratorType": "Kubernetes",
    "kubernetesConfig": {
      "networkPlugin": "kubenet"
    }
  }
  ...
}
```