Merge branch 'Azure:main' into main

This commit is contained in:
Houssem Dellai 2023-03-30 10:58:03 +02:00 committed by GitHub
Parent 8177117690 8872325c9e
Commit 83abccda93
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
11 changed files with 693 additions and 3 deletions

.vscode/cspell.json vendored
View file

@ -46,10 +46,13 @@
"jumpbox",
"Jumpstart",
"keyvault",
"Kubelet",
"Kubenet",
"letsencrypt",
"loadtest",
"lycheeverse",
"millicores",
"machinelearningservices",
"MSRC",
"nouser",
"npuser",
@ -57,6 +60,7 @@
"peristent",
"podid",
"Quickstart",
"Quickstarts",
"randomnumbers",
"ratingsapp",
"RESOURCEGROUP",
@ -67,6 +71,7 @@
"stracc",
"tanzu",
"templating",
"tfvars",
"tfvsars",
"Todos",

View file

@ -0,0 +1,270 @@
# Bursting from AKS to ACI
This example scenario shows how to rapidly scale out workload instances in Azure Kubernetes Service (AKS) using the serverless compute capacity provided by Azure Container Instances (ACI).
AKS virtual nodes let you use the compute capacity of Azure Container Instances (ACI) to spin up additional containers rather than bringing up additional VM-based worker nodes in the cluster. Virtual nodes extend the cluster onto Azure's serverless container hosting service through the Virtual Kubelet implementation. This integration requires the AKS cluster to be created with advanced networking, i.e. Azure Container Networking Interface (Azure CNI).
This deployable solution contains two parts:
* Deploying the infrastructure required for virtual nodes.
* Deploying the scalable application components to the AKS cluster and testing the scaling to virtual nodes.
To deploy this scenario, you will need the following:
- An active [Microsoft Azure](https://azure.microsoft.com/en-us/free "Microsoft Azure") subscription
- Azure Cloud Shell or a Bash shell with the following installed:
  - [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/overview?view=azure-cli-latest "Azure CLI")
  - [Kubernetes CLI (kubectl)](https://kubernetes.io/docs/tasks/tools/install-kubectl/ "Kubernetes CLI (kubectl)")
- Azure Bicep (optional)
**NOTE**: This scenario focuses on showcasing virtual nodes with AKS; for that purpose, some of the advanced security configurations such as a private cluster, Azure policies, and ingress controllers are skipped here. Please refer to the [AKS Accelerator scenarios](https://github.com/Azure/AKS-Landing-Zone-Accelerator/tree/main/Scenarios) for advanced, secure configurations.
# Infrastructure Deployment
## Create AKS Cluster with Virtual Nodes
Create a new AKS cluster with virtual nodes enabled or enable virtual nodes on an existing AKS cluster by following one of the options below:
### Option 1: Create a new AKS cluster with virtual nodes enabled
```bash
# Change the values for the parameters as needed for your own environment
az group create --name aksdemorg --location eastus
az deployment group create \
    --name aksinfra-demo \
    --resource-group aksdemorg \
    --template-file ./deployment/aksinfra.bicep \
    --parameters aksname='aksdemo' \
    --parameters acrname='aksacr0022' \
    --parameters vnetname='demo-vnet' \
    --parameters location='eastus'
```
Running the above Bicep template will create the following resources and configuration:
* One VNET with two subnets
* One AKS cluster with virtual nodes enabled
* One container registry
* Required RBAC assignments on the VNET and the ACR
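After the deployment completes, you can confirm that the virtual node add-on (`aciConnectorLinux`) is enabled on the cluster; a quick check, assuming the resource names used in the command above:
```bash
# Should print "true" once the virtual node add-on is enabled
az aks show -g aksdemorg -n aksdemo --query "addonProfiles.aciConnectorLinux.enabled" -o tsv
```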
### Option 2: Enable virtual nodes on an existing AKS cluster
Existing AKS clusters can be updated to enable virtual nodes. Make sure that advanced (Azure CNI) networking is configured for the cluster and that a new, dedicated, empty subnet exists in the same VNet.
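If you still need to create that dedicated subnet, a minimal sketch is shown below (the names and address prefix are placeholders; adjust them to your environment and make sure the prefix does not overlap with existing subnets):
```bash
# Create an empty, dedicated subnet for virtual nodes (ACI) in the cluster VNet
az network vnet subnet create \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <aci-subnet> \
  --address-prefixes 10.100.2.0/24
```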
```bash
# Enable virtual nodes on an existing AKS cluster
# Change the values as needed for your own environment
az aks enable-addons \
-g <resource-group> \
--name <aks-cluster> \
--addons virtual-node \
--subnet-name <aci-subnet>
```
## Create the storage account and file share
In this scenario, we will use an Azure file share as shared persistent storage consumed by multiple replicas of the application pods.
To create the file share, run the following Azure CLI commands:
```bash
# Change the values for these four parameters as needed for your own environment
AKS_PERS_STORAGE_ACCOUNT_NAME=acidemostorage$RANDOM
AKS_PERS_RESOURCE_GROUP=aksdemorg
AKS_PERS_LOCATION=eastus
AKS_PERS_SHARE_NAME=aksshare
# Create a storage account
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS
# Export the connection string as an environment variable; this is used when creating the Azure file share
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)
# Create the file share
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
# Echo storage account name and key
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY
```
# Application Deployment and Testing
## Validate the cluster
Before deploying the applications, make sure that the virtual nodes are up and running within the AKS cluster.
Run the following commands to connect to the cluster and list the cluster nodes:
```bash
az aks get-credentials --resource-group aksdemorg --name aksdemo
kubectl get nodes
```
The output will look similar to the following:
```bash
NAME STATUS ROLES AGE VERSION
aks-agentpool-74340005-vmss000000 Ready agent 13m v1.24.6
virtual-node-aci-linux Ready agent 11m v1.19.10-vk-azure-aci-1.4.8
```
The node *virtual-node-aci-linux* in the above output indicates that the virtual nodes are configured and running within the AKS cluster.
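You can also inspect the virtual node to see the taint it carries; only pods that tolerate this taint are scheduled onto it, which is why the sample deployment later in this guide includes the corresponding tolerations:
```bash
# Show the taints applied to the virtual node (expect a virtual-kubelet.io/provider taint)
kubectl describe node virtual-node-aci-linux | grep -i -A 2 taints
```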
## Push container image to Azure Container Registry
Before deploying the application to the AKS cluster, the image must be built and uploaded to the Azure Container Registry. To keep this exercise simple, we will import a publicly available image into ACR using the `az acr import` command and use it as our demo app. Alternatively, you can build your own application images and push them to ACR using Docker commands or CI/CD pipelines.
```bash
az acr import \
--name <acr-name> \
--source docker.io/library/nginx:latest \
--image aci-aks-demo:latest
```
## Configuring secrets
The application pods use the Azure file share as persistent storage, for which a Kubernetes secret referencing the storage account name and access key needs to be created.
Use the exported values from the previous section and run the following command:
```bash
kubectl create secret generic azure-secret \
--from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
--from-literal=azurestorageaccountkey=$STORAGE_KEY
```
Similarly, to pull images from the container registry, create a secret referencing the credentials of a service principal that has *AcrPull* access on the registry.
```bash
#!/bin/bash
# Modify for your environment.
# ACR_NAME: The name of your Azure Container Registry
# SERVICE_PRINCIPAL_NAME: Must be unique within your AD tenant
ACR_NAME=aksacr009
SERVICE_PRINCIPAL_NAME=demo-aks-acr-pullsecret
# Obtain the full registry ID
ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query "id" --output tsv)
# Create the service principal with rights scoped to the registry.
PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME --scopes $ACR_REGISTRY_ID --role acrpull --query "password" --output tsv)
USER_NAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME --query "[].appId" --output tsv)
# Output the service principal's credentials; use these in your services and applications to authenticate to the container registry.
echo "Service principal ID: $USER_NAME"
echo "Service principal password: $PASSWORD"
```
Now create a Kubernetes secret with the above credentials to access the container registry:
```bash
kubectl create secret docker-registry acr-pull-secret \
--namespace default \
--docker-server=$ACR_NAME.azurecr.io \
--docker-username=$USER_NAME \
--docker-password=$PASSWORD
```
## Deploy the application
By now, we have set up the cluster with virtual nodes and created all the necessary secrets in the cluster. The next step is to deploy the sample application.
Deploy the sample application to the AKS cluster using the following command. Make sure that you have updated the image reference under the container spec in the YAML file to point to your Azure Container Registry URL.
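If you prefer to patch the placeholder from the command line, a simple substitution such as the one below works (the registry name here is the example value used earlier in this guide; replace it with your own ACR name):
```bash
# Replace the <ACR_NAME> placeholder in the manifest (GNU sed; on macOS use: sed -i '' ...)
sed -i 's/<ACR_NAME>/aksacr0022/g' deployment/demoapp-deploy.yaml
```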
```bash
#Please make sure to modify the <ACR_NAME> in the yaml file before applying it.
kubectl apply -f deployment/demoapp-deploy.yaml
```
The above command deploys one replica of the application and creates a service to expose it on port 80. The application pod will have the Azure file share mounted at the */mnt/azure* directory.
Validate the deployment and service by running the following commands:
```bash
kubectl get deploy demoapp-deploy
NAME READY UP-TO-DATE AVAILABLE AGE
demoapp-deploy 1/1 1 1 14s
```
```bash
kubectl get svc demoapp-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-svc ClusterIP 10.0.50.82 <none> 80/TCP 114s
```
## Configure auto-scaling for the application
We now have the sample application running as part of the deployment, and the service is accessible on port 80. To scale the resources, we will use a Horizontal Pod Autoscaler (HPA) that scales up the pod replicas based on CPU usage when traffic increases and scales them back down when traffic decreases.
```bash
kubectl apply -f deployment/demoapp-hpa.yaml
```
Verify the HPA deployment:
```bash
kubectl get hpa
```
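The output will look similar to the following (the exact values will differ in your environment):
```bash
NAME          REFERENCE                   TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
demoapp-hpa   Deployment/demoapp-deploy   0%/50%    1         10        1          60s
```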
This output shows that the HPA maintains between 1 and 10 replicas of the pods controlled by the referenced deployment; 50% is the target average CPU utilization that the HPA tries to maintain, while the value reported next to it (0% here) is the current usage.
## Load Testing
To test the HPA in real time, we will increase the load on the cluster and check how the HPA responds by managing the resources.
First, we need to see the current status of the deployment:
```bash
kubectl get deploy
```
To simulate user load, we will start a few containers in a different namespace and send an infinite loop of queries to the demoapp service listening on port 80. This will drive up the CPU consumption in the target containers.
Open a new Bash terminal and execute the commands below:
```bash
kubectl create ns loadtest
kubectl apply -f deployment/load_deploy.yaml -n loadtest
```
Once you have triggered the load test, use the command below to watch the status of the HPA in real time:
```bash
kubectl get hpa -w
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
demoapp-hpa Deployment/demoapp-deploy 44%/50% 1 10 1 29m
demoapp-hpa Deployment/demoapp-deploy 56%/50% 1 10 2 29m
demoapp-hpa Deployment/demoapp-deploy 100%/50% 1 10 2 30m
```
You can now see that as the usage went up, the number of pods scaled up.
You should also see that the additional pods were scheduled on the virtual node.
```bash
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demoapp-deploy-7544f8b99d-g5kwj 1/1 Running 0 2m21s 10.100.0.26 virtual-node-aci-linux <none> <none>
demoapp-deploy-7544f8b99d-k4w99 1/1 Running 0 13m 10.100.1.28 aks-agentpool-74340005-vmss000000 <none> <none>
demoapp-deploy-7544f8b99d-sqkv8 1/1 Running 0 2m21s 10.100.0.29 virtual-node-aci-linux <none> <none>
```
Create a file with some dummy data in the mounted file share:
```bash
kubectl exec -it demoapp-deploy-b9fbcbfcb-57fq8 -- /bin/sh
# echo "hostname" > /mnt/azure/`hostname`
# ls /mnt/azure/
demoapp-deploy-b9fbcbfcb-57fq8
```
Validate the newly created file from one of the replicas running on a different node:
```bash
kubectl exec -it demoapp-deploy-85889899bc-rm6j5 -- sh
# ls /mnt/azure/
demoapp-deploy-b9fbcbfcb-57fq8
```
You can also view the files in the Azure file share:
![image](https://user-images.githubusercontent.com/40350122/216344051-8f0ca0ec-ba6f-43ba-b1c5-63f9d4887d59.png)
### Stop the load
Stop the load by deleting the *loadtest* namespace:
```bash
kubectl delete ns loadtest
```
Once the load is stopped, you will see the pod replicas scale back down.
```bash
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demoapp-deploy-b9fbcbfcb-57fq8 1/1 Running 0 21m 10.100.1.85 aks-agentpool-74340005-vmss000000 <none> <none>
```

View file

@ -0,0 +1,151 @@
// This Bicep template creates a VNET with 2 subnets, an Azure Container Registry and an AKS cluster with virtual nodes enabled
// and the necessary RBAC role assignments on the ACR and subnets.
param aksname string = 'aksdemo'
param acrname string = 'demoacr09836'
param vnetname string = 'demo-vnet'
param location string = 'eastus'

//Create ACR
resource containerRegistry 'Microsoft.ContainerRegistry/registries@2021-06-01-preview' = {
  name: acrname
  location: location
  sku: {
    name: 'Basic'
  }
  properties: {
    adminUserEnabled: false
  }
}

//Create VNET & Subnets
resource virtualNetwork_aks 'Microsoft.Network/virtualNetworks@2019-11-01' = {
  name: vnetname
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.100.0.0/16'
      ]
    }
    subnets: [
      {
        name: 'aci-subnet'
        properties: {
          addressPrefix: '10.100.0.0/24'
        }
      }
      {
        name: 'aks-subnet'
        properties: {
          addressPrefix: '10.100.1.0/24'
        }
      }
    ]
  }
}

//Create AKS Cluster
resource managedCluster_resource 'Microsoft.ContainerService/managedClusters@2022-06-02-preview' = {
  name: aksname
  location: location
  sku: {
    name: 'Basic'
    tier: 'Paid'
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    kubernetesVersion: '1.24.6'
    dnsPrefix: '${aksname}-dns'
    agentPoolProfiles: [
      {
        name: 'agentpool'
        count: 1
        vmSize: 'Standard_DS2_v2'
        osDiskSizeGB: 128
        osDiskType: 'Managed'
        kubeletDiskType: 'OS'
        vnetSubnetID: resourceId('Microsoft.Network/virtualNetworks/subnets', virtualNetwork_aks.name, 'aks-subnet')
        maxPods: 110
        type: 'VirtualMachineScaleSets'
        enableAutoScaling: false
        orchestratorVersion: '1.23.5'
        enableNodePublicIP: false
        mode: 'System'
        osType: 'Linux'
        osSKU: 'Ubuntu'
        enableFIPS: false
      }
    ]
    servicePrincipalProfile: {
      clientId: 'msi'
    }
    addonProfiles: {
      aciConnectorLinux: {
        enabled: true
        config: {
          SubnetName: 'aci-subnet'
        }
      }
      httpApplicationRouting: {
        enabled: false
      }
    }
    enableRBAC: true
    nodeResourceGroup: 'MC_${aksname}_${location}'
    networkProfile: {
      networkPlugin: 'azure'
      networkPolicy: 'azure'
      loadBalancerSku: 'Standard'
      serviceCidr: '10.200.0.0/18'
      dnsServiceIP: '10.200.0.10'
      dockerBridgeCidr: '172.17.0.1/16'
      outboundType: 'loadBalancer'
      serviceCidrs: [
        '10.200.0.0/18'
      ]
      ipFamilies: [
        'IPv4'
      ]
    }
    disableLocalAccounts: false
  }
}

//Assign Network Contributor Role on vnet/subnet
resource NwcontributorRoleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
  scope: subscription()
  name: '4d97b98b-1d4f-4787-a291-c67834d212e7'
}
resource vnetRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
  name: guid(resourceGroup().id, managedCluster_resource.id, NwcontributorRoleDefinition.id)
  scope: virtualNetwork_aks
  properties: {
    roleDefinitionId: NwcontributorRoleDefinition.id
    //principalId: aciConnectorManagedIdentity.properties.principalId
    principalId: managedCluster_resource.properties.addonProfiles.aciConnectorLinux.identity.objectId
    principalType: 'ServicePrincipal'
  }
}

//Assign acrPull Role on ACR
resource acrPullRoleDefinition 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' existing = {
  scope: subscription()
  name: '7f951dda-4ed3-4680-a7ca-43fe172d538d'
}
resource acrRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
  name: guid(resourceGroup().id, containerRegistry.id, acrPullRoleDefinition.id)
  scope: containerRegistry
  properties: {
    roleDefinitionId: acrPullRoleDefinition.id
    principalId: managedCluster_resource.properties.identityProfile.kubeletidentity.objectId
    principalType: 'ServicePrincipal'
  }
}

output aks_object string = managedCluster_resource.identity.principalId
output aciConnectorManagedIdentity string = managedCluster_resource.properties.addonProfiles.aciConnectorLinux.identity.objectId

View file

@ -0,0 +1,64 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoapp-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demoapp
  template:
    metadata:
      labels:
        app: demoapp
    spec:
      containers:
      - name: demoapp
        image: <ACR_NAME>.azurecr.io/aci-aks-demo:latest
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "50m"
        ports:
        - containerPort: 80
        volumeMounts:
        - name: azure
          mountPath: /mnt/azure
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
      volumes:
      - name: azure
        csi:
          driver: file.csi.azure.com
          readOnly: false
          volumeAttributes:
            secretName: azure-secret
            shareName: aksshare
---
apiVersion: v1
kind: Service
metadata:
  name: demoapp-svc
  labels:
    app: demoapp
spec:
  ports:
  - port: 80
  selector:
    app: demoapp

View file

@ -0,0 +1,12 @@
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demoapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demoapp-deploy
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

View file

@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: loadgen
  name: loadgen
spec:
  replicas: 10
  selector:
    matchLabels:
      app: loadgen
  strategy: {}
  template:
    metadata:
      labels:
        app: loadgen
    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - while sleep 0.001; do wget -q -O- http://demoapp-svc.default.svc.cluster.local; done
        image: busybox
        name: busybox
        resources: {}
status: {}

View file

@ -40,8 +40,8 @@ Step 2: (Optional - *if you don't do this, you'll have to manually update the ro
More info:
[Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/configure-kubenet)
[Application Gateway infrastructure configuration](https://docs.microsoft.com/en-us/azure/application-gateway/configuration-infrastructure#supported-user-defined-routes)
[Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/configure-kubenet)
[Application Gateway infrastructure configuration](https://learn.microsoft.com/en-us/azure/application-gateway/configuration-infrastructure#supported-user-defined-routes)
This deployment needs to reference data objects from the hub deployment and requires access to the pre-existing state file; update the variables as needed. It also needs a storage access key (from Azure) to read the storage account data. This is a sensitive value and should not be committed to the code repo.
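If this deployment uses the Terraform azurerm backend (as the reference to a remote state file suggests), one common pattern is to pass the storage access key through the `ARM_ACCESS_KEY` environment variable instead of committing it; a minimal sketch, with the hub state storage account and resource group names as placeholders:
```bash
# Look up the hub state storage account key and expose it to Terraform without storing it in the repo
export ARM_ACCESS_KEY=$(az storage account keys list \
  --resource-group <hub-state-resource-group> \
  --account-name <hub-state-storage-account> \
  --query '[0].value' -o tsv)
```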

View file

@ -14,6 +14,6 @@ Customers are using AKS-HCI to run cloud-native workloads, modernize legacy Wind
## Next
* If you have no hardware, To deploy AKS on Azure Stack HCI in an Azure VM try out our [eval guide](https://docs.microsoft.com/en-us/azure-stack/aks-hci/aks-hci-evaluation-guide)
* If you have no hardware, To deploy AKS on Azure Stack HCI in an Azure VM try out our [eval guide](https://learn.microsoft.com/en-us/azure-stack/aks-hci/aks-hci-evaluation-guide)
* If you already have a configured cluster, try out the [Jumpstart guide](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_k8s/aks_stack_hci/aks_hci_powershell/). The commands described in this scenario should be run on the management computer or on a host server in the cluster.

View file

@ -0,0 +1,61 @@
# Using Azure ML on a Secure AKS Cluster
This architectural pattern describes how to deploy an AKS cluster to be used by Azure ML. This follows the guiding tenets of the [Azure Well-Architected Framework](https://learn.microsoft.com/azure/architecture/framework/).
At its core, this pattern provides a prescriptive way to use Azure Machine Learning in a private AKS cluster using the following topology:
- An AKS Private Cluster
- Jumpbox
- Azure Bastion
- Azure Machine Learning Workspace
- Azure Container Registry
![Architectural diagram for the AzureML baseline scenario.](./media/aks-ml-baseline.png)
In the scenario described above, the desired outcome is to apply these changes without affecting the applications and workloads hosted in the AKS cluster.
This pattern is also the basis for the mission-critical deployment of workloads on AKS; the main difference is that, in that scenario, resiliency and AKS distribution across multiple regions are the main drivers and elements of the solution.
For this solution, we will be using the `Machine Learning End-to-End Secure` template available at the [Azure Quickstarts repository](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure). This template can be deployed via the Azure Portal, ARM or through Bicep. We will demonstrate how to deploy the solution using Bicep.
## Procedure using Bicep
1. Clone the `machine-learning-end-to-end-secure` template from the `azure-quickstart-templates` Git repository
Since the `azure-quickstart-templates` repository contains templates for many Azure solutions, we will check out only the `Machine Learning End-to-End Secure` directory:
```bash
git clone --depth 1 --no-checkout https://github.com/Azure/azure-quickstart-templates.git
cd azure-quickstart-templates
git sparse-checkout set quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure
git checkout
```
At this point you have successfully cloned only the `quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure` directory to your local machine.
With the repository cloned locally, navigate to the `quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure` directory to deploy the Bicep template:
```bash
cd quickstarts/microsoft.machinelearningservices/machine-learning-end-to-end-secure
```
1. Configure the variables for your environment:
```bash
# setup these variables to reflect your environment
RG_NAME=az-k8s-aml-rg
LOCATION=EastUS
```
1. Deploy the template
```bash
# create a resource group
az group create -l $LOCATION -n $RG_NAME
# deploy the solution
az deployment group create -g $RG_NAME --template-file main.bicep
```
## After deployment
After the deployment is done, you can use the solution by navigating to the Azure Portal and opening the VM through Azure Bastion.
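To confirm the deployment completed successfully before connecting, you can query its provisioning state; this assumes the default deployment name `main`, which the CLI derives from the template file name when `--name` is not supplied:
```bash
# Expect "Succeeded" once the template deployment has finished
az deployment group show -g $RG_NAME -n main --query properties.provisioningState -o tsv
```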

Binary data
Scenarios/AzureML-on-Private-AKS/media/aks-ml-baseline.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 34 KiB

View file

@ -0,0 +1,100 @@
# Enable Prometheus metric collection & Integration with Azure Managed Grafana
## Introduction
- This guidance helps in configuring your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus.
- It also covers the creation of an Azure Managed Grafana workspace and linking it to the Azure Monitor workspace.
## Prerequisites to create Azure Monitor workspace
- The cluster must use [managed identity authentication](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/azure-monitor-workspace-overview).
- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor Workspace.
- Microsoft.ContainerService
- Microsoft.Insights
- Microsoft.AlertsManagement
- Register the AKS-PrometheusAddonPreview feature flag in the subscription of the AKS cluster with the following Azure CLI command: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`.
- Azure CLI version 2.41.0 or higher and aks-preview extension version 0.5.122 or higher are required for this feature. You can check the installed versions using the `az version` command.
> **Important**: Azure Monitor managed service for Prometheus is intended for storing information about the service health of customer machines and applications. It is not intended for storing any data classified as Personally Identifiable Information (PII) or End User Identifiable Information (EUII). We strongly recommend that you do not send any sensitive information (usernames, credit card numbers, etc.) into Azure Monitor managed service for Prometheus fields such as metric names, label names, or label values.
For more details, refer to https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/prometheus-metrics-overview
## Enable Prometheus metric collection
> Login into Azure CLI
```bash
az login
```
> Update Subscription
```bash
az account set --subscription "subscription-id"
```
> Register Feature
```bash
az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview
```
> Add preview-extension
```bash
az extension add --name aks-preview
```
> Create a new default Azure Monitor workspace. If no Azure Monitor workspace is specified, a default one named DefaultAzureMonitorWorkspace-<mapped_region> will be created in the DefaultRG-<cluster_region> resource group. This Azure Monitor workspace will be in the region specified in the region mappings.
```bash
az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
```
OR
> Use an existing Azure Monitor workspace. If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data will be available in Grafana.
```bash
az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
```
## Grafana integration with Azure Monitor Workspace
> Prerequisites
- Azure Subscription
- Minimum required role to create an instance: resource group Contributor.
- Minimum required role to access an instance: resource group Owner.
> Implementation
1. Create an Azure Managed Grafana workspace
```bash
az grafana create --name <managed-grafana-resource-name> --resource-group <resourcegroupname> -l <Location>
```
**Note:** Azure Managed Grafana is available only in specific regions. Before deploying, please choose an appropriate region.
Now let's check whether you can access your new Managed Grafana instance. Take note of the endpoint URL ending in eus.grafana.azure.com listed in the CLI output.
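If you need the endpoint again later, it can be retrieved from the workspace resource (a quick lookup, assuming the same resource names used in the create command above):
```bash
# Print the Grafana endpoint URL for the workspace
az grafana show --name <managed-grafana-resource-name> --resource-group <resourcegroupname> --query properties.endpoint -o tsv
```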
![Grafana Dashboard](https://user-images.githubusercontent.com/50182145/215081171-da0d9b79-a3ec-4408-9fad-3eadc2e1a0d5.png)
For more information on this, check out the doc [Create an Azure Managed Grafana instance using the Azure CLI](https://learn.microsoft.com/en-us/azure/managed-grafana/quickstart-managed-grafana-cli)
**Note**: Azure Managed Grafana currently does not support connecting with personal Microsoft accounts. For additional information, please refer to https://learn.microsoft.com/en-us/azure/managed-grafana/quickstart-managed-grafana-cli
## Link the Azure Monitor workspace to Grafana
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](https://learn.microsoft.com/en-us/azure/managed-grafana/overview).
Connect Grafana to your Azure monitor workspace by following the instructions in [Connect your Azure Monitor workspace to a Grafana workspace](https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/azure-monitor-workspace-overview#link-a-grafana-workspace). You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
> Below are the steps to complete this:
- Open the Azure Monitor workspace menu in the Azure portal
- Select your workspace
- Click "Linked Grafana Workspaces"
- Select a Grafana workspace
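Alternatively, the link can be established from the CLI when enabling the metrics add-on by passing the Grafana workspace resource ID; a hedged sketch using the same preview tooling shown earlier (all values are placeholders):
```bash
az aks update --enable-azuremonitormetrics \
  -n <cluster-name> -g <cluster-resource-group> \
  --azure-monitor-workspace-resource-id <azure-monitor-workspace-resource-id> \
  --grafana-resource-id <grafana-workspace-resource-id>
```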