docs -> learn and remove locale

Parent: 0ab0c17cdd
Commit: 4e1bb64a27
@@ -4,7 +4,7 @@

 In order to deploy Azure resources manually through bicep or terraform, you can use the following options:

-- [Windows Subsystem for Linux](https://docs.microsoft.com/windows/wsl/about#what-is-wsl-2)
+- [Windows Subsystem for Linux](https://learn.microsoft.com/windows/wsl/about#what-is-wsl-2)
 - [Azure Cloud Shell](https://shell.azure.com)
 - Linux Bash Shell
 - MacOS Shell

@@ -18,10 +18,10 @@ If you opt-in to setup a shell on your machine, there are required access and to

 > :warning: The user or service principal initiating the deployment process _must_ have the following minimal set of Azure Role-Based Access Control (RBAC) roles:
 >
-> - [Contributor role](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#contributor) is _required_ at the subscription level to have the ability to create resource groups and perform deployments.
-> - [Network Contributor role](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#network-contributor) is _required_ at the subscription level to have the ability to create and modify Virtual Network resources.
-> - [User Access Administrator role](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#user-access-administrator) is _required_ at the subscription level since you'll be granting least-privilege RBAC access to managed identities.
-> - One such example is detailed in the [Container Insights documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-troubleshoot#authorization-error-during-onboarding-or-update-operation).
+> - [Contributor role](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#contributor) is _required_ at the subscription level to have the ability to create resource groups and perform deployments.
+> - [Network Contributor role](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#network-contributor) is _required_ at the subscription level to have the ability to create and modify Virtual Network resources.
+> - [User Access Administrator role](https://learn.microsoft.com/azure/role-based-access-control/built-in-roles#user-access-administrator) is _required_ at the subscription level since you'll be granting least-privilege RBAC access to managed identities.
+> - One such example is detailed in the [Container Insights documentation](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-troubleshoot#authorization-error-during-onboarding-or-update-operation).

 Example for role assignment of current logged in User. If Service Principal or Managed Identity is used, please replace OID with the object id of those credentials

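A minimal Azure CLI sketch of the role assignments this hunk describes (the subscription lookup, the loop, and the use of `az ad signed-in-user show` to resolve the object id are illustrative assumptions, not part of the diffed docs):

```bash
# Resolve the object id (OID) of the currently signed-in user; substitute the
# object id of a Service Principal or Managed Identity if one is used instead.
OID=$(az ad signed-in-user show --query id -o tsv)
SUBSCRIPTION_ID=$(az account show --query id -o tsv)

# Grant the three subscription-level roles called out above.
for ROLE in "Contributor" "Network Contributor" "User Access Administrator"; do
  az role assignment create \
    --assignee-object-id "$OID" \
    --assignee-principal-type User \
    --role "$ROLE" \
    --scope "/subscriptions/$SUBSCRIPTION_ID"
done
```
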
@@ -41,16 +41,16 @@ If you opt-in to setup a shell on your machine, there are required access and to

 > :warning: The user or service principal initiating the deployment process _must_ have the following minimal set of Azure AD permissions assigned:
 >
-> - Azure AD [User Administrator](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles#user-administrator-permissions) is _required_ to create a "break glass" AKS admin Active Directory Security Group and User. Alternatively, you could get your Azure AD admin to create this for you when instructed to do so.
-> - If you are not part of the User Administrator group in the tenant associated to your Azure subscription, please consider [creating a new tenant](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant#create-a-new-tenant-for-your-organization) to use while evaluating this implementation.
+> - Azure AD [User Administrator](https://learn.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles#user-administrator-permissions) is _required_ to create a "break glass" AKS admin Active Directory Security Group and User. Alternatively, you could get your Azure AD admin to create this for you when instructed to do so.
+> - If you are not part of the User Administrator group in the tenant associated to your Azure subscription, please consider [creating a new tenant](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant#create-a-new-tenant-for-your-organization) to use while evaluating this implementation.

 3. Required software components.

 >If you opt for Azure Cloud Shell, you don't need to complete these steps and can jump on the next section (step 4).

->On Windows, you can use the Ubuntu on [Windows Subsystem for Linux](https://docs.microsoft.com/windows/wsl/about#what-is-wsl-2) to run Bash. Once your bash shell is up you will need to install these prerequisites.
+>On Windows, you can use the Ubuntu on [Windows Subsystem for Linux](https://learn.microsoft.com/windows/wsl/about#what-is-wsl-2) to run Bash. Once your bash shell is up you will need to install these prerequisites.

-> Install latest [Azure CLI installed](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest)
+> Install latest [Azure CLI installed](https://learn.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest)

 ```bash
 sudo apt install azure-cli

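The bash block above is cut off by the hunk boundary. As a hedged follow-up, a quick way to confirm the tooling is ready before running any Bicep deployment (the exact prerequisite list in the full document may differ) is:

```bash
# Verify the CLI, make sure Bicep support is present, then sign in.
az version
az bicep install        # installs or upgrades the Bicep CLI bundled with az
az bicep version
az login
az account set --subscription "<subscription-id>"
```
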
@@ -20,7 +20,7 @@ To customize the sample bicep templates provided based on your specific needs, f

 Customize these files based on your specific deployment requirements for each resource.

-4. Test the deployment of each Azure resource individually using the [Azure CLI](https://docs.microsoft.com/azure/azure-resource-manager/bicep/deploy-cli) or [PowerShell command](https://docs.microsoft.com/azure/azure-resource-manager/bicep/deploy-powershell).
+4. Test the deployment of each Azure resource individually using the [Azure CLI](https://learn.microsoft.com/azure/azure-resource-manager/bicep/deploy-cli) or [PowerShell command](https://learn.microsoft.com/azure/azure-resource-manager/bicep/deploy-powershell).

 For example to deploy the cluster with Azure CLI in eastus2 run:

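The command that follows "For example to deploy the cluster with Azure CLI in eastus2 run:" sits outside this hunk. A hedged sketch of what such a Bicep deployment typically looks like, with placeholder resource group and template names (the `location` and `geoRedundancyLocation` parameters match those shown in a later hunk):

```bash
# Deploy a cluster stamp Bicep template into an existing resource group.
az deployment group create \
  --resource-group <resource-group-name> \
  --template-file <path-to-cluster-template>.bicep \
  --parameters location=eastus2 geoRedundancyLocation=centralus
```
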
@@ -38,7 +38,7 @@ To customize the sample GitHub pipeline provided based on your specific needs, f

 Note that this sample workflow file deploys Azure resources respectively in the hub and spoke resource groups as specified in the [AKS Baseline Reference Implementation](https://github.com/mspnp/aks-baseline).

-3. Configure the GitHub Actions to access Azure resources through [Workload Identity federation with OpenID Connect](https://docs.microsoft.com/azure/developer/github/connect-from-azure?tabs=azure-portal%2Cwindows#use-the-azure-login-action-with-openid-connect). This is a more secure access method than using Service Principals because you won't have to manage any secret. Use the script in [this md file](../../docs/oidc-federated-credentials.md) to set it up.
+3. Configure the GitHub Actions to access Azure resources through [Workload Identity federation with OpenID Connect](https://learn.microsoft.com/azure/developer/github/connect-from-azure?tabs=azure-portal%2Cwindows#use-the-azure-login-action-with-openid-connect). This is a more secure access method than using Service Principals because you won't have to manage any secret. Use the script in [this md file](../../docs/oidc-federated-credentials.md) to set it up.

 ## Kick-off the GitHub action workflow

@@ -65,7 +65,7 @@ param location string = 'eastus2'
   'westus'
   'westus2'
 ])
-@description('For Azure resources that support native geo-redundancy, provide the location the redundant service will have its secondary. Should be different than the location parameter and ideally should be a paired region - https://docs.microsoft.com/azure/best-practices-availability-paired-regions. This region does not need to support availability zones.')
+@description('For Azure resources that support native geo-redundancy, provide the location the redundant service will have its secondary. Should be different than the location parameter and ideally should be a paired region - https://learn.microsoft.com/azure/best-practices-availability-paired-regions. This region does not need to support availability zones.')
 param geoRedundancyLocation string = 'centralus'

 /*** VARIABLES ***/

@@ -585,7 +585,7 @@ module PodFailedScheduledQuery '../CARML/Microsoft.Insights/scheduledQueryRules/
 criterias: {
   'allOf': [
     {
-      query: '//https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-alerts \r\n let endDateTime = now(); let startDateTime = ago(1h); let trendBinSize = 1m; let clusterName = "${clusterName}"; KubePodInventory | where TimeGenerated < endDateTime | where TimeGenerated >= startDateTime | where ClusterName == clusterName | distinct ClusterName, TimeGenerated | summarize ClusterSnapshotCount = count() by bin(TimeGenerated, trendBinSize), ClusterName | join hint.strategy=broadcast ( KubePodInventory | where TimeGenerated < endDateTime | where TimeGenerated >= startDateTime | distinct ClusterName, Computer, PodUid, TimeGenerated, PodStatus | summarize TotalCount = count(), PendingCount = sumif(1, PodStatus =~ "Pending"), RunningCount = sumif(1, PodStatus =~ "Running"), SucceededCount = sumif(1, PodStatus =~ "Succeeded"), FailedCount = sumif(1, PodStatus =~ "Failed") by ClusterName, bin(TimeGenerated, trendBinSize) ) on ClusterName, TimeGenerated | extend UnknownCount = TotalCount - PendingCount - RunningCount - SucceededCount - FailedCount | project TimeGenerated, TotalCount = todouble(TotalCount) / ClusterSnapshotCount, PendingCount = todouble(PendingCount) / ClusterSnapshotCount, RunningCount = todouble(RunningCount) / ClusterSnapshotCount, SucceededCount = todouble(SucceededCount) / ClusterSnapshotCount, FailedCount = todouble(FailedCount) / ClusterSnapshotCount, UnknownCount = todouble(UnknownCount) / ClusterSnapshotCount| summarize AggregatedValue = avg(FailedCount) by bin(TimeGenerated, trendBinSize)'
+      query: '//https://learn.microsoft.com/azure/azure-monitor/insights/container-insights-alerts \r\n let endDateTime = now(); let startDateTime = ago(1h); let trendBinSize = 1m; let clusterName = "${clusterName}"; KubePodInventory | where TimeGenerated < endDateTime | where TimeGenerated >= startDateTime | where ClusterName == clusterName | distinct ClusterName, TimeGenerated | summarize ClusterSnapshotCount = count() by bin(TimeGenerated, trendBinSize), ClusterName | join hint.strategy=broadcast ( KubePodInventory | where TimeGenerated < endDateTime | where TimeGenerated >= startDateTime | distinct ClusterName, Computer, PodUid, TimeGenerated, PodStatus | summarize TotalCount = count(), PendingCount = sumif(1, PodStatus =~ "Pending"), RunningCount = sumif(1, PodStatus =~ "Running"), SucceededCount = sumif(1, PodStatus =~ "Succeeded"), FailedCount = sumif(1, PodStatus =~ "Failed") by ClusterName, bin(TimeGenerated, trendBinSize) ) on ClusterName, TimeGenerated | extend UnknownCount = TotalCount - PendingCount - RunningCount - SucceededCount - FailedCount | project TimeGenerated, TotalCount = todouble(TotalCount) / ClusterSnapshotCount, PendingCount = todouble(PendingCount) / ClusterSnapshotCount, RunningCount = todouble(RunningCount) / ClusterSnapshotCount, SucceededCount = todouble(SucceededCount) / ClusterSnapshotCount, FailedCount = todouble(FailedCount) / ClusterSnapshotCount, UnknownCount = todouble(UnknownCount) / ClusterSnapshotCount| summarize AggregatedValue = avg(FailedCount) by bin(TimeGenerated, trendBinSize)'
       timeAggregation: 'Average'
       metricMeasureColumn: 'AggregatedValue'
       operator: 'GreaterThan'

@@ -129,7 +129,7 @@ module nsgAppGw '../CARML/Microsoft.Network/networkSecurityGroups/deploy.bicep'
 {
   name: 'AllowControlPlaneInBound'
   properties: {
-    description: 'Allow Azure Control Plane in. (https://docs.microsoft.com/azure/application-gateway/configuration-infrastructure#network-security-groups)'
+    description: 'Allow Azure Control Plane in. (https://learn.microsoft.com/azure/application-gateway/configuration-infrastructure#network-security-groups)'
     protocol: '*'
     sourcePortRange: '*'
     sourceAddressPrefix: '*'

@@ -143,7 +143,7 @@ module nsgAppGw '../CARML/Microsoft.Network/networkSecurityGroups/deploy.bicep'
 {
   name: 'AllowHealthProbesInBound'
   properties: {
-    description: 'Allow Azure Health Probes in. (https://docs.microsoft.com/azure/application-gateway/configuration-infrastructure#network-security-groups)'
+    description: 'Allow Azure Health Probes in. (https://learn.microsoft.com/azure/application-gateway/configuration-infrastructure#network-security-groups)'
     protocol: '*'
     sourcePortRange: '*'
     sourceAddressPrefix: 'AzureLoadBalancer'

@@ -1,11 +1,11 @@
 # Authentication from GitHub to Azure

 The recommended method of Azure login/authentication is with OpenId Connect using a Federated Identity Credential.
-Please follow [this guide](https://docs.microsoft.com/azure/developer/github/connect-from-azure) to create the correct credential.
+Please follow [this guide](https://learn.microsoft.com/azure/developer/github/connect-from-azure) to create the correct credential.

 ## Scripted Setup

-This repository uses a script to provide a simple way to create a GitHub OIDC federated credential, it is based on the steps outlined here: [https://docs.microsoft.com/azure/developer/github/connect-from-azure](https://docs.microsoft.com/azure/developer/github/connect-from-azure).
+This repository uses a script to provide a simple way to create a GitHub OIDC federated credential, it is based on the steps outlined here: [https://learn.microsoft.com/azure/developer/github/connect-from-azure](https://learn.microsoft.com/azure/developer/github/connect-from-azure).

 The script will create a new application, assign the correct Azure RBAC permissions for the Subscription **OR** Resource Group containing your AKS cluster, and create Federated Identity Credentials for both an environment and branch.

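As a hedged illustration of what the referenced script automates, the sequence below creates an application, a service principal, an RBAC assignment, and a branch-scoped federated credential. The display name, role, scope, and repository are placeholders; the script in `../../docs/oidc-federated-credentials.md` remains the source of truth.

```bash
# Create the Azure AD application and its service principal.
APP_ID=$(az ad app create --display-name "github-oidc-deploy" --query appId -o tsv)
az ad sp create --id "$APP_ID"

# Grant RBAC on the target subscription (or narrow the scope to a resource group).
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
az role assignment create --assignee "$APP_ID" --role Contributor \
  --scope "/subscriptions/$SUBSCRIPTION_ID"

# Federated credential trusting GitHub Actions runs from the main branch.
az ad app federated-credential create --id "$APP_ID" --parameters '{
  "name": "github-main-branch",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<org>/<repo>:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}'
```
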
@@ -1,6 +1,6 @@
 # terraform (still in development but you can still try it out)

-This folder contains the code to build the [AKS Baseline reference implementation](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks) using [CAF Terraform Landing zone framework composition](https://github.com/aztfmod/terraform-azurerm-caf).
+This folder contains the code to build the [AKS Baseline reference implementation](https://learn.microsoft.com/azure/architecture/reference-architectures/containers/aks/secure-baseline-aks) using [CAF Terraform Landing zone framework composition](https://github.com/aztfmod/terraform-azurerm-caf).

 The following components will be deployed as part of this automation:

@@ -58,7 +58,7 @@ To customize the sample GitHub pipeline provided based on your specific needs, f
 |FLUX_TOKEN| [GitHub Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) for Flux V2||

 Note: do not modify the names of these secrets in the workflow yaml file as they are expected in terraform to be named as shown above.
-Also instead of using a Service Principal and storing the secret in the GitHub Cloud, we will setup [Federated Identity](https://docs.microsoft.com/en-us/azure/developer/github/connect-from-azure?tabs=azure-portal%2Cwindows#use-the-azure-login-action-with-openid-connect) once it is supported by terraform.
+Also instead of using a Service Principal and storing the secret in the GitHub Cloud, we will setup [Federated Identity](https://learn.microsoft.com/azure/developer/github/connect-from-azure?tabs=azure-portal%2Cwindows#use-the-azure-login-action-with-openid-connect) once it is supported by terraform.

 2. Update the workflow [IaC-terraform-AKS.yml](../../.github/workflows/IaC-terraform-AKS.yml) with the name of the Environment you created in the previous step. The default Environment name is "Terraform". Commit the changes to your remote GitHub branch so that you can run the workflow.
 Note that this sample workflow file deploys Azure resources respectively in the hub and spoke resource groups as specified in the [AKS Baseline Reference Implementation](https://github.com/mspnp/aks-baseline).

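If you prefer to script the secret creation rather than use the GitHub UI, the GitHub CLI is one option. The repository is a placeholder, and the secret names must match what the workflow expects, as the note above stresses; any other secrets from the (truncated) table above are set the same way.

```bash
# Repository-level secret for Flux V2 (the name must stay exactly FLUX_TOKEN).
gh secret set FLUX_TOKEN --repo <org>/<repo> --body "<github-personal-access-token>"

# Environment-scoped secrets target the environment the workflow uses (default "Terraform").
gh secret set <SECRET_NAME> --env Terraform --repo <org>/<repo> --body "<value>"
```
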
@@ -102,7 +102,7 @@ metadata:
   namespace: a0008
   annotations:
     kubernetes.io/ingress.allow-http: "false"
-    # defines controller implementing this ingress resource: https://docs.microsoft.com/en-us/azure/dev-spaces/how-to/ingress-https-traefik
+    # defines controller implementing this ingress resource: https://learn.microsoft.com/azure/dev-spaces/how-to/ingress-https-traefik
     # ingress.class annotation is being deprecated in Kubernetes 1.18: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
     # For backwards compatibility, when this annotation is set, precedence is given over the new field ingressClassName under spec.
     kubernetes.io/ingress.class: traefik-internal

@@ -102,7 +102,7 @@ metadata:
   namespace: a0008
   annotations:
     kubernetes.io/ingress.allow-http: "false"
-    # defines controller implementing this ingress resource: https://docs.microsoft.com/en-us/azure/dev-spaces/how-to/ingress-https-traefik
+    # defines controller implementing this ingress resource: https://learn.microsoft.com/azure/dev-spaces/how-to/ingress-https-traefik
     # ingress.class annotation is being deprecated in Kubernetes 1.18: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
     # For backwards compatibility, when this annotation is set, precedence is given over the new field ingressClassName under spec.
     kubernetes.io/ingress.class: traefik-internal

@@ -21,10 +21,10 @@ In order to manage the complexity of a Kubernetes based solution deployment, it
 * The **Infrastructure team** responsible for automating the deployment of AKS and the Azure resources that it depends on, such as ACR, KeyVault, Managed Identities, Log Analytics, etc. We will provide sample code to show you how to implement such automation using Infrastructure as Code (IaC). We will use a CI/CD Pipeline built using GitHub Actions and offer you the option to choose between Bicep or Terraform for the code to deploy these resources.
 * The **Networking team**, which the Infrastructure team has to coordinate their activities closely with and which is responsible for all the networking components of the solution such as Vnets, DNS, App Gateways, etc.
 * The **Application team** responsible for automating the deployment of their application services into AKS and managing their release to production using a Blue/Green or Canary approach. We will provide sample code and guidance for how these teams can accomplish their goals by packaging their service using helm and deploying them either through a CI/CD pipeline such as GitHub Actions or a GitOp tools such as Flux or ArgoCD.
-* The **Shared-Services team** responsible for maintaining the overall health of the AKS clusters and the common components that run on them, such as monitoring, networking, security and other utility services. We will provide sample code and guidance for how to bootstrap these services as part of the initial AKS deployment and also how to automate their on-going life-cycle management. These Shared-Services, may be AKS add-ons such as [AAD Pod identity](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) or [Secret Store CSI Driver Provider](https://github.com/Azure/secrets-store-csi-driver-provider-azure), 3rd party such as [Prisma defender](https://docs.paloaltonetworks.com/prisma/prisma-cloud) or [Splunk](https://github.com/splunk/splunk-connect-for-kubernetes) daemonset, or open source such as [KEDA](https://keda.sh), [External-dns](https://github.com/kubernetes-sigs/external-dns#:~:text=ExternalDNS%20supports%20multiple%20DNS%20providers%20which%20have%20been,and%20we%20have%20limited%20resources%20to%20test%20changes.) or [Cert-manager](https://cert-manager.io/docs/). This team is also responsible for the lifecycle management of the clusters, such as making sure that updates/upgrades are periodically performed on the cluster, its nodes, the Shared-Services running in it and that cluster configuration changes are seamlessly conducted as needed without impacting the applications.
+* The **Shared-Services team** responsible for maintaining the overall health of the AKS clusters and the common components that run on them, such as monitoring, networking, security and other utility services. We will provide sample code and guidance for how to bootstrap these services as part of the initial AKS deployment and also how to automate their on-going life-cycle management. These Shared-Services, may be AKS add-ons such as [AAD Pod identity](https://learn.microsoft.com/azure/aks/use-azure-ad-pod-identity) or [Secret Store CSI Driver Provider](https://github.com/Azure/secrets-store-csi-driver-provider-azure), 3rd party such as [Prisma defender](https://docs.paloaltonetworks.com/prisma/prisma-cloud) or [Splunk](https://github.com/splunk/splunk-connect-for-kubernetes) daemonset, or open source such as [KEDA](https://keda.sh), [External-dns](https://github.com/kubernetes-sigs/external-dns#:~:text=ExternalDNS%20supports%20multiple%20DNS%20providers%20which%20have%20been,and%20we%20have%20limited%20resources%20to%20test%20changes.) or [Cert-manager](https://cert-manager.io/docs/). This team is also responsible for the lifecycle management of the clusters, such as making sure that updates/upgrades are periodically performed on the cluster, its nodes, the Shared-Services running in it and that cluster configuration changes are seamlessly conducted as needed without impacting the applications.
 * The **Security team** is responsible in making sure that security is built into the pipeline and all components deployed are secured by default. They will also be responsible for maintaining the Azure Policies, NSGs, firewalls rules outside the cluster as well as all security related configuration within the AKS cluster, such as Kubernetes Network Policies, RBAC or authentication and authorization rules within a Service Mesh.

-Each team will be responsible for maintaining their own automation pipeline. These pipelines access to Azure should only be granted through a [Service Principal](https://docs.microsoft.com/azure/aks/kubernetes-service-principal?tabs=azure-cli), a [Managed Identity](https://docs.microsoft.com/azure/aks/use-managed-identity?msclkid=de7668b4afff11ecaaaa893f1acc9f0f) or preferably a [Federated Identity](https://docs.microsoft.com/azure/active-directory/develop/workload-identity-federation) with the minimum set of permissions required to automatically perform the tasks that the team is responsible for.
+Each team will be responsible for maintaining their own automation pipeline. These pipelines access to Azure should only be granted through a [Service Principal](https://learn.microsoft.com/azure/aks/kubernetes-service-principal?tabs=azure-cli), a [Managed Identity](https://learn.microsoft.com/azure/aks/use-managed-identity?msclkid=de7668b4afff11ecaaaa893f1acc9f0f) or preferably a [Federated Identity](https://learn.microsoft.com/azure/active-directory/develop/workload-identity-federation) with the minimum set of permissions required to automatically perform the tasks that the team is responsible for.

 ## Infrastructure as Code

@@ -4,7 +4,7 @@

 Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).

-If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc751383(v=technet.10)), please report it to us as described below.
+If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://learn.microsoft.com/previous-versions/tn-archive/cc751383(v=technet.10)), please report it to us as described below.

 ## Reporting Security Issues

@@ -59,7 +59,7 @@ Note that most of the parameters requested above will only be available to you a
 ## Kured

 Kured is included as a solution to handle occasional required reboots from daily OS patching. No customization is required for this service to get it started.
-This open-source software component is only needed if you require a managed rebooting solution between weekly [node image upgrades](https://docs.microsoft.com/azure/aks/node-image-upgrade). Building a process around deploying node image upgrades [every week](https://github.com/Azure/AKS/releases) satisfies most organizational weekly patching cadence requirements. Combined with most security patches on Linux not requiring reboots often, this leaves your cluster in a well supported state. If weekly node image upgrades satisfies your business requirements, then remove Kured from this solution by deleting [`kured.yaml`](./cluster-baseline-settings/kured.yaml). If however weekly patching using node image upgrades is not sufficient and you need to respond to daily security updates that mandate a reboot ASAP, then using a solution like Kured will help you achieve that objective.
+This open-source software component is only needed if you require a managed rebooting solution between weekly [node image upgrades](https://learn.microsoft.com/azure/aks/node-image-upgrade). Building a process around deploying node image upgrades [every week](https://github.com/Azure/AKS/releases) satisfies most organizational weekly patching cadence requirements. Combined with most security patches on Linux not requiring reboots often, this leaves your cluster in a well supported state. If weekly node image upgrades satisfies your business requirements, then remove Kured from this solution by deleting [`kured.yaml`](./cluster-baseline-settings/kured.yaml). If however weekly patching using node image upgrades is not sufficient and you need to respond to daily security updates that mandate a reboot ASAP, then using a solution like Kured will help you achieve that objective.

 Note that the image for kured is sourced from a public registry and should be changed to your local registry in the **kured.yaml** file prior to use in your environment.

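For reference, the weekly node image upgrade process mentioned above can be driven from the Azure CLI as sketched below; the resource group, cluster, and node pool names are placeholders:

```bash
# Check whether a newer node image is available for a node pool.
az aks nodepool get-upgrades \
  --resource-group <resource-group> --cluster-name <aks-cluster> --nodepool-name <nodepool>

# Apply only the node image update, leaving the Kubernetes version unchanged.
az aks nodepool upgrade \
  --resource-group <resource-group> --cluster-name <aks-cluster> --name <nodepool> \
  --node-image-only
```
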
@@ -99,7 +99,7 @@ metadata:
   namespace: a0008
   annotations:
     kubernetes.io/ingress.allow-http: "false"
-    # defines controller implementing this ingress resource: https://docs.microsoft.com/en-us/azure/dev-spaces/how-to/ingress-https-traefik
+    # defines controller implementing this ingress resource: https://learn.microsoft.com/azure/dev-spaces/how-to/ingress-https-traefik
     # ingress.class annotation is being deprecated in Kubernetes 1.18: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
     # For backwards compatibility, when this annotation is set, precedence is given over the new field ingressClassName under spec.
     kubernetes.io/ingress.class: traefik-internal

@@ -4,7 +4,7 @@ metadata:
   name: azure-vote
   annotations:
     kubernetes.io/ingress.allow-http: "false"
-    # defines controller implementing this ingress resource: https://docs.microsoft.com/en-us/azure/dev-spaces/how-to/ingress-https-traefik
+    # defines controller implementing this ingress resource: https://learn.microsoft.com/azure/dev-spaces/how-to/ingress-https-traefik
     # ingress.class annotation is being deprecated in Kubernetes 1.18: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
     # For backwards compatibility, when this annotation is set, precedence is given over the new field ingressClassName under spec.
     kubernetes.io/ingress.class: traefik-internal

@@ -1,6 +1,6 @@
 ## Option \#2 Pull-based CI/CD(GitOps)

-This article outlines deploying with the pull option as described in the [automated deployment for container applications](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-with-aks) article. To deploy the **Option \#2 Pull-based CI/CD Architecture** scenario, follow the steps outlined [here](README.md) (if you haven't already), then perform the following steps:
+This article outlines deploying with the pull option as described in the [automated deployment for container applications](https://learn.microsoft.com/azure/architecture/example-scenario/apps/devops-with-aks) article. To deploy the **Option \#2 Pull-based CI/CD Architecture** scenario, follow the steps outlined [here](README.md) (if you haven't already), then perform the following steps:

 1. Fork this repo to your GitHub: https://github.com/Azure/aks-baseline-automation. Note: Be sure to uncheck "Copy the main branch only".
 2. Go to Actions on the forked repo and enable Workflows as shown: <https://github.com/YOURUSERNAME/aks-baseline-automation/actions>

@@ -34,7 +34,7 @@ This article outlines deploying with the pull option as described in the [automa
 You should have the following 3 Federated credentials similar to what is shown *in* the following screenshot:
 ![](media/0664a3dd619ba6e98b475b29856e6c57.png)
 Next you need to create the Environment and GitHub Actions Repository secrets *in* your repo.
-5. Create Actions secrets for your Azure subscription in your GitHub Repository *\#Reference: https://docs.microsoft.com/en-us/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux\#use-the-azure-login-action-with-a-service-principal-secret*
+5. Create Actions secrets for your Azure subscription in your GitHub Repository *\#Reference: https://learn.microsoft.com/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux\#use-the-azure-login-action-with-a-service-principal-secret*
 1. Navigate to Github Actions Secrets in your browser: From your repo select *Settings* > on the left plane select *Secrets* > select *Actions* in the dropdown
 2. Select *New repository secret*
 3. Name the secret AZURE_CREDENTIALS in the *Name* field

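The AZURE_CREDENTIALS value referenced in step 5 is the JSON emitted when creating a service principal with a secret. A hedged sketch follows; the display name is a placeholder, and the linked article favors the OIDC approach over this one where possible.

```bash
# Emit the JSON blob expected by azure/login when using a service principal secret;
# paste the full output into the AZURE_CREDENTIALS repository secret.
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
az ad sp create-for-rbac \
  --name "github-pull-deploy" \
  --role Contributor \
  --scopes "/subscriptions/$SUBSCRIPTION_ID" \
  --sdk-auth
```
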
@@ -1,6 +1,6 @@
 ## Option \#1 Push-based CI/CD

-This article outlines deploying with the push option as described in the [automated deployment for container applications](https://docs.microsoft.com/en-us/azure/architecture/example-scenario/apps/devops-with-aks) article. To deploy the **Option \#1 Push-based CI/CD Architecture** scenario, follow the steps outlined [here](README.md) (if you haven't already), then perform the following steps:
+This article outlines deploying with the push option as described in the [automated deployment for container applications](https://learn.microsoft.com/azure/architecture/example-scenario/apps/devops-with-aks) article. To deploy the **Option \#1 Push-based CI/CD Architecture** scenario, follow the steps outlined [here](README.md) (if you haven't already), then perform the following steps:

 1. Fork this repo to your GitHub: https://github.com/Azure/aks-baseline-automation. Note: Be sure to uncheck "Copy the main branch only".
 2. Go to Actions on the forked repo and enable Workflows as shown: <https://github.com/YOURUSERNAME/aks-baseline-automation/actions>

@@ -34,7 +34,7 @@ This article outlines deploying with the push option as described in the [automa
 You should have the following 3 Federated credentials similar to what is shown *in* the following screenshot:
 ![](media/0664a3dd619ba6e98b475b29856e6c57.png)
 Next you need to create the Environment and GitHub Actions Repository secrets *in* your repo.
-5. Create Actions secrets for your Azure subscription in your GitHub Repository *\#Reference: https://docs.microsoft.com/en-us/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux\#use-the-azure-login-action-with-a-service-principal-secret*
+5. Create Actions secrets for your Azure subscription in your GitHub Repository *\#Reference: https://learn.microsoft.com/azure/developer/github/connect-from-azure?tabs=azure-portal%2Clinux\#use-the-azure-login-action-with-a-service-principal-secret*
 1. Navigate to Github Actions Secrets in your browser: From your repo select *Settings* > on the left plane select *Secrets* > select *Actions* in the dropdown
 2. Select *New repository secret*
 3. Click *Add secret*

@@ -4,7 +4,7 @@

 This sample uses Docker to build a container image on the GitHub runner from source, before pushing the image to an Azure Container Registry. The workflow then uses several GitHub actions from the [Azure org](https://github.com/Azure) to deploy the application.

-The application is the [ASP.Net Hello World](https://github.com/mspnp/aks-baseline/tree/main/workload), which is used in the [AKS baseline Reference Implementation](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks/baseline-aks). It is a simple ASP.Net Core web application that displays Hello World and some information from the cluster.
+The application is the [ASP.Net Hello World](https://github.com/mspnp/aks-baseline/tree/main/workload), which is used in the [AKS baseline Reference Implementation](https://learn.microsoft.com/azure/architecture/reference-architectures/containers/aks/baseline-aks). It is a simple ASP.Net Core web application that displays Hello World and some information from the cluster.

 ## Sample info

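A hedged sketch of the build-and-push flow this sample's workflow performs on the runner; the registry and image names are placeholders:

```bash
# Authenticate docker against the Azure Container Registry, then build and push.
az acr login --name <acr-name>
docker build -t <acr-name>.azurecr.io/aspnet-hello-world:v1 .
docker push <acr-name>.azurecr.io/aspnet-hello-world:v1
```
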
@@ -23,6 +23,6 @@ Using Docker to build container images is a very familiar process for most devel
 Using GitHub actions as part of your workflow abstracts the Kubernetes binaries and commands from the deployment process. The Azure GitHub actions provide a simple but powerful method of deploying.

 ## Prerequisites for running this workflow
-In order for this workflow to successfully deploy the application on the [AKS Baseline Reference Implementation](https://github.com/mspnp/aks-baseline), you will need to change the "Networking" settings of your ACR to allow [public access](https://docs.microsoft.com/en-us/azure/container-registry/data-loss-prevention#azure-cli). Otherwise the GitHub runner hosted in the Cloud won't be able to access your ACR to push the docker image. You will also need to enable [Admin account](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication?tabs=azure-cli#admin-account) in your ACR so that "docker login" can be used with a token to authenticate.
+In order for this workflow to successfully deploy the application on the [AKS Baseline Reference Implementation](https://github.com/mspnp/aks-baseline), you will need to change the "Networking" settings of your ACR to allow [public access](https://learn.microsoft.com/azure/container-registry/data-loss-prevention#azure-cli). Otherwise the GitHub runner hosted in the Cloud won't be able to access your ACR to push the docker image. You will also need to enable [Admin account](https://learn.microsoft.com/azure/container-registry/container-registry-authentication?tabs=azure-cli#admin-account) in your ACR so that "docker login" can be used with a token to authenticate.

-Note that both of these steps will weaken the security of your ACR as well as the security of the workloads running on your cluster. Therefore, a better approach is to keep the ACR default settings and instead deploy [Self-hosted GitHub Runners](#self-hosted-github-runners) in your Azure Virtual Network so that they can access your ACR securely through [Private Endpoints](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-private-link).
+Note that both of these steps will weaken the security of your ACR as well as the security of the workloads running on your cluster. Therefore, a better approach is to keep the ACR default settings and instead deploy [Self-hosted GitHub Runners](#self-hosted-github-runners) in your Azure Virtual Network so that they can access your ACR securely through [Private Endpoints](https://learn.microsoft.com/azure/container-registry/container-registry-private-link).

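The two ACR setting changes described above can be made from the CLI as shown below (the registry name is a placeholder). As the text warns, both weaken security, so prefer the self-hosted runner approach where you can.

```bash
# Allow public network access to the registry (weakens the baseline's network posture).
az acr update --name <acr-name> --public-network-enabled true

# Enable the admin account so "docker login" can authenticate with a token.
az acr update --name <acr-name> --admin-enabled true
az acr credential show --name <acr-name>   # retrieves the admin username and passwords
```
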
@@ -4,7 +4,7 @@

 This sample leverages Azure Container Registry to build a container image from source. The workflow then uses several GitHub actions from the [Azure org](https://github.com/Azure) to deploy the application.

-The application is the [AKS Voting App](https://github.com/Azure-Samples/azure-voting-app-redis), which is used in the [AKS Getting Started Guide](https://docs.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli). It is a 2 container application that allows the user to use a Web UI to vote between Cats/Dogs, the votes are recorded in a Redis cache.
+The application is the [AKS Voting App](https://github.com/Azure-Samples/azure-voting-app-redis), which is used in the [AKS Getting Started Guide](https://learn.microsoft.com/azure/aks/learn/quick-kubernetes-deploy-cli). It is a 2 container application that allows the user to use a Web UI to vote between Cats/Dogs, the votes are recorded in a Redis cache.

 ## Sample info

@@ -34,17 +34,17 @@ The reusable workflow file is located [here](/.github/workflows/app-azurevote-ac

 ### ACR Build

-The primary responsibility of the Azure Container Registry is to store a container image. ACR can also take a DockerFile and associated files to [build a container image](https://docs.microsoft.com/azure/container-registry/container-registry-quickstart-task-cli).
+The primary responsibility of the Azure Container Registry is to store a container image. ACR can also take a DockerFile and associated files to [build a container image](https://learn.microsoft.com/azure/container-registry/container-registry-quickstart-task-cli).

-Using ACR to build the container image offloads build agent responsibility and allows the build to happen in isolation (if using a [dedicated agent pool](https://docs.microsoft.com/azure/container-registry/tasks-agent-pools)). It eliminates the need for storing extra credentials which are normally leveraged to do a Docker Push.
+Using ACR to build the container image offloads build agent responsibility and allows the build to happen in isolation (if using a [dedicated agent pool](https://learn.microsoft.com/azure/container-registry/tasks-agent-pools)). It eliminates the need for storing extra credentials which are normally leveraged to do a Docker Push.

 ### Azure GitHub Actions

 Using GitHub actions as part of your workflow abstracts the Kubernetes binaries and commands from the deployment process. The Azure GitHub actions provide a simple but powerful method of deploying.

 ## Prerequisites for running this workflow
-In order for this workflow to successfully deploy the application on the [AKS Baseline Reference Implementation](https://github.com/mspnp/aks-baseline), you will need to change the "Networking" settings of your ACR to allow [public access](https://docs.microsoft.com/en-us/azure/container-registry/data-loss-prevention#azure-cli). Otherwise the GitHub runner hosted in the Cloud won't be able to access your ACR to push the docker image.
+In order for this workflow to successfully deploy the application on the [AKS Baseline Reference Implementation](https://github.com/mspnp/aks-baseline), you will need to change the "Networking" settings of your ACR to allow [public access](https://learn.microsoft.com/azure/container-registry/data-loss-prevention#azure-cli). Otherwise the GitHub runner hosted in the Cloud won't be able to access your ACR to push the docker image.

 Note that this step will weaken the security of your ACR as well as the security of the workloads running on your cluster. Therefore, a better approach is to keep the ACR default settings and instead:
-1. Deploy [Self-hosted GitHub Runners](#self-hosted-github-runners) in your Azure Virtual Network so that they can access your ACR securely through [Private Endpoints](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-private-link).
-2. Optionally also deploy an [ACR Task dedicated agent pool](https://docs.microsoft.com/en-us/azure/container-registry/tasks-agent-pools) so that your image is built on a runner within your Azure virtual network.
+1. Deploy [Self-hosted GitHub Runners](#self-hosted-github-runners) in your Azure Virtual Network so that they can access your ACR securely through [Private Endpoints](https://learn.microsoft.com/azure/container-registry/container-registry-private-link).
+2. Optionally also deploy an [ACR Task dedicated agent pool](https://learn.microsoft.com/azure/container-registry/tasks-agent-pools) so that your image is built on a runner within your Azure virtual network.

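For reference, the registry-side build the sample relies on reduces to a single CLI call; the registry name, image tag, and build context are placeholders. If you deploy the dedicated agent pool mentioned above, the build can be targeted at it (see the linked tasks-agent-pools article for the exact option).

```bash
# Build the image inside Azure Container Registry instead of on the GitHub runner.
az acr build \
  --registry <acr-name> \
  --image azure-vote-front:v1 \
  <path-to-build-context>
```
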
@@ -2,9 +2,9 @@

 ## Overview

-This sample leverages the AKS Run Command ([aks command invoke](https://docs.microsoft.com/en-us/azure/aks/command-invoke)) and performs comprehensive validation steps to ensure the application has been deployed properly.
+This sample leverages the AKS Run Command ([aks command invoke](https://learn.microsoft.com/azure/aks/command-invoke)) and performs comprehensive validation steps to ensure the application has been deployed properly.

-The application is the [AKS Voting App](https://github.com/Azure-Samples/azure-voting-app-redis), which is used in the [AKS Getting Started Guide](https://docs.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-cli). It is a 2 container application that allows the user to use a Web UI to vote between Cats/Dogs, the votes are recorded in a Redis cache.
+The application is the [AKS Voting App](https://github.com/Azure-Samples/azure-voting-app-redis), which is used in the [AKS Getting Started Guide](https://learn.microsoft.com/azure/aks/learn/quick-kubernetes-deploy-cli). It is a 2 container application that allows the user to use a Web UI to vote between Cats/Dogs, the votes are recorded in a Redis cache.

 ## Workflow steps

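A hedged example of the AKS Run Command invocation this sample builds on; the resource group, cluster, and namespace are placeholders:

```bash
# Run kubectl inside the (private) cluster via the AKS API, without direct network access.
az aks command invoke \
  --resource-group <resource-group> \
  --name <aks-cluster> \
  --command "kubectl get pods -n <namespace>"
```
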
@@ -90,11 +90,11 @@ The reusable workflow file is located [here](/.github/workflows/App-AzureVote-He
 [Helm](https://helm.sh/) is a package manager for Kubernetes, used to package and deploy applications with ease.
 The Helm chart is written using [subcharts](https://helm.sh/docs/topics/charts/) for the deployments, whilst the parent Helm chart creates the Ingress and NetworkPolicy resources.

-The helm charts are packaged as **AzureVote-helm.tgz** and placed under the .\workloads\azure-vote folder of this repo. For information about how to create the helm charts for this application, refer to [this article](https://docs.microsoft.com/en-us/azure/aks/quickstart-helm?tabs=azure-cli#create-your-helm-chart).
+The helm charts are packaged as **AzureVote-helm.tgz** and placed under the .\workloads\azure-vote folder of this repo. For information about how to create the helm charts for this application, refer to [this article](https://learn.microsoft.com/azure/aks/quickstart-helm?tabs=azure-cli#create-your-helm-chart).

 ### AKS Run Command

-The [AKS Run Command](https://docs.microsoft.com/azure/aks/command-invoke) allows you to remotely invoke commands in an AKS cluster through the AKS API. This can greatly assist with access to a private cluster when the client is not on the cluster private network while still retaining and enforcing full RBAC controls.
+The [AKS Run Command](https://learn.microsoft.com/azure/aks/command-invoke) allows you to remotely invoke commands in an AKS cluster through the AKS API. This can greatly assist with access to a private cluster when the client is not on the cluster private network while still retaining and enforcing full RBAC controls.

 ### Key Steps in the Action Workflow

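A hedged sketch of packaging the chart and deploying it through the Run Command, using the archive name mentioned above; the chart path, release name, namespace, and cluster identifiers are placeholders:

```bash
# Package the parent chart with its subcharts; the repo stores the result as AzureVote-helm.tgz.
helm package ./azure-vote -d .

# Deploy through the AKS Run Command; --file uploads the archive alongside the command.
az aks command invoke \
  --resource-group <resource-group> \
  --name <aks-cluster> \
  --command "helm upgrade --install azure-vote AzureVote-helm.tgz -n <namespace>" \
  --file AzureVote-helm.tgz
```
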
@@ -28,7 +28,7 @@ spec:
 # name: flask-ingress
 # annotations:
 # kubernetes.io/ingress.allow-http: "false"
-# # defines controller implementing this ingress resource: https://docs.microsoft.com/en-us/azure/dev-spaces/how-to/ingress-https-traefik
+# # defines controller implementing this ingress resource: https://learn.microsoft.com/azure/dev-spaces/how-to/ingress-https-traefik
 # # ingress.class annotation is being deprecated in Kubernetes 1.18: https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
 # # For backwards compatibility, when this annotation is set, precedence is given over the new field ingressClassName under spec.
 # kubernetes.io/ingress.class: traefik-internal