Reference architecture design in markdown (#9)

* Update OSS related files
* Remove obsolete ado pipeline
* Add reference architecture documentation
This commit is contained in:
Senthuran Sivananthan 2021-10-09 19:59:02 -04:00 committed by GitHub
Parent 176a8c1c03
Commit a4847742ae
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
49 changed files with 2386 additions and 43 deletions


@@ -1,39 +0,0 @@
# ----------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
#
# THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
# EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES
# OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
# ----------------------------------------------------------------------------------
trigger: none

pool:
  vmImage: ubuntu-latest

stages:
- stage: CheckBicepCompileStage
  displayName: Checks - Bicep Compile Stage
  jobs:
  - deployment: CheckBicepCompileJob
    displayName: Checks - Bicep Compile Job
    environment: ${{ variables['Build.SourceBranchName'] }}
    strategy:
      runOnce:
        deploy:
          steps:
          - checkout: self
          - task: Bash@3
            displayName: Compile all bicep templates
            name: CompileBiceps
            inputs:
              targetType: 'inline'
              script: |
                find . -type f -name '*.bicep' | xargs -tn1 az bicep build -f
              workingDirectory: ${{ variables['Build.SourcesDirectory'] }}


@@ -1,6 +1,17 @@
# Contributing Reference Implementation
We are very happy to accept community contributions to this reference implementation, whether those are Pull Requests, Feature Suggestions or Bug Reports. Please note that by participating in this project, you agree to abide by the [Code of Conduct](CODE_OF_CONDUCT.md), as well as the terms of the [CLA](#cla).
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Getting Started


@@ -0,0 +1,293 @@
# Archetype: Generic Subscription
## Table of Contents
* [Overview](#overview)
* [Schema Definition](#schema-definition)
* [Example Deployment Parameters](#example-deployment-parameters)
* [Deployment Instructions](#deployment-instructions)
## Overview
Teams can request subscriptions for **General Use** from CloudOps team with up to Owner permissions, thus democratizing access to deploy, configure, and manage their applications with limited involvement from CloudOps team. CloudOps team can choose to limit the permission using custom roles as deemed appropriate based on risk and requirements.
Examples of generalized use include:
* Prototypes & Proof of Concepts
* Lift & Modernize
* Specialized architectures including commercial/ISV software deployments
Azure Policies are used to provide governance, compliance and protection while enabling teams to use their preferred toolset to consume Azure services.
![Archetype: Generic Subscription](../media/architecture/archetype-generic-subscription.jpg)
**CloudOps team will be required for**
1. Establishing connectivity to Hub virtual network (required for egress traffic flow & Azure Bastion).
2. Creating App Registrations (required for service principal accounts). This step is only required when App Registration creation is disabled for all users.
**Workflow**
* A new subscription is created through existing process (either via ea.azure.com or Azure Portal).
* The subscription will automatically be assigned to the **pubsecSandbox** management group.
* CloudOps will create a Service Principal Account (via App Registration) that will be used for future DevOps automation.
* CloudOps will scaffold the subscription with baseline configuration.
* CloudOps will hand over the subscription to requesting team.
**Subscription Move**
A subscription can be moved to a target Management Group through Azure ARM Templates/Bicep. Subscription moves have been incorporated into the landing zone Azure DevOps Pipeline automation.
**Capabilities**
| Capability | Description |
| --- | --- |
| Service Health Alerts | Configures Service Health alerts such as Security, Incident, Maintenance. Alerts are configured with email, sms and voice notifications. |
| Azure Security Center | Configures security contact information (email and phone). |
| Subscription Role Assignments | Configures subscription scoped role assignments. Roles can be built-in or custom. |
| Subscription Budget | Configures a monthly subscription budget with email notification. By default, the budget is created for a 10-year period with the configured amount. |
| Subscription Tags | A set of tags that are assigned to the subscription. |
| Resource Tags | A set of tags that are assigned to the resource group and resources. These tags must include all required tags as defined in the Tag Governance policy. |
| Automation | Deploys an Azure Automation Account in each subscription. |
| Hub Networking | Configures virtual network peering to the Hub Network, which is required for egress traffic flow and hub-managed DNS resolution (on-premises or other spokes, private endpoints). |
| Networking | A spoke virtual network with a minimum of 4 zones: oz (Operational Zone), paz (Public Access Zone), rz (Restricted Zone), hrz (Highly Restricted Zone). Additional subnets can be configured at deployment time using configuration (see below). |
## Schema Definition
Reference implementation uses parameter files with `object` parameters to consolidate parameters based on their context. The schema types are:
* v0.1.0
  * [Spoke deployment parameters definition](../../schemas/v0.1.0/landingzones/lz-generic-subscription.json)
* Common types
  * [Service Health Alerts](../../schemas/v0.1.0/landingzones/types/serviceHealthAlerts.json)
  * [Azure Security Center](../../schemas/v0.1.0/landingzones/types/securityCenter.json)
  * [Subscription Role Assignments](../../schemas/v0.1.0/landingzones/types/subscriptionRoleAssignments.json)
  * [Subscription Budget](../../schemas/v0.1.0/landingzones/types/subscriptionBudget.json)
  * [Subscription Tags](../../schemas/v0.1.0/landingzones/types/subscriptionTags.json)
  * [Resource Tags](../../schemas/v0.1.0/landingzones/types/resourceTags.json)
* Spoke types
  * [Automation](../../schemas/v0.1.0/landingzones/types/automation.json)
  * [Hub Network](../../schemas/v0.1.0/landingzones/types/hubNetwork.json)
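As an illustration of how such parameter files could be checked before deployment, the sketch below verifies that a file declares a set of expected top-level parameters. The helper function and the parameter subset are hypothetical; the reference implementation validates against the JSON Schema files listed above, not this code.

```python
import json

# Hypothetical subset of top-level parameters a landing-zone file is expected
# to declare (illustrative only; the real contract is the JSON Schema files).
REQUIRED_PARAMETERS = {
    "serviceHealthAlerts", "securityCenter", "subscriptionRoleAssignments",
    "subscriptionBudget", "subscriptionTags", "resourceTags",
}

def check_parameter_file(text: str) -> list:
    """Return the sorted list of required top-level parameters missing from the file."""
    doc = json.loads(text)
    params = doc.get("parameters", {})
    return sorted(REQUIRED_PARAMETERS - params.keys())

# A fragment missing everything except securityCenter:
sample = '{"parameters": {"securityCenter": {"value": {}}}}'
print(check_parameter_file(sample))
```

A check like this catches a misspelled or omitted parameter name before the pipeline ever calls `az deployment`.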
## Example Deployment Parameters
This example configures:
1. Service Health Alerts
2. Azure Security Center
3. Subscription Role Assignments using built-in and custom roles
4. Subscription Budget with $1000
5. Subscription Tags
6. Resource Tags (aligned to the default tags defined in [Policies](../../policy/custom/definitions/policyset/Tags.parameters.json))
7. Automation Account
8. Spoke Virtual Network with Hub-managed DNS, Virtual Network Peering, 4 required subnets (zones) and 1 additional subnet `web`.
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "serviceHealthAlerts": {
      "value": {
        "resourceGroupName": "pubsec-service-health",
        "incidentTypes": [ "Incident", "Security" ],
        "regions": [ "Global", "Canada East", "Canada Central" ],
        "receivers": {
          "app": [ "alzcanadapubsec@microsoft.com" ],
          "email": [ "alzcanadapubsec@microsoft.com" ],
          "sms": [ { "countryCode": "1", "phoneNumber": "5555555555" } ],
          "voice": [ { "countryCode": "1", "phoneNumber": "5555555555" } ]
        },
        "actionGroupName": "Sub1 ALZ action group",
        "actionGroupShortName": "sub1-alert",
        "alertRuleName": "Sub1 ALZ alert rule",
        "alertRuleDescription": "Alert rule for Azure Landing Zone"
      }
    },
    "securityCenter": {
      "value": {
        "email": "alzcanadapubsec@microsoft.com",
        "phone": "5555555555"
      }
    },
    "subscriptionRoleAssignments": {
      "value": [
        {
          "comments": "Built-in Role: Contributor",
          "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
          "securityGroupObjectIds": [
            "38f33f7e-a471-4630-8ce9-c6653495a2ee"
          ]
        },
        {
          "comments": "Custom Role: Landing Zone Application Owner",
          "roleDefinitionId": "b4c87314-c1a1-5320-9c43-779585186bcc",
          "securityGroupObjectIds": [
            "38f33f7e-a471-4630-8ce9-c6653495a2ee"
          ]
        }
      ]
    },
    "subscriptionBudget": {
      "value": {
        "createBudget": true,
        "name": "MonthlySubscriptionBudget",
        "amount": 1000,
        "timeGrain": "Monthly",
        "contactEmails": [
          "alzcanadapubsec@microsoft.com"
        ]
      }
    },
    "subscriptionTags": {
      "value": {
        "ISSO": "isso-tag"
      }
    },
    "resourceTags": {
      "value": {
        "ClientOrganization": "client-organization-tag",
        "CostCenter": "cost-center-tag",
        "DataSensitivity": "data-sensitivity-tag",
        "ProjectContact": "project-contact-tag",
        "ProjectName": "project-name-tag",
        "TechnicalContact": "technical-contact-tag"
      }
    },
    "resourceGroups": {
      "value": {
        "automation": "automation-rg",
        "networking": "vnet-rg",
        "networkWatcher": "NetworkWatcherRG"
      }
    },
    "automation": {
      "value": {
        "name": "automation"
      }
    },
    "hubNetwork": {
      "value": {
        "virtualNetworkId": "/subscriptions/ed7f4eed-9010-4227-b115-2a5e37728f27/resourceGroups/pubsec-hub-networking-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet",
        "rfc1918IPRange": "10.18.0.0/22",
        "rfc6598IPRange": "100.60.0.0/16",
        "egressVirtualApplianceIp": "10.18.1.4"
      }
    },
    "network": {
      "value": {
        "deployVnet": true,
        "peerToHubVirtualNetwork": true,
        "useRemoteGateway": false,
        "name": "vnet",
        "dnsServers": [
          "10.18.1.4"
        ],
        "addressPrefixes": [
          "10.2.0.0/16"
        ],
        "subnets": {
          "oz": {
            "comments": "Foundational Elements Zone (OZ)",
            "name": "oz",
            "addressPrefix": "10.2.1.0/25",
            "nsg": {
              "enabled": true
            },
            "udr": {
              "enabled": true
            }
          },
          "paz": {
            "comments": "Presentation Zone (PAZ)",
            "name": "paz",
            "addressPrefix": "10.2.2.0/25",
            "nsg": {
              "enabled": true
            },
            "udr": {
              "enabled": true
            }
          },
          "rz": {
            "comments": "Application Zone (RZ)",
            "name": "rz",
            "addressPrefix": "10.2.3.0/25",
            "nsg": {
              "enabled": true
            },
            "udr": {
              "enabled": true
            }
          },
          "hrz": {
            "comments": "Data Zone (HRZ)",
            "name": "hrz",
            "addressPrefix": "10.2.4.0/25",
            "nsg": {
              "enabled": true
            },
            "udr": {
              "enabled": true
            }
          },
          "optional": [
            {
              "comments": "App Service",
              "name": "appservice",
              "addressPrefix": "10.2.5.0/25",
              "nsg": {
                "enabled": false
              },
              "udr": {
                "enabled": false
              },
              "delegations": {
                "serviceName": "Microsoft.Web/serverFarms"
              }
            }
          ]
        }
      }
    }
  }
}
```
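As a sanity check on the network parameters in the example, the address plan can be verified with Python's standard `ipaddress` module: every zone subnet should fall within the spoke's address space, and no two subnets should overlap. This is an illustrative sketch, not part of the reference implementation.

```python
import ipaddress

# Address plan taken from the example parameter file: spoke vnet and its zone subnets.
vnet = ipaddress.ip_network("10.2.0.0/16")
subnets = {
    "oz": "10.2.1.0/25",
    "paz": "10.2.2.0/25",
    "rz": "10.2.3.0/25",
    "hrz": "10.2.4.0/25",
    "appservice": "10.2.5.0/25",
}
nets = {name: ipaddress.ip_network(prefix) for name, prefix in subnets.items()}

# Every subnet must sit inside the vnet address space...
assert all(net.subnet_of(vnet) for net in nets.values())

# ...and no two subnets may overlap.
names = list(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not nets[a].overlaps(nets[b]), f"{a} overlaps {b}"

print("address plan is consistent")
```

Running a check like this before deployment avoids ARM validation failures caused by overlapping or out-of-range subnet prefixes.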
## Deployment Instructions
> Use the [Onboarding Guide for Azure DevOps](../../ONBOARDING_GUIDE_ADO.md) to configure the `subscription` pipeline. This pipeline will deploy workload archetypes such as Generic Subscription.
Parameter files for archetype deployment are configured in [config/subscription folder](../../config/subscriptions). The directory hierarchy comprises the following elements, from this directory downward:
1. An environment directory named for the Azure DevOps Org and Git Repo branch name, e.g. 'CanadaESLZ-main'.
2. The management group hierarchy defined for your environment, e.g. pubsec/Platform/LandingZone/Prod. The location of the config file represents which Management Group the subscription is a member of.
For example, if your Azure DevOps organization name is 'CanadaESLZ', you have two Git Repo branches named 'main' and 'dev', and you have top level management group named 'pubsec' with the standard structure, then your path structure would look like this:
```
/config/subscriptions
  /CanadaESLZ-main        <- Your environment, e.g. CanadaESLZ-main, CanadaESLZ-dev, etc.
    /pubsec               <- Your top level management root group name
      /LandingZones
        /Prod
          /xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_generic-subscription.json
```
The JSON config file name is in one of the following two formats:
- [AzureSubscriptionGUID]\_[TemplateName].json
- [AzureSubscriptionGUID]\_[TemplateName]\_[DeploymentLocation].json
The subscription GUID is needed by the pipeline; since it's not available in the file contents, it is specified in the config file name.
The template name/type is a text fragment corresponding to a path name (or part of a path name) under the '/landingzones' top level path. It indicates which Bicep templates to run on the subscription. For example, the generic subscription path is `/landingzones/lz-generic-subscription`, so we remove the `lz-` prefix and use `generic-subscription` to specify this type of landing zone.
The deployment location is the short name of an Azure deployment location, which may be used to override the `deploymentRegion` YAML variable. The allowable values can be determined by looking at the `Name` column output of the command: `az account list-locations -o table`.
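The naming convention described above can be parsed mechanically. Below is an illustrative Python sketch; the function name, return shape, and sample GUID are hypothetical and not part of the pipeline:

```python
import re

# Config file naming convention described above:
#   [AzureSubscriptionGUID]_[TemplateName].json
#   [AzureSubscriptionGUID]_[TemplateName]_[DeploymentLocation].json
GUID = r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
PATTERN = re.compile(rf"^({GUID})_([a-z0-9-]+?)(?:_([a-z0-9]+))?\.json$")

def parse_config_name(filename: str) -> dict:
    """Split a config file name into subscription GUID, template name, and optional location."""
    m = PATTERN.match(filename)
    if not m:
        raise ValueError(f"unexpected config file name: {filename}")
    subscription_id, template, location = m.groups()
    return {"subscription": subscription_id, "template": template, "location": location}

# Illustrative file name (made-up GUID):
info = parse_config_name("7bd063d1-0f7f-4a29-ab31-d4a91e1873a4_generic-subscription_canadacentral.json")
print(info["template"], info["location"])
```

The template charset excludes `_`, which is what lets the regex distinguish the template name from the optional trailing deployment location.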


@@ -0,0 +1,475 @@
# Archetype: Healthcare
## Table of Contents
* [Overview](#overview)
* [Data Flow](#data-flow)
* [Access Control](#access-control)
* [Networking and Security Configuration](#networking-and-security-configuration)
* [Customer Managed Keys](#customer-managed-keys)
* [Secrets](#secrets)
* [Logging](#logging)
* [Testing](#testing)
* [Schema Definition](#schema-definition)
* [Example Deployment Parameters](#example-deployment-parameters)
* [Deployment Instructions](#deployment-instructions)
## Overview
Teams can request subscriptions from CloudOps team with up to Owner permissions for **Healthcare workloads**, thus democratizing access to deploy, configure, and manage their applications with limited involvement from CloudOps team. CloudOps team can choose to limit the permission using custom roles as deemed appropriate based on risk and requirements.
Azure Policies are used to provide governance, compliance and protection while enabling teams to use their preferred toolset to consume Azure services.
![Archetype: Healthcare](../media/architecture/archetype-healthcare.jpg)
**CloudOps team will be required for**
1. Establishing connectivity to Hub virtual network (required for egress traffic flow & Azure Bastion).
2. Creating App Registrations (required for service principal accounts). This step is only required when App Registration creation is disabled for all users.
**Workflow**
* A new subscription is created through existing process (either via ea.azure.com or Azure Portal).
* The subscription will automatically be assigned to the **pubsecSandbox** management group.
* CloudOps will create a Service Principal Account (via App Registration) that will be used for future DevOps automation.
* CloudOps will scaffold the subscription with baseline configuration.
* CloudOps will hand over the subscription to requesting team.
**Subscription Move**
A subscription can be moved to a target Management Group through Azure ARM Templates/Bicep. Subscription moves have been incorporated into the landing zone Azure DevOps Pipeline automation.
**Capabilities**
| Capability | Description |
| --- | --- |
| Service Health Alerts | Configures Service Health alerts such as Security, Incident, Maintenance. Alerts are configured with email, sms and voice notifications. |
| Azure Security Center | Configures security contact information (email and phone). |
| Subscription Role Assignments | Configures subscription scoped role assignments. Roles can be built-in or custom. |
| Subscription Budget | Configures a monthly subscription budget with email notification. By default, the budget is created for a 10-year period with the configured amount. |
| Subscription Tags | A set of tags that are assigned to the subscription. |
| Resource Tags | A set of tags that are assigned to the resource group and resources. These tags must include all required tags as defined in the Tag Governance policy. |
| Automation | Deploys an Azure Automation Account in each subscription. |
| Hub Networking | Configures virtual network peering to the Hub Network, which is required for egress traffic flow and hub-managed DNS resolution (on-premises or other spokes, private endpoints). |
| Networking | A spoke virtual network with a minimum of 4 zones: oz (Operational Zone), paz (Public Access Zone), rz (Restricted Zone), hrz (Highly Restricted Zone). Additional subnets can be configured at deployment time using configuration (see below). |
| Key Vault | Deploys a spoke managed Azure Key Vault instance that is used for key and secret management. |
| SQL Database | Deploys Azure SQL Database. Optional. |
| Azure Data Lake Store Gen 2 | Deploys an Azure Data Lake Gen 2 instance with hierarchical namespace. *There aren't any parameters for customization.* |
| Synapse Analytics | Deploys Synapse Analytics instance. |
| Azure Machine Learning | Deploys Azure Machine Learning Service. *There aren't any parameters for customization.* |
| Azure Databricks | Deploys an Azure Databricks instance. *There aren't any parameters for customization.* |
| Azure Data Factory | Deploys an Azure Data Factory instance with Managed Virtual Network and Managed Integrated Runtime. *There aren't any parameters for customization.* |
| Azure Container Registry | Deploys an Azure Container Registry to store machine learning models as container images. ACR is used when deploying pods to AKS. *There aren't any parameters for customization.* |
| Azure API for FHIR | Deploys Azure API for FHIR with FHIR-R4. *There aren't any parameters for customization.* |
| Azure Functions | Deploys Azure Functions. *There aren't any parameters for customization.* |
| Azure Stream Analytics | Deploys Stream Analytics instance for streaming scenarios. *There aren't any parameters for customization.* |
| Azure Event Hub | Deploys Azure Event Hub for stream scenarios. *There aren't any parameters for customization.* |
| Application Insights | Deploys an Application Insights instance that is used by Azure Machine Learning instance. *There aren't any parameters for customization.* |
## Data Flow
**Azure Services circled on the diagram are deployed in this archetype.**
![Data Flow](../media/architecture/archetype-healthcare-dataflow.jpg)
| Category | Service | Configuration | Reference |
| --- | --- | --- | --- |
| Storage | Azure Data Lake Gen 2 - Cloud storage enabling big data analytics. | Hierarchical namespace enabled. Optional – Customer Managed Keys. | [Azure Docs](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-introduction)
| Compute | Azure Databricks - Managed Spark cloud platform for data analytics and data science | Premium tier; Secured Cluster Connectivity enabled with load balancer for egress. | [Azure Docs](https://docs.microsoft.com/azure/databricks/scenarios/what-is-azure-databricks) |
| Compute | Azure Synapse - End-to-end cloud analytics and data warehousing platform. | Disabled public network access by default. Managed Private Endpoints for Compute & Synapse Studio. Optional – Customer Managed Keys. | [Managed Private Endpoints](https://docs.microsoft.com/azure/synapse-analytics/security/synapse-workspace-managed-private-endpoints) / [Connect to Synapse Studio with private links](https://docs.microsoft.com/azure/synapse-analytics/security/synapse-private-link-hubs)
| Compute | FHIR API - Fast Healthcare Interoperability Resources (FHIR) for healthcare data exchange. | Private endpoint by default. | [Azure Docs](https://docs.microsoft.com/azure/healthcare-apis/fhir/) |
| Compute | Azure Stream Analytics - Real-time analytics and event-processing engine for processing high volumes of fast streaming data from multiple sources simultaneously. | N/A | [Azure Docs](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-introduction)
| Compute | Azure Function App - Serverless computing service | Virtual Network Integration for accessing resources in virtual network. | [Azure Docs](https://docs.microsoft.com/azure/azure-functions/functions-overview)
| Ingestion | Azure Data Factory - Managed cloud service for data integration and orchestration | Managed virtual network. Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/data-factory/introduction) |
| Ingestion | Event Hub - Data streaming platform and event ingestion service | N/A | [Azure Docs](https://docs.microsoft.com/azure/event-hubs/event-hubs-about)
| Machine learning and deployment | Azure Machine Learning - Cloud platform for end-to-end machine learning workflows | Optional – Customer Managed Keys, High Business Impact Workspace | [Azure Docs](https://docs.microsoft.com/azure/machine-learning/overview-what-is-azure-ml) |
| Machine learning and deployment | Azure Container Registry - Managed private Docker cloud registry | Premium SKU. Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/container-registry/container-registry-intro) |
| SQL Storage | Azure SQL Database - Fully managed cloud database engine | Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/azure-sql/database/sql-database-paas-overview) |
| Key Management | Azure Key Vault - Centralized cloud storage of secrets and keys | Private Endpoint | [Azure Docs](https://docs.microsoft.com/azure/key-vault/general/overview)
| Monitoring | Application Insights - Application performance and monitoring cloud service | - | [Azure Docs](https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview)
The intended cloud service workflows and data movements for this archetype include:
1. Data can be ingested from data sources using Data Factory with managed virtual network for its Azure hosted integration runtime
2. Streaming data can be ingested using Event Hub and Stream Analytics
3. The data would be stored in Azure Data Lake Gen 2.
4. Healthcare providers can connect to existing data sources with FHIR API.
5. Data engineering and transformation tasks can be done with Spark using Azure Databricks. Transformed data would be stored back in the data lake.
6. End to end analytics and data warehousing can be done with Azure Synapse Analytics.
7. Machine learning would be done using Azure Machine Learning.
8. Monitoring and logging would be through Application Insights.
## Access Control
Once the healthcare archetype is deployed and available to use, access control best practices should be applied. Below is the recommended set of security groups & their respective Azure role assignments. This is not an exhaustive list and can be updated as required.
**Replace `PROJECT_NAME` placeholder in the security group names with the appropriate project name for the workload**.
| Security Group | Azure Role | Notes |
| --- | --- | --- |
| SG_PROJECT_NAME_ADMIN | Subscription with `Owner` role. | Admin group for subscription. |
| SG_PROJECT_NAME_READ | Subscription with `Reader` role. | Reader group for subscription. |
| SG_PROJECT_NAME_DATA_PROVIDER | Data Lake (main storage account) service with `Storage Blob Data Contributor` role. Key Vault service with `Key Vault Secrets User`. | Data group with access to data as well as key vault secrets usage.
| SG_PROJECT_NAME_DATA_SCIENCE | Azure ML service with `Contributor` role. Azure Databricks service with `Contributor` role. Key Vault service with `Key Vault Secrets User`. | Data science group with compute access as well as key vault secrets usage. |
## Networking and Security Configuration
![Networking](../media/architecture/archetype-healthcare-networking.jpg)
| Service Name | Settings | Private Endpoints / DNS | Subnet(s)
| --- | --- | --- | --- |
| Azure Key Vault | Network ACL Deny | Private endpoint on `vault` + DNS registration to either hub or spoke | `privateEndpoints`
| SQL Database | N/A | Private endpoint on `sqlserver` + DNS registration to either hub or spoke | `privateEndpoints`
| Azure Data Lake Gen 2 | Network ACL deny | Private endpoint on `blob`, `dfs` + DNS registration to either hub or spoke | `privateEndpoints`
| Synapse | Disabled public network access; managed virtual network | Managed Private Endpoints & Synapse Studio Private Link Hub. Private endpoint DNS registration. | `privateEndpoints` |
| Azure Databricks | No public IP enabled (secure cluster connectivity), load balancer for egress with IP and outbound rules, virtual network injection | N/A | `databricksPrivate`, `databricksPublic`
| Azure Machine Learning | No public workspace access | Private endpoint on `amlWorkspace` + DNS registration to either hub or spoke | `privateEndpoints`
| Azure Storage Account for Azure ML | Network ACL deny | Private endpoint on `blob`, `file` + DNS registration to either hub or spoke | `privateEndpoints`
| Azure Data Factory | Public network access disabled, Azure integration runtime with managed virtual network | Private endpoint on `dataFactory` + DNS registration to either hub or spoke | `privateEndpoints`
| FHIR API | N/A | Private endpoint on `fhir` + DNS registration to either hub or spoke | `privateEndpoints`
| Event Hub | N/A | Private endpoint on `namespace` + DNS registration to either hub or spoke | `privateEndpoints`
| Function App | Virtual Network Integration | N/A | `web` |
| Azure Container Registry | Network ACL deny, public network access disabled | Private endpoint on `registry` + DNS registration to either hub or spoke | `privateEndpoints`
| Azure Application Insights | N/A | N/A | N/A |
This archetype also has the following security features as options for deployment:
* Customer managed keys for encryption at rest, including Azure Machine Learning, storage, Container Registry, Data Factory, SQL Database, Synapse Analytics and Kubernetes Service.
* Azure ML can enable a high business impact workspace, which controls the amount of data Microsoft collects for diagnostic purposes.
## Customer Managed Keys
To enable customer-managed key scenarios, some services including Azure Storage Account and Azure Container Registry require deployment scripts to run with a user-assigned identity to enable encryption key on the respective instances.
Therefore, when the `useCMK` parameter is `true`, a deployment identity is created and assigned `Owner` role to the compute and storage resource groups to run the deployment scripts as needed. Once the services are provisioned with customer-managed keys, the role assignments are automatically deleted.
If customer-managed key is required for the FHIR API, a separate Key Vault with access policy permission model is required.
The artifacts created by the deployment script such as Azure Container Instance and Storage accounts will be automatically deleted 1 hour after completion.
## Secrets
Temporary passwords are autogenerated, and connection strings are automatically stored as secrets in Key Vault. They include:
* SQL Database username, password, and connection string
* Synapse username and password
## Logging
Azure Policy will enable diagnostic settings for all PaaS components in the healthcare archetype, and the logs will be sent to the centralized log analytics workspace. These policies are configured at the management group scope and are not explicitly deployed.
## Testing
Test scripts are provided to verify end to end integration. These tests are not automated, so minor modifications are needed to set them up and run them.
The test scripts are located in [tests/landingzones/lz-healthcare/e2e-flow-tests](../../tests/landingzones/lz-healthcare/e2e-flow-tests)
The scripts are:
1. Azure ML Key Vault integration test
2. Azure ML terminal connection to ACR test
3. Databricks integration with Key Vault Data Lake test
4. Synapse integration tests for SQL Serverless, Spark, and SQL DW (dedicated) Pools
**Considerations for testing Azure Data Factory and Synapse using managed virtual networks**

* Data Factory - in order to test connectivity to the data lake, ensure a managed private endpoint is set up along with interactive authoring enabled
* Synapse Analytics
  * Pipeline - ensure a managed private endpoint is set up along with interactive authoring enabled to test connectivity to the data lake
  * SQL Serverless connectivity to data lake:
    * The default connectivity is user identity passthrough; the user should therefore have the Storage Blob Data Contributor role on the data lake
    * Alternatively, the Synapse managed identity can be used; the landing zone deployment automatically grants this identity the Storage Blob Data Contributor role on the data lake
    * Upload some data to the default ADLS Gen 2 of Synapse
    * Run the integration tests for Synapse SQL Serverless Pool
  * Spark pool connectivity to data lake
    * Ensure the user has the Storage Blob Data Contributor role for the data lake
    * Upload some data to the default ADLS Gen 2 of Synapse
    * Run the integration tests for Synapse Spark Pool
  * Dedicated SQL (SQL Data Warehouse)
    * Ensure the user identity has a SQL Login (e.g. the admin user could be assigned as the SQL AD admin)
    * Upload some data to the default ADLS Gen 2 of Synapse
    * Run the integration tests for Synapse SQL Dedicated Pool (DW)
### Test Scenarios
**Azure ML SQL / Key vault test**
1. Access the ML landing zone network and log into Azure ML through https://ml.azure.com
2. Set up a compute instance and create a new notebook to run Python notebook
3. Use the provided test script to test connection to Key Vault by retrieving the SQL password
4. Create a datastore connecting to SQL DB
5. Create a dataset connecting to a table in SQL DB
6. Use the provided dataset consume code to verify connectivity to SQL DB
**Azure ML terminal connection to ACR test**
1. Access the ML landing zone network and log into Azure ML through https://ml.azure.com
2. Set up a compute instance and use its built-in terminal
3. Use the provided test script to pull a hello-world Docker image and push it to ACR
**Databricks integration tests**
1. Access Azure Databricks workspace
2. Create a new compute cluster
3. Create a new Databricks notebook in the workspace and copy in the integration test script
4. Run the test script to verify connectivity to Key Vault, SQL DB/MI, and data lake
**Azure ML deployment test**
1. Access the ML network and log into Azure ML through https://ml.azure.com
2. Set up a compute instance and import the provided tests to the workspace
3. Run the test script, which will build a Docker Azure ML model image, push it to ACR, and then have AKS pull and run the ML model
## Schema Definition
Reference implementation uses parameter files with `object` parameters to consolidate parameters based on their context. The schema types are:
* v0.1.0
  * [Spoke deployment parameters definition](../../schemas/v0.1.0/landingzones/lz-healthcare.json)
* Common types
  * [Service Health Alerts](../../schemas/v0.1.0/landingzones/types/serviceHealthAlerts.json)
  * [Azure Security Center](../../schemas/v0.1.0/landingzones/types/securityCenter.json)
  * [Subscription Role Assignments](../../schemas/v0.1.0/landingzones/types/subscriptionRoleAssignments.json)
  * [Subscription Budget](../../schemas/v0.1.0/landingzones/types/subscriptionBudget.json)
  * [Subscription Tags](../../schemas/v0.1.0/landingzones/types/subscriptionTags.json)
  * [Resource Tags](../../schemas/v0.1.0/landingzones/types/resourceTags.json)
* Spoke types
  * [Automation](../../schemas/v0.1.0/landingzones/types/automation.json)
  * [Hub Network](../../schemas/v0.1.0/landingzones/types/hubNetwork.json)
  * [Azure Key Vault](../../schemas/v0.1.0/landingzones/types/keyVault.json)
  * [Azure SQL Database](../../schemas/v0.1.0/landingzones/types/sqldb.json)
  * [Azure Synapse Analytics](../../schemas/v0.1.0/landingzones/types/synapse.json)
## Example Deployment Parameters
This example configures:
1. Service Health Alerts
2. Azure Security Center
3. Subscription Role Assignments using built-in and custom roles
4. Subscription Budget with $1000
5. Subscription Tags
6. Resource Tags (aligned to the default tags defined in [Policies](../../policy/custom/definitions/policyset/Tags.parameters.json))
7. Automation Account
8. Spoke Virtual Network with Hub-managed DNS, Hub-managed private endpoint DNS Zones, Virtual Network Peering and all required subnets (zones).
9. Deploys Azure resources with Customer Managed Keys.
> **Note 1:** Azure Automation Account is not deployed with Customer Managed Key as it requires an Azure Key Vault instance with public network access.
> **Note 2:** All secrets stored in Azure Key Vault will have a 10-year expiration (configurable) & all RSA Keys (used for CMK) will not have an expiration.
```json
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceHealthAlerts": {
"value": {
"resourceGroupName": "pubsec-service-health",
"incidentTypes": [ "Incident", "Security" ],
"regions": [ "Global", "Canada East", "Canada Central" ],
"receivers": {
"app": [ "alzcanadapubsec@microsoft.com" ],
"email": [ "alzcanadapubsec@microsoft.com" ],
"sms": [ { "countryCode": "1", "phoneNumber": "5555555555" } ],
"voice": [ { "countryCode": "1", "phoneNumber": "5555555555" } ]
},
"actionGroupName": "Sub2 ALZ action group",
"actionGroupShortName": "sub2-alert",
"alertRuleName": "Sub2 ALZ alert rule",
"alertRuleDescription": "Alert rule for Azure Landing Zone"
}
},
"securityCenter": {
"value": {
"email": "alzcanadapubsec@microsoft.com",
"phone": "5555555555"
}
},
"subscriptionRoleAssignments": {
"value": [
{
"comments": "Built-in Role: Contributor",
"roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
"securityGroupObjectIds": [
"38f33f7e-a471-4630-8ce9-c6653495a2ee"
]
},
{
"comments": "Custom Role: Landing Zone Application Owner",
"roleDefinitionId": "b4c87314-c1a1-5320-9c43-779585186bcc",
"securityGroupObjectIds": [
"38f33f7e-a471-4630-8ce9-c6653495a2ee"
]
}
]
},
"subscriptionBudget": {
"value": {
"createBudget": true,
"name": "MonthlySubscriptionBudget",
"amount": 1000,
"timeGrain": "Monthly",
"contactEmails": [
"alzcanadapubsec@microsoft.com"
]
}
},
"subscriptionTags": {
"value": {
"ISSO": "isso-tag"
}
},
"resourceTags": {
"value": {
"ClientOrganization": "client-organization-tag",
"CostCenter": "cost-center-tag",
"DataSensitivity": "data-sensitivity-tag",
"ProjectContact": "project-contact-tag",
"ProjectName": "project-name-tag",
"TechnicalContact": "technical-contact-tag"
}
},
"resourceGroups": {
"value": {
"automation": "healthcare-Automation",
"compute": "healthcare-Compute",
"monitor": "healthcare-Monitor",
"networking": "healthcare-Network",
"networkWatcher": "NetworkWatcherRG",
"security": "healthcare-Security",
"storage": "healthcare-Storage"
}
},
"useCMK": {
"value": true
},
"keyVault": {
"value": {
"secretExpiryInDays": 3650
}
},
"automation": {
"value": {
"name": "healthcare-automation"
}
},
"sqldb": {
"value": {
"enabled": true,
"username": "azadmin"
}
},
"synapse": {
"value": {
"username": "azadmin"
}
},
"hubNetwork": {
"value": {
"virtualNetworkId": "/subscriptions/ed7f4eed-9010-4227-b115-2a5e37728f27/resourceGroups/pubsec-hub-networking-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet",
"rfc1918IPRange": "10.18.0.0/22",
"rfc6598IPRange": "100.60.0.0/16",
"egressVirtualApplianceIp": "10.18.1.4",
"privateDnsManagedByHub": true,
"privateDnsManagedByHubSubscriptionId": "ed7f4eed-9010-4227-b115-2a5e37728f27",
"privateDnsManagedByHubResourceGroupName": "pubsec-dns-rg"
}
},
"network": {
"value": {
"peerToHubVirtualNetwork": true,
"useRemoteGateway": false,
"name": "healthcare-vnet",
"dnsServers": [
"10.18.1.4"
],
"addressPrefixes": [
"10.5.0.0/16"
],
"subnets": {
"oz": {
"comments": "Foundational Elements Zone (OZ)",
"name": "oz",
"addressPrefix": "10.5.1.0/25"
},
"paz": {
"comments": "Presentation Zone (PAZ)",
"name": "paz",
"addressPrefix": "10.5.2.0/25"
},
"rz": {
"comments": "Application Zone (RZ)",
"name": "rz",
"addressPrefix": "10.5.3.0/25"
},
"hrz": {
"comments": "Data Zone (HRZ)",
"name": "hrz",
"addressPrefix": "10.5.4.0/25"
},
"databricksPublic": {
"comments": "Databricks Public Delegated Subnet",
"name": "databrickspublic",
"addressPrefix": "10.5.5.0/25"
},
"databricksPrivate": {
"comments": "Databricks Private Delegated Subnet",
"name": "databricksprivate",
"addressPrefix": "10.5.6.0/25"
},
"privateEndpoints": {
"comments": "Private Endpoints Subnet",
"name": "privateendpoints",
"addressPrefix": "10.5.7.0/25"
},
"web": {
"comments": "Azure Web App Delegated Subnet",
"name": "webapp",
"addressPrefix": "10.5.8.0/25"
}
}
}
}
}
}
```
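The `network` object above can be sanity-checked before deployment. The following is a hedged sketch (not part of the reference implementation) that verifies every subnet prefix in the example fits inside the virtual network's address space and that no two subnets overlap:

```python
import ipaddress

# Values from the example deployment parameters above
vnet = ipaddress.ip_network("10.5.0.0/16")
subnets = {
    "oz": "10.5.1.0/25",
    "paz": "10.5.2.0/25",
    "rz": "10.5.3.0/25",
    "hrz": "10.5.4.0/25",
    "databricksPublic": "10.5.5.0/25",
    "databricksPrivate": "10.5.6.0/25",
    "privateEndpoints": "10.5.7.0/25",
    "web": "10.5.8.0/25",
}

networks = {name: ipaddress.ip_network(prefix) for name, prefix in subnets.items()}

# Every subnet must sit inside the VNet address space
assert all(net.subnet_of(vnet) for net in networks.values())

# No two subnets may overlap
names = list(networks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not networks[a].overlaps(networks[b]), f"{a} overlaps {b}"

print("subnet layout OK")
```

The same check applies when you replace the example ranges with your own address plan.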
## Deployment Instructions
> Use the [Onboarding Guide for Azure DevOps](../../ONBOARDING_GUIDE_ADO.md) to configure the `subscription` pipeline. This pipeline will deploy workload archetypes such as Healthcare.
Parameter files for archetype deployment are configured in the [config/subscription folder](../../config/subscriptions). The directory hierarchy consists of the following elements, from this directory downward:
1. An environment directory named for the Azure DevOps Org and Git Repo branch name, e.g. 'CanadaESLZ-main'.
2. The management group hierarchy defined for your environment, e.g. pubsec/Platform/LandingZone/Prod. The location of the config file represents which Management Group the subscription is a member of.
For example, if your Azure DevOps organization name is 'CanadaESLZ', you have two Git Repo branches named 'main' and 'dev', and you have top level management group named 'pubsec' with the standard structure, then your path structure would look like this:
```
/config/subscriptions
/CanadaESLZ-main <- Your environment, e.g. CanadaESLZ-main, CanadaESLZ-dev, etc.
/pubsec <- Your top level management root group name
/LandingZones
/Prod
/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_healthcare.json
```
The JSON config file name is in one of the following two formats:
- [AzureSubscriptionGUID]\_[TemplateName].json
- [AzureSubscriptionGUID]\_[TemplateName]\_[DeploymentLocation].json
The subscription GUID is needed by the pipeline; since it's not available in the file contents, it is specified in the config file name.
The template name/type is a text fragment corresponding to a path name (or part of a path name) under the '/landingzones' top level path. It indicates which Bicep templates to run on the subscription. For example, the healthcare path is `/landingzones/lz-healthcare`, so we remove the `lz-` prefix and use `healthcare` to specify this type of landing zone.
The deployment location is the short name of an Azure deployment location, which may be used to override the `deploymentRegion` YAML variable. The allowable values can be determined from the `Name` column output of the command: `az account list-locations -o table`.
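The naming convention can be parsed mechanically. As an illustrative sketch (the regular expression and helper below are not part of the pipeline), the following extracts the subscription GUID, template name, and optional deployment location from a config file name:

```python
import re

# [AzureSubscriptionGUID]_[TemplateName].json  or
# [AzureSubscriptionGUID]_[TemplateName]_[DeploymentLocation].json
CONFIG_NAME = re.compile(
    r"^(?P<guid>[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12})"
    r"_(?P<template>[^_]+)"
    r"(?:_(?P<location>[^_]+))?\.json$"
)

def parse_config_name(filename: str):
    """Split a subscription config file name into (guid, template, location)."""
    m = CONFIG_NAME.match(filename)
    if not m:
        raise ValueError(f"unrecognized config file name: {filename}")
    return m.group("guid"), m.group("template"), m.group("location")

print(parse_config_name("ed7f4eed-9010-4227-b115-2a5e37728f27_healthcare.json"))
```

When the optional location segment is absent, the pipeline would fall back to the `deploymentRegion` YAML variable.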
# Archetype: Hub Networking with Azure Firewall
## Table of Contents
* [Overview](#overview)
* [Hub Virtual Network](#hub-virtual-network)
* [Management Restricted Zone Virtual Network](#management-restricted-zone-virtual-network)
* [Shared Public Access Zone subnet in the Hub](#shared-public-access-zone-subnet-in-the-hub)
* [User Defined Routes](#user-defined-routes)
* [Network Security Groups](#network-security-groups)
* [Required Routes](#required-routes)
* [Azure Firewall Rules](#azure-firewall-rules)
* [Log Analytics Integration](#log-analytics-integration)
## Overview
The recommended network design achieves the purpose of hosting [**Protected B** workloads on Profile 3][cloudUsageProfiles] (cloud only). This is a simplified network design given that all ingress and egress traffic traverses the same VIP.
![Hub Networking with Azure Firewall](../media/architecture/hubnetwork-azfw/hubnetwork-azfw-design.jpg)
* Cloud network topology based on proven **hub-and-spoke design**.
* Hub contains a single instance of Azure Firewall and Azure Firewall Policy.
* The firewalls have one interface in the Public Network (uses [RFC 6598][rfc6598] IPs for future use with GCnet).
* The hub contains a subnet acting as a public access zone (PAZ, using [RFC 6598][rfc6598] space) where service delivery occurs (i.e. web application delivery), either dedicated to a line of business workload or as a shared system. When using Azure Application Gateway, the subnet will be for its exclusive use.
* Hub links to a spoke MRZ Virtual Network (Management Restricted Zone) for management, security, and shared infrastructure purposes (i.e. Domain Controllers, Secure Jumpbox, Software Management, Log Relays, etc.).
* Spokes contain RZ (Restricted Zone) for line of business workloads, including dedicated PAZ (Public Access Zone), App RZ (Restricted Zone), and Data RZ (Data Restricted Zone).
* All ingress traffic traverses the hub's firewall, and all egress traffic to the internet is routed to the firewall for complete traffic inspection for virtual machines. PaaS services will have direct communication with the Azure control plane to avoid asymmetric routing.
* No public IPs are allowed in the landing zone spokes for virtual machines. Public IPs for landing zones are only allowed in the external area network (EAN). Azure Policy is in place to prevent Public IPs from being directly attached to Virtual Machine NICs.
* Spokes have network segmentation and security rules to filter East-West traffic, and Spoke-to-Spoke traffic is denied by default in the firewall.
* Most network operations in the spokes, as well as all operations in the hub, are centrally managed by the networking team.
* In this initial design, the hub is deployed to a single region; there is no BCDR plan yet.
Application Gateway with WAFv2 will be used for ingress traffic and application delivery. Application Gateways will be placed on the shared Public Access Zone (a subnet in the Hub), where public IPs will be protected with Azure DDoS (either Basic or Standard).
Other possible topologies are explained in [Azure documentation](https://docs.microsoft.com/azure/architecture/example-scenario/gateway/firewall-application-gateway) and we recommend reviewing them to ensure the topology aligns with your department's network design.
There will be at least one shared Application Gateway instance and multiple dedicated Application Gateways for those lines of business that require their own deployment (i.e. performance or cost allocation). All egress traffic from the spokes will be routed to the hub's edge firewall, inspected, and authorized/denied based on network (IP/Port) or application rules (FQDNs).
## IP Addresses
The network design requires 3 IP blocks:
* [RFC 1918][rfc1918] for Azure native-traffic (including IaaS and PaaS). Example: `10.18.0.0/16`
* [RFC 1918][rfc1918] for Azure Bastion. Example: `192.168.0.0/16`
* [RFC 6598][rfc6598] for department-to-department traffic through GCnet. Example: `100.60.0.0/16`
> This document will reference the example IP addresses above to illustrate network flow and configuration.
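A quick way to confirm the three example blocks can coexist without routing conflicts is to check them for overlap. This sketch uses the example ranges above:

```python
import ipaddress
from itertools import combinations

# The three example IP blocks from this design
blocks = {
    "azure-native (RFC 1918)": ipaddress.ip_network("10.18.0.0/16"),
    "bastion (RFC 1918)": ipaddress.ip_network("192.168.0.0/16"),
    "gcnet (RFC 6598)": ipaddress.ip_network("100.60.0.0/16"),
}

# Every pair of blocks must be disjoint
for (name_a, a), (name_b, b) in combinations(blocks.items(), 2):
    assert not a.overlaps(b), f"{name_a} overlaps {name_b}"

print("no overlapping address blocks")
```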
**Virtual Network Address Space**
![Hub Virtual Network Address Space](../media/architecture/hubnetwork-azfw/hubvnet-address-space.jpg)
## Hub Virtual Network
* Azure Firewall Premium instance configured with
* Either [forced tunneling](https://docs.microsoft.com/azure/firewall/forced-tunneling) (requires the next hop as another device such as NVA, on-premises or another Azure Firewall at the edge) or without forced tunneling. When forced tunneling is turned on, all management traffic will flow through the separate `AzureFirewallManagementSubnet` subnet.
* [DNS Proxy](https://docs.microsoft.com/azure/firewall/dns-details)
* [Threat Intelligence in Alert mode](https://docs.microsoft.com/azure/firewall/threat-intel)
* IDPS in Alert mode
* Azure Firewall Policy
* Base firewall rules to support spoke archetypes
* Azure Bastion
## Management Restricted Zone Virtual Network
* Management Access Zone (OZ) - to host any privileged access workstations (PAW), with Management Public IPs forwarded via the hub's firewall.
* Management (OZ) – hosting the management servers (domain controllers).
* Infrastructure (OZ) – hosting other common infrastructure, like file shares.
* Security Management (OZ) – hosting security, proxies and patching servers.
* Logging (OZ) – hosting logging relays.
* A User-Defined-Route forces all traffic to be sent to the Hub's firewall via the Internal Load Balancer in the Hub (this doesn't apply to Azure Bastion).
**MrzSpokeUdr Route Table**
![MrzSpokeUdr Route Table](../media/architecture/hubnetwork-azfw/mrz-udr.jpg)
## Shared Public Access Zone subnet in the Hub
To simplify management and compliance, all public-facing web servers, reverse proxies and application delivery controllers will be hosted in this subnet, as a sort of DMZ.
Application Gateway can have either public or private frontends (also with [RFC 6598][rfc6598] space) and it requires a full subnet for its instances.
The Backend URL should map to a VIP and port mapping in the firewall's External network. In the future, Backend URLs could be pointed directly at the Frontend subnets in the spoke. The firewall performs DNAT and forwards the traffic to the webserver, which answers to the source IP (Application Gateway's internal IP). This means the webserver may need a UDR that forces traffic destined to Application Gateway to re-traverse the firewall (next-hop), which is considered asymmetric routing ([other example topologies](https://docs.microsoft.com/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall)).
## User Defined Routes
All traffic is sent to the Hub's Azure Firewall VIP.
Azure supports connecting to PaaS services using [RFC 1918][rfc1918] private IPs, avoiding all traffic from the internet and only allowing connections from designated private endpoints as a special kind of NICs in the subnet of choice. Private DNS resolution must be implemented so the PaaS service URLs properly translate to the individual private IP of the private endpoint.
## Network Security Groups
Below is a list of requirements for the NSGs in each subnet:
* Hub Virtual Network
* PazSubnet – must follow [Application Gateway's guidelines][nsgAppGatewayV2]
* TCP ports 65200-65535 for the v2 SKU with the destination subnet as Any and source as GatewayManager service tag
* Defaults (Allow [AzureLoadBalancer][nsgAzureLoadBalancer])
* AzureBastionSubnet - See [documentation][nsgAzureBastion]
## Subnets and IP Ranges
> This section will reference the [example IP addresses](#ip-addresses) above to illustrate network flow and configuration.
We'll use as few IPs as possible for the Core, MRZ and PAZ virtual networks, and use a small CIDR for a `Generic Subscription archetype` that works as an example for other Prod and Dev workloads. However, it's key to remember that each line of business may have to get their own IP range ([RFC 1918][rfc1918]) and peer it to the Hub network without conflicts (i.e. with `10.18.0.0/16`).
We also use an [RFC 6598][rfc6598] range for external networking, which makes this design compatible with SCED requirements for future hybrid connectivity.
To leverage Azure Bastion as a shared service for all spoke virtual networks, we use a third IP range (outside of the [RFC 1918][rfc1918] and [RFC 6598][rfc6598] ranges).
### Core network (Firewall and future VPN/ExpressRoute Gateway)
**Hub Virtual Network Address Space**
![Hub Virtual Network Address Space](../media/architecture/hubnetwork-azfw/hubvnet-address-space.jpg)
**Subnets with Network Security Group & User Defined Routes**
![Hub Virtual Network Subnets](../media/architecture/hubnetwork-azfw/hubvnet-subnets.jpg)
| Hub Virtual Network - 10.18.0.0/22, 100.60.0.0/24, 192.168.0.0/16 | Function | IP block |
| --- | --- | --- |
| PAZSubnet | Shared Application Gateways | 100.60.1.0/24 |
| AzureFirewallSubnet | Data plane traffic. When forced tunneling is off, it is also used for management traffic. | 100.60.1.0/24 |
| AzureFirewallManagementSubnet | Management plane traffic. Only used when forced tunneling is on. | |
| AzureBastionSubnet | Azure Bastion | 192.168.0.0/24 |
| GatewaySubnet | Gateway Subnet | 10.18.0.0/27 |
### Management Restricted Zone Virtual Network (Spoke)
**MRZ Virtual Network Address Space**
![MRZ Virtual Network Address Space](../media/architecture/hubnetwork-azfw/mrzvnet-address-space.jpg)
**Subnets with Network Security Group & User Defined Routes**
![MRZ Virtual Network Subnets](../media/architecture/hubnetwork-azfw/mrzvnet-subnets.jpg)
| Management Restricted Zone Virtual Network - 10.18.4.0/22 | Function | IP block |
| --- | --- | --- |
| MazSubnet | Management (Access Zone) | 10.18.4.0/25 |
| InfSubnet | Infrastructure Services (Restricted Zone)| 10.18.4.128/25 |
| SecSubnet | Security Services (Restricted Zone) | 10.18.5.0/26 |
| LogSubnet | Logging Services (Restricted Zone) | 10.18.5.64/26 |
| MgmtSubnet | Core Management Interfaces | 10.18.4.128/26 |
### Example Spoke: Generic Subscription Archetype
| Spoke Virtual Network - 10.18.16.0/21 | Function | IP block |
| --- | --- | --- |
| oz-subnet | Internal Foundational Elements (OZ) | /25 |
| paz-subnet | Presentation Zone (PAZ) | /25 |
| rz-subnet | Application zone (RZ) | /25 |
| hrz-subnet | Data Zone (HRZ) | /25 |
## Required Routes
Routing rules are required to enforce the security controls needed to protect the workloads by centralizing all network flows through the Hub's firewall.
**Example: MrzSpokeUdr Route Table**
![MrzSpokeUdr Route Table](../media/architecture/hubnetwork-azfw/mrz-udr.jpg)
| UDR Name | Rules | Applied to | Comments |
| --- | --- | --- | --- |
| PrdSpokesUdr | `0.0.0.0/0`, `10.18.0.0/16` and `100.60.0.0/16` via Azure Firewall VIP. | All production spoke virtual networks. | Via peering, spokes learn static routes to reach any IP in the Hub. Hence, we override the Hub virtual network's IPs (10.18/16 and 100.60/16) and force traffic via Firewall. |
| DevSpokesUdr | Same as above. | All development spoke virtual networks. | Same as above. |
| MrzSpokeUdr | Same as above. | Mrz spoke virtual network | Same as above. |
| PazSubnetUdr | Same as above. | Shared PAZ subnet (Application Gateway) | Force traffic from Application Gateway to be sent via the Firewall VIP. |
## Azure Firewall Rules
Azure Firewall Rules are configured via Azure Firewall Policy. This allows for firewall rules to be updated without redeploying the Hub Networking elements including Azure Firewall instances.
> Firewall Rule definition is located at [landingzones/lz-platform-connectivity-hub-azfw/azfw-policy/azure-firewall-policy.bicep](../../landingzones/lz-platform-connectivity-hub-azfw/azfw-policy/azure-firewall-policy.bicep)
**Azure Firewall Policy - Rule Collections**
![Azure Firewall Policy - Rule Collections](../media/architecture/hubnetwork-azfw/azfw-policy-rulecollections.jpg)
**Azure Firewall Policy - Network Rules**
![Azure Firewall Policy - Network Rules](../media/architecture/hubnetwork-azfw/azfw-policy-network-rules.jpg)
**Azure Firewall Policy - Application Rules**
![Azure Firewall Policy - Application Rules](../media/architecture/hubnetwork-azfw/azfw-policy-app-rules.jpg)
## Log Analytics Integration
Azure Firewall forwards its logs to a Log Analytics Workspace. This integration is automatically configured through Azure Policy for Diagnostic Settings.
![Diagnostic Settings](../media/architecture/hubnetwork-azfw/azfw-diagnostic-settings.jpg)
Once Log Analytics Workspace has collected logs, [Azure Monitor Workbook for Azure Firewall](https://docs.microsoft.com/azure/firewall/firewall-workbook) can be used to monitor traffic flows.
Below are sample queries that can also be used to query Log Analytics Workspace directly.
**Sample Firewall Logs Query**
```kusto
AzureDiagnostics
| where Category contains "AzureFirewall"
| where msg_s contains "Deny"
| project TimeGenerated, msg_s
| order by TimeGenerated desc
```
![Sample Firewall Logs](../media/architecture/hubnetwork-azfw/azfw-logs-fw.jpg)
**Sample DNS Logs Query**
```kusto
AzureDiagnostics
| where Category == "AzureFirewallDnsProxy"
| where msg_s !contains "NOERROR"
| project TimeGenerated, msg_s
| order by TimeGenerated desc
```
![Sample DNS Logs](../media/architecture/hubnetwork-azfw/azfw-logs-dns.jpg)
[itsg22]: https://www.cyber.gc.ca/sites/default/files/publications/itsg-22-eng.pdf
[cloudUsageProfiles]: https://github.com/canada-ca/cloud-guardrails/blob/master/EN/00_Applicable-Scope.md
[rfc1918]: https://tools.ietf.org/html/rfc1918
[rfc6598]: https://tools.ietf.org/html/rfc6598
[nsgAzureLoadBalancer]: https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview#allowazureloadbalancerinbound
[nsgAzureBastion]: https://docs.microsoft.com/azure/bastion/bastion-nsg#apply
[nsgAppGatewayV2]: https://docs.microsoft.com/azure/application-gateway/configuration-infrastructure#network-security-groups
# Archetype: Hub Networking with Fortigate Firewalls
## Table of Contents
* [Overview](#overview)
* [Hub Virtual Network](#hub-virtual-network)
* [Management Restricted Zone Virtual Network](#management-restricted-zone-virtual-network)
* [Shared Public Access Zone subnet in the Hub](#shared-public-access-zone-subnet-in-the-hub)
* [User Defined Routes](#user-defined-routes)
* [Network Security Groups](#network-security-groups)
* [Subnets and IP Ranges](#subnets-and-ip-ranges)
* [Required Routes](#required-routes)
* [Firewall configuration details](#firewall-configuration-details)
* [Fortigate Licences](#fortigate-licences)
## Overview
The recommended network design achieves the purpose of hosting [**Protected B** workloads on Profile 3][cloudUsageProfiles] (cloud only). In preparation for a future connection to on-premises infrastructure, we've taken recommendations from SSC guidance ([video](https://www.youtube.com/watch?v=rQYyatlO0-k)) detailed at [canada-ca/Azure_LZBCA-AIZDB](https://github.com/canada-ca/Azure_LZBCA-AIZDB/tree/master/Network).
![Hub Networking with NVA](../media/architecture/hubnetwork-nva/hubnetwork-nva-design.jpg)
* Cloud network topology based on proven **hub-and-spoke design**.
* Hub contains two firewall clusters: one for production and one for non-production (dev) traffic. Each firewall virtual appliance will contain 4 NICs each.
* The firewalls have one interface in the Public Network (uses [RFC 6598][rfc6598] IPs for future use with GCnet). **EAN will not contain F5 load balancers.**
* The firewalls have one interface for their respective Internal Area Networks (Prod/Dev)
* The hub contains a subnet acting as a public access zone (PAZ, using [RFC 6598][rfc6598] space) where service delivery occurs (i.e. web application delivery), either dedicated to a line of business workload or as a shared system. When using Azure Application Gateway, the subnet will be for its exclusive use.
* Hub links to a spoke MRZ Virtual Network (Management Restricted Zone) for management, security, and shared infrastructure purposes (i.e. Domain Controllers, Secure Jumpbox, Software Management, Log Relays, etc.).
* Spokes contain RZ (Restricted Zone) for line of business workloads, including dedicated PAZ (Public Access Zone), App RZ (Restricted Zone), and Data RZ (Data Restricted Zone).
* All ingress traffic traverses the hub's firewall, and all egress traffic to the internet is routed to the firewall for complete traffic inspection for virtual machines. PaaS services will have direct communication with the Azure control plane to avoid asymmetric routing.
* No public IPs are allowed in the landing zone spokes for virtual machines. Public IPs for landing zones are only allowed in the external area network (EAN). Azure Policy is in place to prevent Public IPs from being directly attached to Virtual Machine NICs.
* Spokes have network segmentation and security rules to filter East-West traffic, and Spoke-to-Spoke traffic is denied by default in the firewall.
* Most network operations in the spokes, as well as all operations in the hub, are centrally managed by the networking team.
* In this initial design, the hub is deployed to a single region; there is no BCDR plan yet.
Application Gateway with WAFv2 will be used for ingress traffic and application delivery. Application Gateways will be placed on the shared Public Access Zone (a subnet in the Hub), where public IPs will be protected with Azure DDoS (either Basic or Standard).
Other possible topologies are explained in [Azure documentation](https://docs.microsoft.com/azure/architecture/example-scenario/gateway/firewall-application-gateway) and we recommend reviewing them to ensure the topology aligns with your department's network design.
There will be at least one shared Application Gateway instance and multiple dedicated Application Gateways for those lines of business that require their own deployment (i.e. performance or cost allocation). All egress traffic from the spokes will be routed to the hub's edge firewall, inspected, and authorized/denied based on network (IP/Port) or application rules (FQDNs).
## IP Addresses
The network design requires 3 IP blocks:
* [RFC 1918][rfc1918] for Azure native-traffic (including IaaS and PaaS). Example: `10.18.0.0/16`
* [RFC 1918][rfc1918] for Azure Bastion. Example: `192.168.0.0/16`
* [RFC 6598][rfc6598] for department-to-department traffic through GCnet. Example: `100.60.0.0/16`
> This document will reference the example IP addresses above to illustrate network flow and configuration.
**Virtual Network Address Space**
![Hub Virtual Network Address Space](../media/architecture/hubnetwork-nva/hubvnet-address-space.jpg)
## Hub Virtual Network
* 2 Firewall clusters (4 VMs total), one for Prod, one for Dev.
* Each Firewall has NICs connected to 4 subnets: External, Internal (Prod or Dev), HA and Management.
* (Optional) An External Load Balancer maps Public IPs to the external VIP of each FW pair.
* Internal Load Balancers map Private IPs to the NICs of each FW pair (External, and either Int_Prod or Int_Dev).
* The HA network doesn't require Load Balancers.
## Management Restricted Zone Virtual Network
* Management Access Zone (OZ) - to host any privileged access workstations (PAW), with Management Public IPs forwarded via the hub's firewall.
* Management (OZ) – hosting the management servers (domain controllers).
* Infrastructure (OZ) – hosting other common infrastructure, like file shares.
* Security Management (OZ) – hosting security, proxies and patching servers.
* Logging (OZ) – hosting logging relays.
* A User-Defined-Route forces all traffic to be sent to the Hub's firewall via the Internal Load Balancer in the Hub (this doesn't apply to Azure Bastion).
**MrzSpokeUdr Route Table**
![MrzSpokeUdr Route Table](../media/architecture/hubnetwork-nva/mrz-udr.jpg)
## Shared Public Access Zone subnet in the Hub
To simplify management and compliance, all public-facing web servers, reverse proxies and application delivery controllers will be hosted in this subnet, as a sort of DMZ.
Application Gateway can have either public or private frontends (also with [RFC 6598][rfc6598] space) and it requires a full subnet for its instances.
The Backend URL should map to a VIP and port mapping in the firewall's External network. In the future, Backend URLs could be pointed directly at the Frontend subnets in the spoke. The firewall performs DNAT and forwards the traffic to the webserver, which answers to the source IP (Application Gateway's internal IP). This means the webserver may need a UDR that forces traffic destined to Application Gateway to re-traverse the firewall (next-hop), which is considered asymmetric routing ([other example topologies](https://docs.microsoft.com/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall)).
## User Defined Routes
All traffic is sent to the Hub's firewall via the Internal Load Balancer in the Int_Prod zone (or, for Dev landing zones, the Int_Dev ILB).
**Private Endpoints and Private DNS Design**
Azure supports connecting to PaaS services using [RFC 1918][rfc1918] private IPs, avoiding all traffic from the internet and only allowing connections from designated private endpoints as a special kind of NICs in the subnet of choice. Private DNS resolution must be implemented so the PaaS service URLs properly translate to the individual private IP of the private endpoint.
## Network Security Groups
Below is a list of requirements for the NSGs in each subnet:
* Hub Virtual Network
* PublicSubnet: defaults (Allow [AzureLoadBalancer][nsgAzureLoadBalancer] for health probes)
* PrdIntSubnet: defaults (Allow [AzureLoadBalancer][nsgAzureLoadBalancer])
* DevIntSubnet: defaults (Allow [AzureLoadBalancer][nsgAzureLoadBalancer])
* MrzIntSubnet
* Allow Bastion (see doc) from [AzureBastionSubnet][nsgAzureBastion]
* Defaults (Allow [AzureLoadBalancer][nsgAzureLoadBalancer])
* HASubnet
* PazSubnet – must follow [Application Gateway's guidelines][nsgAppGatewayV2]
* TCP ports 65200-65535 for the v2 SKU with the destination subnet as Any and source as GatewayManager service tag
* Defaults (Allow [AzureLoadBalancer][nsgAzureLoadBalancer])
* AzureBastionSubnet - See [documentation][nsgAzureBastion]
## Subnets and IP Ranges
> This section will reference the [example IP addresses](#ip-addresses) above to illustrate network flow and configuration.
We'll use as few IPs as possible for the Core, MRZ and PAZ virtual networks, and use a small CIDR for a `Generic Subscription archetype` that works as an example for other Prod and Dev workloads. However, it's key to remember that each line of business may have to get their own IP range ([RFC 1918][rfc1918]) and peer it to the Hub network without conflicts (i.e. with `10.18.0.0/16`).
We also use an [RFC 6598][rfc6598] range for external networking, which makes this design compatible with SCED requirements for future hybrid connectivity.
To leverage Azure Bastion as a shared service for all spoke virtual networks, we use a third IP range (outside of the [RFC 1918][rfc1918] and [RFC 6598][rfc6598] ranges).
### Core network (Firewall and future VPN/ExpressRoute Gateway)
**Hub Virtual Network Address Space**
![Hub Virtual Network Address Space](../media/architecture/hubnetwork-nva/hubvnet-address-space.jpg)
**Subnets with Network Security Group & User Defined Routes**
![Hub Virtual Network Subnets](../media/architecture/hubnetwork-nva/hubvnet-subnets.jpg)
| Hub Virtual Network - 10.18.0.0/22, 100.60.0.0/24, 192.168.0.0/16 | Function | IP block |
| --- | --- | --- |
| PublicSubnet | External Facing (Internet/Ground) | 100.60.0.0/24 |
| PAZSubnet | Shared Application Gateways | 100.60.1.0/24 |
| EanSubnet | External Access Network | 10.18.0.0/27 |
| PrdIntSubnet | Internal Facing Prod (Connect PROD Virtual Networks) | 10.18.0.32/27 |
| DevIntSubnet | Internal Facing Dev (Connect Dev Virtual Networks) | 10.18.0.64/27 |
| MrzIntSubnet | Management Restricted Zone (connect Management Virtual Network) | 10.18.0.96/27 |
| HASubnet | High Availability (Firewall VM <-> Firewall VM heartbeat) | 10.18.0.128/28 |
| AzureBastionSubnet | Azure Bastion | 192.168.0.0/24 |
| GatewaySubnet | Gateway Subnet | 10.18.1.0/27 |
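The hub's internal subnets above are tightly packed; a small check (illustrative, using the table's values) confirms the /27 and /28 carve-outs sit inside `10.18.0.0/22` without colliding:

```python
import ipaddress
from itertools import combinations

hub_space = ipaddress.ip_network("10.18.0.0/22")

# Internal hub subnets from the table above (RFC 1918 portion only)
subnets = {
    "EanSubnet": "10.18.0.0/27",
    "PrdIntSubnet": "10.18.0.32/27",
    "DevIntSubnet": "10.18.0.64/27",
    "MrzIntSubnet": "10.18.0.96/27",
    "HASubnet": "10.18.0.128/28",
    "GatewaySubnet": "10.18.1.0/27",
}

networks = {name: ipaddress.ip_network(p) for name, p in subnets.items()}

# Every subnet must be inside the hub address space...
assert all(net.subnet_of(hub_space) for net in networks.values())

# ...and no two subnets may collide
for (a_name, a), (b_name, b) in combinations(networks.items(), 2):
    assert not a.overlaps(b), f"{a_name} overlaps {b_name}"

print("hub subnet carve-out OK")
```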
### Management Restricted Zone Virtual Network (Spoke)
**MRZ Virtual Network Address Space**
![MRZ Virtual Network Address Space](../media/architecture/hubnetwork-nva/mrzvnet-address-space.jpg)
**Subnets with Network Security Group & User Defined Routes**
![MRZ Virtual Network Subnets](../media/architecture/hubnetwork-nva/mrzvnet-subnets.jpg)
| Management Restricted Zone Virtual Network - 10.18.4.0/22 | Function | IP block |
| --- | --- | --- |
| MazSubnet | Management (Access Zone) | 10.18.4.0/25 |
| InfSubnet | Infrastructure Services (Restricted Zone)| 10.18.4.128/25 |
| SecSubnet | Security Services (Restricted Zone) | 10.18.5.0/26 |
| LogSubnet | Logging Services (Restricted Zone) | 10.18.5.64/26 |
| MgmtSubnet | Core Management Interfaces | 10.18.4.128/26 |
### Example Spoke: Generic Subscription Archetype
| Spoke Virtual Network - 10.18.16.0/21 | Function | IP block |
| --- | --- | --- |
| oz-subnet | Internal Foundational Elements (OZ) | /25 |
| paz-subnet | Presentation Zone (PAZ) | /25 |
| rz-subnet | Application zone (RZ) | /25 |
| hrz-subnet | Data Zone (HRZ) | /25 |
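The table gives only the prefix lengths, not the exact offsets of the four /25 blocks. One possible consecutive allocation can be derived from the /21 with Python's `ipaddress` module; the layout below is illustrative, as actual offsets are set at deployment time.

```python
# Carving the four zone subnets (/25 each) out of the spoke's 10.18.16.0/21 block.
import ipaddress

spoke = ipaddress.ip_network("10.18.16.0/21")
zones = ["oz-subnet", "paz-subnet", "rz-subnet", "hrz-subnet"]

# subnets(new_prefix=25) yields consecutive /25 blocks from the start of the range.
allocation = dict(zip(zones, spoke.subnets(new_prefix=25)))
for name, net in allocation.items():
    print(name, net)
```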
## Required Routes
The following routing rules enforce the security controls that protect the workloads by centralizing all network flows through the hub's firewall.
**Example: MrzSpokeUdr Route Table**
![MrzSpokeUdr Route Table](../media/architecture/hubnetwork-nva/mrz-udr.jpg)
| UDR Name | Rules | Applied to | Comments |
| --- | --- | --- | --- |
| PrdSpokesUdr | 0.0.0.0/0 via PrdInt ILB VIP<br />10.18.0.0/16 via PrdInt ILB VIP<br />100.60.0.0/16 via PrdInt ILB VIP | All production spoke virtual networks. | Via peering, spokes learn static routes to reach any IP in the Hub. Hence, we override the Hub virtual network's IP ranges (10.18.0.0/16 and 100.60.0.0/16) and force traffic through the firewall. |
| DevSpokesUdr | 0.0.0.0/0 via DevInt ILB VIP<br />10.18.0.0/16 via DevInt ILB VIP<br />100.60.0.0/16 via DevInt ILB VIP | All development spoke virtual networks. | Same as above. |
| MrzSpokeUdr | 0.0.0.0/0 via PrdInt ILB VIP<br />10.18.0.0/16 via PrdInt ILB VIP<br />100.60.0.0/16 via PrdInt ILB VIP | Mrz spoke virtual network | Same as above |
| PazSubnetUdr | 10.18.4.0/24 via PrdExtFW VIP<br />(Future) ProdSpokeIPs via PrdExt ILB VIP<br />(Future) DevSpokeIPs via DevExt ILB VIP | Shared PAZ subnet (Application Gateway) | Force traffic from Application Gateway to be sent via the Firewall External ILBs |
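These UDRs work because Azure selects the most specific (longest) matching prefix when routing traffic. The selection logic can be sketched with the `PrdSpokesUdr` entries above; the next-hop label is a placeholder string, not a real VIP address.

```python
# Longest-prefix-match over the PrdSpokesUdr entries (next-hop labels are illustrative).
import ipaddress

prd_spokes_udr = [
    ("0.0.0.0/0", "PrdInt ILB VIP"),
    ("10.18.0.0/16", "PrdInt ILB VIP"),
    ("100.60.0.0/16", "PrdInt ILB VIP"),
]

def next_hop(destination, routes):
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop) for prefix, hop in routes
               if dest in ipaddress.ip_network(prefix)]
    # Azure route selection: the most specific (longest) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.18.4.10", prd_spokes_udr))  # matched by the 10.18.0.0/16 override
print(next_hop("8.8.8.8", prd_spokes_udr))     # only the default route matches
```

Every destination, hub-bound or internet-bound, resolves to the firewall's internal load balancer, which is exactly the forced-tunneling behavior the table describes.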
## Firewall configuration details
Two firewalls with 4 NICs each will be used: one for prod and one for dev, both running FortiOS 6.4.5 (Azure image: [fortinet_fortigate-vm_v5 via Marketplace][azmarketplacefortinet]) with no VDOMs (virtual domains).
Note: To provision Marketplace images through automation, the image terms must first be accepted through PowerShell or Azure CLI. The following Azure CLI command accepts the terms:
```
az vm image terms accept --publisher fortinet --offer fortinet_fortigate-vm_v5 --plan fortinet_fg-vm --subscription SUBSCRIPTION_ID
```
The two firewalls use different sizes, vm08v (Prod) and vm04v (Dev), but both require an Azure VM with 4 NICs. For more information, see https://www.fortinet.com/content/dam/fortinet/assets/data-sheets/fortigate-vm.pdf and https://github.com/fortinetsolutions/Azure-Templates/tree/master/FortiGate/Active-Passive-HA-w-Azure-LBs
The 4 NICs will be mapped as follows (IPs shown for firewall 1 and 2, and the VIP is the ILB frontend).
* **NIC 1 – Public Network** – used for SNAT, so all traffic received on other NICs is NATed when leaving the firewall through this NIC.
* Prod Firewall: (100.60.0.5, 100.60.0.6), VIP is .4
* Dev Firewall: (100.60.0.8, 100.60.0.9), VIP is .7
* **NIC 2 – Management Network**
* Prod Firewall: (10.18.0.101, 10.18.0.102), VIP is .100
* Dev Firewall: (10.18.0.104, 10.18.0.105), VIP is .103
* **NIC 3 for prod – Internal Prod Area Network**
* Prod Firewall: (10.18.0.37, 10.18.0.38), VIP is .36
* **NIC 3 for dev – Internal Dev Area Network**
* Dev Firewall: (10.18.0.69, 10.18.0.70), VIP is .68
* **NIC 4 – HA clustering**
* Prod Firewall: (10.18.0.132, 10.18.0.133), VIP is .131
* Dev Firewall: (10.18.0.134, 10.18.0.135), VIP is .133
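The production IP plan above can be cross-checked against the hub subnet table: each NIC's two member IPs and its ILB VIP should land in the corresponding subnet. An illustrative check with Python's `ipaddress` module (values copied from the lists above):

```python
# Checking that each prod firewall NIC's member IPs and ILB VIP sit in the expected hub subnet.
import ipaddress

nic_plan = {
    # subnet CIDR:               (fw1, fw2, VIP) — prod values from the list above
    "100.60.0.0/24":  ("100.60.0.5", "100.60.0.6", "100.60.0.4"),    # NIC 1 - PublicSubnet
    "10.18.0.96/27":  ("10.18.0.101", "10.18.0.102", "10.18.0.100"), # NIC 2 - MrzIntSubnet
    "10.18.0.32/27":  ("10.18.0.37", "10.18.0.38", "10.18.0.36"),    # NIC 3 - PrdIntSubnet
    "10.18.0.128/28": ("10.18.0.132", "10.18.0.133", "10.18.0.131"), # NIC 4 - HASubnet
}

def misplaced(plan):
    bad = []
    for cidr, ips in plan.items():
        subnet = ipaddress.ip_network(cidr)
        bad += [ip for ip in ips if ipaddress.ip_address(ip) not in subnet]
    return bad

print(misplaced(nic_plan))  # an empty list means the IP plan is internally consistent
```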
### Flows enter the firewall via the following NICs
| From \ To | Public | MRZ | PAZ | Internal (LZ Spokes) |
| --- | --- | --- | --- | --- |
| Public | - | *Forbidden* | NIC 1 | *Forbidden* |
| MRZ | NIC 2 | - | NIC 2 | NIC 2 |
| PAZ (via Public) | NIC 1 | NIC 1 | - | NIC 1 |
| Internal (LZ Spokes) | NIC 3 | NIC 3 | NIC 3 | - |
## FortiGate Licences
The FortiGate firewall can be consumed in two modes: bring-your-own-license (BYOL) or pay-as-you-go (PAYG), where the hourly fee includes the FortiGate license premium. Both require acceptance of the FortiGate license and billing plans, which can be automated with the following CLI commands:
**Bring your own license (BYOL)**
```
az vm image terms accept --plan fortinet_fg-vm --offer fortinet_fortigate-vm_v5 --publisher fortinet --subscription XXX
```
**Pay as you go license (PAYG)**
```
az vm image terms accept --plan fortinet_fg-vm_payg_20190624 --offer fortinet_fortigate-vm_v5 --publisher fortinet --subscription XXX
```
With BYOL, the VM boots and expects a first login to its web UI over HTTPS on its first NIC in order to upload the license file, which unlocks the firewall and its SSH port.
For that reason, it's recommended to boot a Windows management VM in the MRZ (Management Restricted Zone virtual network), use Azure Bastion to RDP into that VM, and then open the Edge browser and navigate to `https://<firewall-ip>`. For more information, visit the following links:
* https://docs.fortinet.com/vm/azure/fortigate/6.2/azure-cookbook/6.2.0/358099/locating-fortigate-ha-for-azure-in-the-azure-portal-marketplace
* https://portal.azure.com/#create/fortinet.fortigatengfw-high-availabilityfortigate-ha
[itsg22]: https://www.cyber.gc.ca/sites/default/files/publications/itsg-22-eng.pdf
[cloudUsageProfiles]: https://github.com/canada-ca/cloud-guardrails/blob/master/EN/00_Applicable-Scope.md
[rfc1918]: https://tools.ietf.org/html/rfc1918
[rfc6598]: https://tools.ietf.org/html/rfc6598
[nsgAzureLoadBalancer]: https://docs.microsoft.com/azure/virtual-network/network-security-groups-overview#allowazureloadbalancerinbound
[nsgAzureBastion]: https://docs.microsoft.com/azure/bastion/bastion-nsg#apply
[nsgAppGatewayV2]: https://docs.microsoft.com/azure/application-gateway/configuration-infrastructure#network-security-groups
[azmarketplacefortinet]: https://portal.azure.com/#blade/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/fortinet.fortigatengfw-high-availability

# Archetype: Central Logging
## Table of Contents
* [Overview](#overview)
* [Schema Definition](#schema-definition)
* [Deployment Instructions](#deployment-instructions)
## Overview
The centralized logging landing zone provides a common subscription for managing the Log Analytics workspace and Automation Account. This landing zone resides in the pubsecPlatform management group.
![Archetype: Central Logging](../media/architecture/archetype-logging.jpg)
**Workflow**
* A new subscription is created through existing process (either via ea.azure.com or Azure Portal).
* The subscription will automatically be assigned to the **pubsecSandbox** management group.
* Update configuration in Azure DevOps Git repo.
* Execute the **Platform – Logging** Azure DevOps Pipeline. The pipeline will:
* Move it to the target management group.
* Scaffold the subscription with baseline configuration.
**Subscription Move**
A subscription can be moved to a target management group through Azure ARM templates/Bicep. The move has been incorporated into the landing zone Azure DevOps pipeline automation.
**Capabilities**
| Capability | Description |
| --- | --- |
| Service Health Alerts | Configures Service Health alerts such as Security, Incident, Maintenance. Alerts are configured with email, sms and voice notifications. |
| Azure Security Center | Configures security contact information (email and phone). |
| Subscription Role Assignments | Configures subscription scoped role assignments. Roles can be built-in or custom. |
| Subscription Budget | Configures a monthly subscription budget with email notification. The budget is configured by default for 10 years with a configurable amount. |
| Log Analytics | Configures Automation Account, Log Analytics Workspace and Log Analytics Solutions (AgentHealthAssessment, AntiMalware, AzureActivity, ChangeTracking, Security, SecurityInsights, ServiceMap, SQLAssessment, Updates, VMInsights). **SecurityInsights** solution pack will enable Azure Sentinel. |
| Subscription Tags | A set of tags that are assigned to the subscription. |
| Resource Tags | A set of tags that are assigned to the resource group and resources. These tags must include all required tags as defined in the Tag Governance policy. |
## Schema Definition
Reference implementation uses parameter files with `object` parameters to consolidate parameters based on their context. The schema types are:
* v0.1.0
* Common
* [Service Health Alerts](../../schemas/v0.1.0/landingzones/types/serviceHealthAlerts.json)
* [Azure Security Center](../../schemas/v0.1.0/landingzones/types/securityCenter.json)
* [Subscription Role Assignments](../../schemas/v0.1.0/landingzones/types/subscriptionRoleAssignments.json)
* [Subscription Budget](../../schemas/v0.1.0/landingzones/types/subscriptionBudget.json)
* [Subscription Tags](../../schemas/v0.1.0/landingzones/types/subscriptionTags.json)
* [Resource Tags](../../schemas/v0.1.0/landingzones/types/resourceTags.json)
## Deployment Instructions
Use the [Onboarding Guide for Azure DevOps](../../ONBOARDING_GUIDE_ADO.md) to configure this archetype.

# Archetype: Machine Learning
## Table of Contents
* [Overview](#overview)
* [Data Flow](#data-flow)
* [Access Control](#access-control)
* [Networking and Security Configuration](#networking-and-security-configuration)
* [Customer Managed Keys](#customer-managed-keys)
* [Secrets](#secrets)
* [Logging](#logging)
* [Testing](#testing)
* [Schema Definition](#schema-definition)
* [Example Deployment Parameters](#example-deployment-parameters)
* [Deployment Instructions](#deployment-instructions)
## Overview
Teams can request subscriptions from the CloudOps team with up to Owner permissions for **Data & AI workloads**, thus democratizing access to deploy, configure, and manage their applications with limited involvement from the CloudOps team. The CloudOps team can choose to limit the permissions using custom roles as deemed appropriate based on risk and requirements.
Azure Policies are used to provide governance, compliance and protection while enabling teams to use their preferred toolset to use Azure services.
![Archetype: Machine Learning](../media/architecture/archetype-machinelearning.jpg)
**CloudOps team will be required for**
1. Establishing connectivity to Hub virtual network (required for egress traffic flow & Azure Bastion).
2. Creating App Registrations (required for service principal accounts). This is optional, depending on whether App Registration creation is disabled for all users.
**Workflow**
* A new subscription is created through existing process (either via ea.azure.com or Azure Portal).
* The subscription will automatically be assigned to the **pubsecSandbox** management group.
* CloudOps will create a Service Principal Account (via App Registration) that will be used for future DevOps automation.
* CloudOps will scaffold the subscription with baseline configuration.
* CloudOps will hand over the subscription to requesting team.
**Subscription Move**
A subscription can be moved to a target management group through Azure ARM templates/Bicep. The move has been incorporated into the landing zone Azure DevOps pipeline automation.
**Capabilities**
| Capability | Description |
| --- | --- |
| Service Health Alerts | Configures Service Health alerts such as Security, Incident, Maintenance. Alerts are configured with email, sms and voice notifications. |
| Azure Security Center | Configures security contact information (email and phone). |
| Subscription Role Assignments | Configures subscription scoped role assignments. Roles can be built-in or custom. |
| Subscription Budget | Configures a monthly subscription budget with email notification. The budget is configured by default for 10 years with a configurable amount. |
| Subscription Tags | A set of tags that are assigned to the subscription. |
| Resource Tags | A set of tags that are assigned to the resource group and resources. These tags must include all required tags as defined in the Tag Governance policy. |
| Automation | Deploys an Azure Automation Account in each subscription. |
| Hub Networking | Configures virtual network peering to the Hub Network, which is required for egress traffic flow and hub-managed DNS resolution (on-premises or other spokes, private endpoints). |
| Networking | A spoke virtual network with a minimum of 4 zones: oz (Operational Zone), paz (Public Access Zone), rz (Restricted Zone), hrz (Highly Restricted Zone). Additional subnets can be configured at deployment time using configuration (see below). |
| Key Vault | Deploys a spoke managed Azure Key Vault instance that is used for key and secret management. |
| SQL Database | Deploys Azure SQL Database. Optional. |
| SQL Managed Instances | Deploys Azure SQL Managed Instance. Optional. |
| Azure Data Lake Store Gen 2 | Deploys an Azure Data Lake Gen 2 instance with hierarchical namespace. *There aren't any parameters for customization.* |
| Azure Machine Learning | Deploys Azure Machine Learning Service. |
| Azure Databricks | Deploys an Azure Databricks instance. *There aren't any parameters for customization.* |
| Azure Data Factory | Deploys an Azure Data Factory instance with Managed Virtual Network and Managed Integrated Runtime. *There aren't any parameters for customization.* |
| Azure Kubernetes Services | Deploys an AKS with Kubenet network policy that will be used for deploying machine learning models. |
| Azure Container Registry | Deploys an Azure Container Registry to store machine learning models as container images. ACR is used when deploying pods to AKS. *There aren't any parameters for customization.* |
| Application Insights | Deploys an Application Insights instance that is used by Azure Machine Learning instance. *There aren't any parameters for customization.* |
## Data Flow
![Data Flow](../media/architecture/archetype-machinelearning-dataflow.jpg)
| Category | Service | Configuration | Reference |
| --- | --- | --- | --- |
| Storage | Azure Data Lake Gen 2 - Cloud storage enabling big data analytics | Hierarchical namespace enabled. Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/storage/blobs/data-lake-storage-introduction) |
| Compute | Azure Databricks - Managed Spark cloud platform for data analytics and data science | Premium tier; Secured Cluster Connectivity enabled with load balancer for egress | [Azure Docs](https://docs.microsoft.com/azure/databricks/scenarios/what-is-azure-databricks) |
| Ingestion | Azure Data Factory - Managed cloud service for data integration and orchestration | Managed virtual network. Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/data-factory/introduction) |
| Machine learning and deployment | Azure Machine Learning - Cloud platform for end-to-end machine learning workflows | Optional – Customer Managed Keys, High Business Impact Workspace | [Azure Docs](https://docs.microsoft.com/azure/machine-learning/overview-what-is-azure-ml) |
| Machine learning and deployment | Azure Container Registry - Managed private Docker cloud registry | Premium SKU. Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/container-registry/container-registry-intro) |
| Machine learning and deployment | Azure Kubernetes Service - Cloud hosted Kubernetes service | Private cluster enabled; Managed identity type; Network plugin set to kubenet. Optional – Customer Managed Keys for Managed Disks | [Azure Docs](https://docs.microsoft.com/azure/aks/intro-kubernetes) |
| SQL Storage | Azure SQL Managed Instance - Cloud database storage enabling lift and shift on-premise application migrations | Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) |
| SQL Storage | Azure SQL Database - Fully managed cloud database engine | Optional – Customer Managed Keys | [Azure Docs](https://docs.microsoft.com/azure/azure-sql/database/sql-database-paas-overview) |
| Key Management | Azure Key Vault - Centralized cloud storage of secrets and keys | Private Endpoint | [Azure Docs](https://docs.microsoft.com/azure/key-vault/general/overview) |
| Monitoring | Application Insights - Application performance and monitoring cloud service | - | [Azure Docs](https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview) |
The intended cloud service workflows and data movements for this archetype include:
1. Data can be ingested from various sources using Data Factory, which uses managed virtual network for its Azure hosted integration runtime.
2. The data would be stored in Azure Data Lake Gen 2.
3. Structured data can be stored in SQL Database, or SQL Managed Instance.
4. Data engineering and transformation tasks can be done with Spark using Azure Databricks. Transformed data would be stored back in the data lake.
5. Machine learning would be done using Azure Machine Learning.
6. Models would be containerized and pushed to Azure Container Registry from Azure ML.
7. Models would be the deployed as services to Azure Kubernetes Service from Container Registry.
8. Secrets and keys would be stored safely in Azure Key Vault.
9. Monitoring and logging would be through Application Insights.
## Access Control
Once the machine learning archetype is deployed and available to use, access control best practices should be applied. Below is the recommended set of security groups and their respective Azure role assignments. This is not an exhaustive list and can be updated as required.
**Replace `PROJECT_NAME` placeholder in the security group names with the appropriate project name for the workload**.
| Security Group | Azure Role | Notes |
| --- | --- | --- |
| SG_PROJECT_NAME_ADMIN | Subscription with `Owner` role. | Admin group for subscription. |
| SG_PROJECT_NAME_READ | Subscription with `Reader` role. | Reader group for subscription. |
| SG_PROJECT_NAME_DATA_PROVIDER | Data Lake (main storage account) service with `Storage Blob Data Contributor` role. Key Vault service with `Key Vault Secrets User`. | Data group with access to data as well as key vault secrets usage. |
| SG_PROJECT_NAME_DATA_SCIENCE | Azure ML service with `Contributor` role. Azure Databricks service with `Contributor` role. Key Vault service with `Key Vault Secrets User`. | Data science group with compute access as well as key vault secrets usage. |
## Networking and Security Configuration
![Networking](../media/architecture/archetype-machinelearning-networking.jpg)
| Service Name | Settings | Private Endpoints / DNS | Subnet(s) |
| --- | --- | --- | --- |
| Azure Key Vault | Network ACL deny | Private endpoint on `vault` + DNS registration to either hub or spoke | `privateEndpoints` |
| SQL Managed Instance | N/A | N/A | `sqlmi` |
| SQL Database | N/A | Private endpoint on `sqlserver` + DNS registration to either hub or spoke | `privateEndpoints` |
| Azure Data Lake Gen 2 | Network ACL deny | Private endpoint on `blob`, `dfs` + DNS registration to either hub or spoke | `privateEndpoints` |
| Azure Databricks | No public IP enabled (secure cluster connectivity), load balancer for egress with IP and outbound rules, virtual network injection | N/A | `databricksPrivate`, `databricksPublic` |
| Azure Machine Learning | No public workspace access | Private endpoint on `amlWorkspace` + DNS registration to either hub or spoke | `privateEndpoints` |
| Azure Storage Account for Azure ML | Network ACL deny | Private endpoint on `blob`, `file` + DNS registration to either hub or spoke | `privateEndpoints` |
| Azure Data Factory | Public network access disabled, Azure integration runtime with managed virtual network | Private endpoint on `dataFactory` + DNS registration to either hub or spoke | `privateEndpoints` |
| Azure Kubernetes Service | Private cluster, network profile with kubenet | N/A | `aks` |
| Azure Container Registry | Network ACL deny, public network access disabled | Private endpoint on `registry` + DNS registration to either hub or spoke | `privateEndpoints` |
| Azure Application Insights | N/A | N/A | N/A |
This archetype also has the following security features as options for deployment:
* Customer managed keys for encryption at rest, including Azure ML, storage, Container Registry, Data Factory, SQL Database / Managed Instance, and Kubernetes Service.
* Azure ML has the ability to enable a high business impact workspace, which controls the amount of data Microsoft collects for diagnostic purposes.
## Customer Managed Keys
To enable customer-managed key scenarios, some services, including Azure Storage Account and Azure Container Registry, require deployment scripts to run with a user-assigned identity to configure the encryption key on the respective instances.
Therefore, when the `useCMK` parameter is `true`, a deployment identity is created and assigned the `Owner` role on the compute and storage resource groups so it can run the deployment scripts as needed. Once the services are provisioned with customer-managed keys, the role assignments are automatically deleted.
The artifacts created by the deployment script, such as the Azure Container Instance and storage accounts, are automatically deleted 1 hour after completion.
## Secrets
Temporary passwords are autogenerated, and connection strings are automatically stored as secrets in Key Vault. They include:
* SQL Database username, password, and connection string
* SQL Managed Instance username, password, and connection string
## Logging
Azure Policy will enable diagnostic settings for all PaaS components in the machine learning archetype and the logs will be sent to the centralized log analytics workspace. These policies are configured at the management group scope and are not explicitly deployed.
## Testing
Test scripts are provided to verify end-to-end integration. These tests are not automated, so minor modifications are needed to set them up and run them.
The test scripts are located in [tests/landingzones/lz-machinelearning/e2e-flow-tests](../../tests/landingzones/lz-machinelearning/e2e-flow-tests)
The scripts are:
1. Azure ML SQL connection and Key Vault integration test
2. Azure ML terminal connection to ACR test
3. Databricks integration with Key Vault, SQL MI, SQL Database, Data Lake test
4. Azure ML deployment through ACR to AKS test
### Test Scenarios
**Azure ML SQL / Key vault test**
1. Access the ML landing zone network and log into Azure ML through https://ml.azure.com
2. Set up a compute instance and create a new notebook to run Python notebook
3. Use the provided test script to test connection to Key Vault by retrieving the SQL password
4. Create a datastore connecting to SQL DB
5. Create a dataset connecting to a table in SQL DB
6. Use the provided dataset consume code to verify connectivity to SQL DB
**Azure ML terminal connection to ACR test**
1. Access the ML landing zone network and log into Azure ML through https://ml.azure.com
2. Set up a compute instance and use its built-in terminal
3. Use the provided test script to pull a hello-world Docker image and push it to ACR
**Databricks integration tests**
1. Access Azure Databricks workspace
2. Create a new compute cluster
3. Create a new Databricks notebook in the workspace and copy in the integration test script
4. Run the test script to verify connectivity to Key Vault, SQL DB/MI, and data lake
**Azure ML deployment test**
1. Access the ML network and log into Azure ML through https://ml.azure.com
2. Set up a compute instance and import the provided tests to the workspace
3. Run the test script, which will build a Docker Azure ML model image, push it to ACR, and then AKS to pull and run the ML model
## Schema Definition
Reference implementation uses parameter files with `object` parameters to consolidate parameters based on their context. The schema types are:
* v0.1.0
* [Spoke deployment parameters definition](../../schemas/v0.1.0/landingzones/lz-machinelearning.json)
* Common types
* [Service Health Alerts](../../schemas/v0.1.0/landingzones/types/serviceHealthAlerts.json)
* [Azure Security Center](../../schemas/v0.1.0/landingzones/types/securityCenter.json)
* [Subscription Role Assignments](../../schemas/v0.1.0/landingzones/types/subscriptionRoleAssignments.json)
* [Subscription Budget](../../schemas/v0.1.0/landingzones/types/subscriptionBudget.json)
* [Subscription Tags](../../schemas/v0.1.0/landingzones/types/subscriptionTags.json)
* [Resource Tags](../../schemas/v0.1.0/landingzones/types/resourceTags.json)
* Spoke types
* [Automation](../../schemas/v0.1.0/landingzones/types/automation.json)
* [Hub Network](../../schemas/v0.1.0/landingzones/types/hubNetwork.json)
* [Azure Kubernetes Service](../../schemas/v0.1.0/landingzones/types/aks.json)
* [Azure Machine Learning](../../schemas/v0.1.0/landingzones/types/aml.json)
* [Azure Key Vault](../../schemas/v0.1.0/landingzones/types/keyVault.json)
* [Azure SQL Database](../../schemas/v0.1.0/landingzones/types/sqldb.json)
* [Azure SQL Managed Instances](../../schemas/v0.1.0/landingzones/types/sqlmi.json)
## Example Deployment Parameters
This example configures:
1. Service Health Alerts
2. Azure Security Center
3. Subscription Role Assignments using built-in and custom roles
4. Subscription Budget with $1000
5. Subscription Tags
6. Resource Tags (aligned to the default tags defined in [Policies](../../policy/custom/definitions/policyset/Tags.parameters.json))
7. Automation Account
8. Spoke Virtual Network with Hub-managed DNS, Hub-managed private endpoint DNS Zones, Virtual Network Peering and all required subnets (zones).
9. Deploys Azure resources with Customer Managed Keys.
> **Note 1:** Azure Automation Account is not deployed with Customer Managed Key as it requires an Azure Key Vault instance with public network access.
> **Note 2:** All secrets stored in Azure Key Vault will have a 10-year expiration (configurable) and all RSA keys (used for CMK) will not have an expiration.
```json
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"serviceHealthAlerts": {
"value": {
"resourceGroupName": "pubsec-service-health",
"incidentTypes": [ "Incident", "Security" ],
"regions": [ "Global", "Canada East", "Canada Central" ],
"receivers": {
"app": [ "alzcanadapubsec@microsoft.com" ],
"email": [ "alzcanadapubsec@microsoft.com" ],
"sms": [ { "countryCode": "1", "phoneNumber": "5555555555" } ],
"voice": [ { "countryCode": "1", "phoneNumber": "5555555555" } ]
},
"actionGroupName": "Sub5 ALZ action group",
"actionGroupShortName": "sub5-alert",
"alertRuleName": "Sub5 ALZ alert rule",
"alertRuleDescription": "Alert rule for Azure Landing Zone"
}
},
"securityCenter": {
"value": {
"email": "alzcanadapubsec@microsoft.com",
"phone": "5555555555"
}
},
"subscriptionRoleAssignments": {
"value": [
{
"comments": "Built-in Role: Contributor",
"roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
"securityGroupObjectIds": [
"38f33f7e-a471-4630-8ce9-c6653495a2ee"
]
},
{
"comments": "Custom Role: Landing Zone Application Owner",
"roleDefinitionId": "b4c87314-c1a1-5320-9c43-779585186bcc",
"securityGroupObjectIds": [
"38f33f7e-a471-4630-8ce9-c6653495a2ee"
]
}
]
},
"subscriptionBudget": {
"value": {
"createBudget": true,
"name": "MonthlySubscriptionBudget",
"amount": 1000,
"timeGrain": "Monthly",
"contactEmails": [
"alzcanadapubsec@microsoft.com"
]
}
},
"subscriptionTags": {
"value": {
"ISSO": "isso-tag"
}
},
"resourceTags": {
"value": {
"ClientOrganization": "client-organization-tag",
"CostCenter": "cost-center-tag",
"DataSensitivity": "data-sensitivity-tag",
"ProjectContact": "project-contact-tag",
"ProjectName": "project-name-tag",
"TechnicalContact": "technical-contact-tag"
}
},
"resourceGroups": {
"value": {
"automation": "azml-Automation",
"compute": "azml-Compute",
"monitor": "azml-Monitor",
"networking": "azml-Network",
"networkWatcher": "NetworkWatcherRG",
"security": "azml-Security",
"storage": "azml-Storage"
}
},
"useCMK": {
"value": true
},
"automation": {
"value": {
"name": "azml-automation"
}
},
"keyVault": {
"value": {
"secretExpiryInDays": 3650
}
},
"aks": {
"value": {
"version": "1.21.2"
}
},
"sqldb": {
"value": {
"enabled": true,
"username": "azadmin"
}
},
"sqlmi": {
"value": {
"enabled": true,
"username": "azadmin"
}
},
"aml": {
"value": {
"enableHbiWorkspace": false
}
},
"hubNetwork": {
"value": {
"virtualNetworkId": "/subscriptions/ed7f4eed-9010-4227-b115-2a5e37728f27/resourceGroups/pubsec-hub-networking-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet",
"rfc1918IPRange": "10.18.0.0/22",
"rfc6598IPRange": "100.60.0.0/16",
"egressVirtualApplianceIp": "10.18.1.4",
"privateDnsManagedByHub": true,
"privateDnsManagedByHubSubscriptionId": "ed7f4eed-9010-4227-b115-2a5e37728f27",
"privateDnsManagedByHubResourceGroupName": "pubsec-dns-rg"
}
},
"network": {
"value": {
"peerToHubVirtualNetwork": true,
"useRemoteGateway": false,
"name": "azml-vnet",
"dnsServers": [
"10.18.1.4"
],
"addressPrefixes": [
"10.4.0.0/16"
],
"subnets": {
"oz": {
"comments": "Foundational Elements Zone (OZ)",
"name": "oz",
"addressPrefix": "10.4.1.0/25"
},
"paz": {
"comments": "Presentation Zone (PAZ)",
"name": "paz",
"addressPrefix": "10.4.2.0/25"
},
"rz": {
"comments": "Application Zone (RZ)",
"name": "rz",
"addressPrefix": "10.4.3.0/25"
},
"hrz": {
"comments": "Data Zone (HRZ)",
"name": "hrz",
"addressPrefix": "10.4.4.0/25"
},
"sqlmi": {
"comments": "SQL Managed Instances Delegated Subnet",
"name": "sqlmi",
"addressPrefix": "10.4.5.0/25"
},
"databricksPublic": {
"comments": "Databricks Public Delegated Subnet",
"name": "databrickspublic",
"addressPrefix": "10.4.6.0/25"
},
"databricksPrivate": {
"comments": "Databricks Private Delegated Subnet",
"name": "databricksprivate",
"addressPrefix": "10.4.7.0/25"
},
"privateEndpoints": {
"comments": "Private Endpoints Subnet",
"name": "privateendpoints",
"addressPrefix": "10.4.8.0/25"
},
"aks": {
"comments": "AKS Subnet",
"name": "aks",
"addressPrefix": "10.4.9.0/25"
}
}
}
}
}
}
```
## Deployment Instructions
> Use the [Onboarding Guide for Azure DevOps](../../ONBOARDING_GUIDE_ADO.md) to configure the `subscription` pipeline. This pipeline will deploy workload archetypes such as Machine Learning.
Parameter files for archetype deployment are configured in the [config/subscription folder](../../config/subscriptions). The directory hierarchy comprises the following elements, from this directory downward:
1. An environment directory named for the Azure DevOps org and Git repo branch name, e.g. 'CanadaESLZ-main'.
2. The management group hierarchy defined for your environment, e.g. pubsec/Platform/LandingZone/Prod. The location of the config file represents which Management Group the subscription is a member of.
For example, if your Azure DevOps organization name is 'CanadaESLZ', you have two Git Repo branches named 'main' and 'dev', and you have top level management group named 'pubsec' with the standard structure, then your path structure would look like this:
```
/config/subscriptions
/CanadaESLZ-main <- Your environment, e.g. CanadaESLZ-main, CanadaESLZ-dev, etc.
/pubsec <- Your top level management root group name
/LandingZones
/Prod
/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_machinelearning.json
```
The JSON config file name is in one of the following two formats:
- [AzureSubscriptionGUID]\_[TemplateName].json
- [AzureSubscriptionGUID]\_[TemplateName]\_[DeploymentLocation].json
The subscription GUID is needed by the pipeline; since it's not available in the file contents, it is specified in the config file name.
The template name/type is a text fragment corresponding to a path name (or part of a path name) under the '/landingzones' top level path. It indicates which Bicep templates to run on the subscription. For example, the machine learning path is `/landingzones/lz-machinelearning`, so we remove the `lz-` prefix and use `machinelearning` to specify this type of landing zone.
The deployment location is the short name of an Azure deployment location, which may be used to override the `deploymentRegion` YAML variable. The allowable values can be determined from the `Name` column in the output of the command: `az account list-locations -o table`.
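A minimal sketch of how a pipeline step could decompose these config file names into their components (the helper name and regex are illustrative, not the repository's actual implementation):

```python
import re

# Hypothetical helper: split "[GUID]_[TemplateName](_[Location]).json" into parts.
PATTERN = re.compile(
    r"^(?P<subscription_guid>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
    r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12})"
    r"_(?P<template_name>[^_]+)"
    r"(?:_(?P<location>[^_]+))?\.json$"
)

def parse_config_filename(filename: str) -> dict:
    """Split a subscription config file name into its components."""
    match = PATTERN.match(filename)
    if not match:
        raise ValueError(f"unrecognized config file name: {filename}")
    return match.groupdict()

parts = parse_config_filename(
    "8b8e3f8a-2a64-4b0c-9d4e-1c1b2a3d4e5f_machinelearning_canadacentral.json"
)
# parts["template_name"] is "machinelearning"; since the "lz-" prefix was
# removed, the pipeline maps it back to /landingzones/lz-machinelearning.
```

The optional third segment overrides the deployment region only when present.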

# Azure Landing Zones for Canadian Public Sector
The purpose of this reference implementation is to guide Canadian Public Sector customers on building Landing Zones in their Azure environment. The reference implementation is based on the [Cloud Adoption Framework for Azure][cafLandingZones] and provides an opinionated implementation that enables ITSG-33 regulatory compliance by using the [NIST SP 800-53 Rev. 4][nist80053r4Policyset] and [Canada Federal PBMM][pbmmPolicyset] Regulatory Compliance Policy Sets.
The architecture supports up to **Treasury Board of Canada Secretariat (TBS) Cloud Profile 3** - Cloud Only Applications. This profile is applicable to Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) with the following [characteristics][cloudUsageProfiles]:
* Cloud-based services hosting sensitive (up to Protected B) information
* No direct system to system network interconnections required with GC data centers
This document describes the architecture and design decisions for building **[Protected B][pbmm] capable** Azure Landing Zones.
---
## Table of Contents
1. [Key Decisions](#1-key-decisions)
2. [Security Controls](#2-security-controls)
3. [Management Groups](#3-management-groups)
4. [Identity](#4-identity)
5. [Network](#5-network)
6. [Logging](#6-logging)
7. [Tagging](#7-tagging)
8. [Archetypes](#8-archetypes)
9. [Automation](#9-automation)
---
## 1. Key Decisions
The table below outlines the key decisions each department must consider as part of adopting Azure. This list is provided as guidance and is not meant to be exhaustive.
| Topic | Scenario | Ownership | Complexity to change | Decision |
| --- | --- | --- | --- | --- |
| Private IP range for Cloud | Based on [RFC 1918][rfc1918] and [RFC 6598][rfc6598], to allow seamless routing for hybrid connectivity. | | | |
| Ground to Cloud Network Connectivity | Use either: Express Route; or SCED for hybrid connectivity. | | | |
| Firewalls | Central firewalls for all egress and non-HTTP/S ingress traffic to VMs. | | | |
| Spoke Network Segmentations | Subnet Addressing & Network Security Groups. | | | |
| Application Gateway + WAF | Application Gateway per spoke subscription to allow direct delivery for HTTP/S traffic. WAF and routing rules are managed by CloudOps. | | | |
| Security Incident & Monitoring | Centralized security monitoring. | | | |
| Logging (IaaS & PaaS) | Centralized Log Analytics Workspace with RBAC permissions to allow resource owners to access resource logs & Security Monitor to access all logs. | | | |
| RBAC / IAM | Roles, security groups and access control for management groups, subscriptions & resource groups. | | | |
| Service Principals (App Registration) | Service Principals are required for automation and will require elevated permissions for role assignments. | | | |
| VM Patching | Centralized Patch Management with either Azure native tools or non-Azure solutions. | | | |
| Tag Governance | Tags that are required on all subscriptions, resource groups and resources to provide resource aggregation for reporting and cost management. | | | |
---
## 2. Security Controls
### 2.1 Scope
Departments are targeting workloads with **Unclassified**, **Protected A** and **Protected B** data classifications in Azure. These classifications are based on [ITSG-33][itsg33] which is derived from [NIST SP 800-53 Revision 4][nist80053R4].
Guardrails in Azure are deployed through [Azure Policy](https://docs.microsoft.com/azure/governance/policy/overview). Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. Policy definitions for these common use cases are already available in your Azure environment as built-ins to help you get started.
Azure Landing Zones for Canadian Public Sector is configured with a set of built-in Azure Policy Sets. These can be further extended or removed as required by the department through automation:
* [Canada Federal PBMM Policy Set][pbmmPolicySet]
* [NIST SP 800-53 Revision 4 Policy Set][nist80053R4policySet]
* [NIST SP 800-53 Revision 5 Policy Set][nist80053R5policySet]
* [Azure Security Benchmark][asbPolicySet]
* [CIS Microsoft Azure Foundations Benchmark 1.3.0][cisMicrosoftAzureFoundationPolicySet]
* [FedRAMP Moderate][fedrampmPolicySet]
* [HIPAA / HITRUST 9.2][hipaaHitrustPolicySet]
> **Note**: The built-in policy sets are used as-is to ensure future improvements from Azure Engineering teams are automatically incorporated into the Azure environment.
Azure Policy Compliance dashboard provides an up-to-date compliance view across the Azure environment. Non-compliant resources can then be addressed through automated remediations, exemptions or through appropriate teams within the department.
![Azure Policy Compliance](media/architecture/policy-compliance.jpg)
### 2.2 Custom Policy Sets
Custom policy sets have been designed to increase compliance for logging, networking & tagging requirements. These include:
* **Azure Defender for Azure Services**
* A vulnerability assessment solution should be enabled on your virtual machines
* Deploy Azure Defender for Azure Container Registry
* Deploy Azure Defender for Azure Kubernetes Service
* Deploy Azure Defender for Azure Key Vault
* Deploy Azure Defender for Azure App Services
* Deploy Azure Defender for Azure Resource Manager
* Deploy Azure Defender for DNS
* Deploy Azure Defender for Open-source relational databases
* Deploy Azure Defender for Azure SQL Databases
* Deploy Azure Defender for SQL on Virtual Machines
Deploy Azure Defender for Virtual Machines
* Deploy Azure Defender for Storage Account
* Deploy Advanced Data Security on SQL servers
* Deploy Advanced Threat Protection on Storage Accounts
* Deploy Threat Detection on SQL servers
* Vulnerabilities in security configuration on your machines should be remediated
* Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
* Vulnerabilities on your SQL databases should be remediated
* Configure machines to receive the Qualys vulnerability assessment agent
* **Log Analytics for Azure Services**
* [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
* Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
* Audit Log Analytics workspace for VM - Report Mismatch
* Configure diagnostic settings for storage accounts to Log Analytics workspace
* Deploy Dependency agent for Linux virtual machine scale sets
* Deploy Dependency agent for Linux virtual machines
* Deploy Dependency agent for Windows virtual machine scale sets
* Deploy Dependency agent for Windows virtual machines
* Deploy Diagnostic Settings for Automation Account to Log Analytics Workspaces
* Deploy Diagnostic Settings for Azure Application Gateway to Log Analytics Workspaces
* Deploy Diagnostic Settings for Azure Cognitive Services to Log Analytics Workspaces
* Deploy Diagnostic Settings for Azure Container Registry to Log Analytics Workspaces
* Deploy Diagnostic Settings for Azure Machine Learning workspaces to Log Analytics Workspaces
* Deploy Diagnostic Settings for Bastion Hosts to Log Analytics Workspaces
* Deploy Diagnostic Settings for Batch Account to Log Analytics workspace
* Deploy Diagnostic Settings for Data Factory to Log Analytics Workspaces
* Deploy Diagnostic Settings for Data Lake Analytics to Log Analytics workspace
* Deploy Diagnostic Settings for Data Lake Storage Gen1 to Log Analytics workspace
* Deploy Diagnostic Settings for Databricks to Log Analytics Workspaces
* Deploy Diagnostic Settings for Event Hub to Log Analytics workspace
* Deploy Diagnostic Settings for FHIR R4 to Log Analytics Workspaces
* Deploy Diagnostic Settings for FHIR STU3 to Log Analytics Workspaces
* Deploy Diagnostic Settings for Key Vault to Log Analytics workspace
* Deploy Diagnostic Settings for Logic Apps to Log Analytics workspace
* Deploy Diagnostic Settings for Network Security Groups to Log Analytics Workspaces
* Deploy Diagnostic Settings for Recovery Services Vault to Log Analytics workspace for resource specific categories.
* Deploy Diagnostic Settings for SQL Managed Instance to Log Analytics Workspaces
* Deploy Diagnostic Settings for SQLDB Database to Log Analytics Workspaces
* Deploy Diagnostic Settings for Search Services to Log Analytics workspace
* Deploy Diagnostic Settings for Service Bus to Log Analytics workspace
* Deploy Diagnostic Settings for Stream Analytics to Log Analytics workspace
* Deploy Diagnostic Settings for Synapse workspace to Log Analytics Workspaces
* Deploy Diagnostic Settings for Subscriptions to Log Analytics Workspaces
* Deploy Diagnostic Settings for Virtual Network to Log Analytics Workspaces
* Deploy Log Analytics agent for Linux virtual machine scale sets
* Deploy Log Analytics agent for Linux VMs
* Deploy Log Analytics agent for Windows virtual machine scale sets
* Deploy Log Analytics agent for Windows VMs
* Public IP addresses should have resource logs enabled for Azure DDoS Protection Standard
* [Custom] Audit diagnostic setting – Logs
* **Azure Kubernetes Service**
* Deploy Azure Policy Add-on to Azure Kubernetes Service clusters
* Kubernetes cluster pod security restricted standards for Linux-based workloads
* Kubernetes cluster pod security baseline standards for Linux-based workloads
* **Network**
* Network interfaces should not have public IPs
* Audit for missing UDR on subnets
* **Tag Governance**
### 2.3 Policy Remediations
Resources that are non-compliant can be brought into a compliant state through [Remediation][policyRemediation]. Remediation is accomplished by instructing Azure Policy to run the deployment instructions of the assigned policy on your existing resources and subscriptions, whether that assignment is to a management group, a subscription, a resource group, or an individual resource.
**Non-compliant resources**
![Azure Policy Remediation](media/architecture/policy-remediation.jpg)
**Remediation history**
![Azure Policy Remediation](media/architecture/policy-remediation-status.jpg)
### 2.4 Azure Security Center Integration
The benefit of aligning to built-in policy sets is the Azure Security Center integration. Azure Security Center automatically detects the assigned built-in policy sets and builds a Regulatory Compliance dashboard for each regulatory standard. This compliance view is applicable to NIST SP 800-53 R4, NIST SP 800-53 R5, Canada Federal PBMM, Azure Security Benchmark, Azure CIS 1.3.0, HIPAA/HITRUST and FedRAMP Moderate.
**Compliance View**
The integration is based on the scope at which the policy sets are assigned, and those assignments are inherited by all subscriptions within that scope. No manual configuration is required in Azure Security Center.
![Azure Security Center - Security Policy](media/architecture/asc-security-policy.jpg)
The compliance report outlines the Azure Policies, the resource types, the number of resources, and the compliance status. Data is grouped by control groups within each regulatory standard. The data can also be exported as PDF or CSV as needed.
> It is not possible to exclude control groups.
![Azure Security Center - Regulatory Compliance](media/architecture/asc-regulatory-compliance.jpg)
### 2.5 Compliance Data Export
For custom reporting requirements, the raw compliance data can be exported using [Azure Resource Graph](https://docs.microsoft.com/azure/governance/resource-graph/overview). This export enables additional analysis and alignment with operational requirements. A custom data export pipeline and processes will be needed to operationalize the dataset. The primary queries to access the data are:
```
securityresources
| where type == "microsoft.security/regulatorycompliancestandards"
securityresources
| where type == "microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols"
securityresources
| where type == "microsoft.security/regulatorycompliancestandards/regulatorycompliancecontrols/regulatorycomplianceassessments"
```
---
## 3. Management Groups
[Management Groups](https://docs.microsoft.com/azure/governance/management-groups/overview) enable organizations to efficiently manage access, governance and compliance across all subscriptions. Azure management groups provide a level of scope above subscriptions. Subscriptions are organized into containers called "management groups", and Azure Policies and role-based access control are applied to the management groups. All subscriptions within a management group automatically inherit the settings applied to the management group.
Management groups give you enterprise-grade management at a large scale no matter what type of subscriptions you might have. All subscriptions within a single management group must trust the same Azure Active Directory tenant.
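The inheritance behaviour described above can be sketched in a few lines; this is an illustrative model of the hierarchy, not an Azure API:

```python
from dataclasses import dataclass, field

@dataclass
class ManagementGroup:
    """Toy model: policy assignments flow down from parent to children."""
    name: str
    policy_assignments: list = field(default_factory=list)
    children: list = field(default_factory=list)
    parent: "ManagementGroup | None" = None

    def add_child(self, child: "ManagementGroup") -> "ManagementGroup":
        child.parent = self
        self.children.append(child)
        return child

    def effective_policies(self) -> list:
        """Assignments applied here plus everything inherited from ancestors."""
        inherited = self.parent.effective_policies() if self.parent else []
        return inherited + self.policy_assignments

# Hierarchy names mirror the structure described in this section.
root = ManagementGroup("pubsec", policy_assignments=["NIST SP 800-53 R4", "PBMM"])
landing_zones = root.add_child(ManagementGroup("LandingZones"))
prod = landing_zones.add_child(ManagementGroup("Prod", policy_assignments=["Prod guardrails"]))
# prod inherits the pubsec assignments plus its own.
```

A subscription placed under `Prod` would therefore be evaluated against every assignment from `pubsec` downward without any per-subscription configuration.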
Azure Landing Zones for Canadian Public Sector recommends the following Management Group structure. This structure can be customized based on your organization's requirements. Specifically:
* Landing Zones will be split into 3 groups of environments (DEV/TEST, QA, PROD).
* The Sandbox management group is used for any new subscriptions that will be created. This removes subscription sprawl from the Root Tenant Group and pulls all subscriptions under security compliance.
![Management Group Structure](media/architecture/management-group-structure.jpg)
Customers with an existing management group structure can consider merging the recommended structure into their existing one. Deploying the new structure side-by-side enables you to:
* Configure all controls in the new management group without impacting existing subscriptions.
* Migrate existing subscriptions one-by-one (or small batches) to the new management group to reduce the impact of breaking changes.
* Learn from each migration, apply policy exemptions, and reconfigure Policy assignment scope from pubsec to another scope that's appropriate.
> Management Group structure can be modified through [Azure Bicep template located in "management-groups" folder](../management-groups)
---
## 4. Identity
Azure Landing Zones for Canadian Public Sector assumes that Azure Active Directory has been provisioned and configured based on the department's requirements. It is important to check the following configurations for Azure Active Directory:
* License - Consider Azure AD Premium P2
* Multi-Factor Authentication - Enabled for all users
* Conditional Access Policies - Configured based on location & devices
* Privileged Identity Management (PIM) - Enabled for elevated access control.
* App Registration - Consider disabling for all users and have registrations created on-demand by CloudOps teams.
* Sign-In Logs - Logs are expected to flow to the Log Analytics workspace, with Sentinel used for threat hunting (Security Monitoring Team).
* Break-glass procedure - Process documented and implemented, including 2 break-glass accounts with different MFA devices & split passwords.
* Active Directory to Azure Active Directory synchronization - Are the identities synchronized, or are cloud-only accounts used?
### 4.1 Service Principal Accounts
To support the landing zone deployment, **one** service principal account will be used for management. This service principal account should be limited to platform automation, as it has Owner permission across all management group scopes. The Owner role is automatically assigned when management groups are created.
Additional service principal accounts must be created and scoped to child management groups, subscriptions or resource groups based on tasks that are expected of the service principal accounts.
### 4.2 User Accounts
It is common for user accounts to have access to an Azure environment with permanent permissions. Our recommendation is to limit permanent permissions and elevate roles using time-limited, MFA-verified access through Privileged Identity Management (Azure AD PIM).
All user accounts should be assigned to Security Groups and access should be granted to user accounts based on membership.
### 4.3 Recommendations for Management Groups
Access Control at Management Group scope enables management and oversight at scale. Permissions assigned at Management Group scopes are automatically inherited by all child resources, including child management groups, subscriptions, resource groups and resources. Therefore, it is an ideal scope for the following scenarios.
| Scenario | Permanent Assignment | On-Demand Assignment (through Azure AD PIM) |
| --- | --- | --- |
| Global Reader | [Reader](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#reader) | - |
| Governance | - | [Resource Policy Contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#resource-policy-contributor) |
| Log Management | [Log Analytics Reader](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#log-analytics-reader) | [Log Analytics Contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#log-analytics-contributor) |
| Security Management | [Security Reader](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#security-reader) | [Security Admin](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#security-admin) |
| User Management | - | [User Access Administrator](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#user-access-administrator) |
| Cost Management | [Billing Reader](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#billing-reader) | - |
### 4.4 Recommendations for Subscriptions
The table below provides the 3 generic roles that are commonly used in an Azure environment. Granular built-in roles can be used, based on the use case, to further limit access. Our recommendation is to assign the least-privileged role required for a person or service principal to complete their tasks.
Review the [Azure Built-In roles](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles) to evaluate applicability.
| Environment | Scenario | Considerations | Permanent Assignment | On-Demand Assignment (through Azure AD PIM)
| --- | --- | --- | --- | --- |
| All | Read Access | Permanent role assigned to all users who need access to the Azure resources. | [Reader](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#reader) | - |
| Dev/Test, QA | Manage Azure resources | Contributor role can deploy all Azure resources, however any RBAC assignments will require the permissions to be elevated to Owner.<br /><br />Alternative is to leverage DevOps Pipeline and the Service Principal Account with elevated permissions. | [Contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#contributor) | [Owner](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#owner) |
| Production | Manage Azure resources | No standing management permissions in Production.<br /><br />Owner role is only required for RBAC changes, otherwise, use Contributor role or another built-in role for all other operations. | - | [Contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#contributor) or [Owner](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#owner)
### 4.5 Recommendations for Resource Groups
Follow the same guidance as Subscriptions.
### 4.6 Recommendations for Resources
Due to overhead of access control and assignments, avoid assigning permissions per resource. Consider using Resource Group or Subscription scope permissions.
---
## 5. Network
The recommended network design achieves the purpose of hosting [**Protected B** workloads on Profile 3][pbmm] (cloud only). In preparation for a future connection to on-premises infrastructure, we've incorporated recommendations from SSC guidance ([video](https://www.youtube.com/watch?v=rQYyatlO0-k)) detailed at [https://github.com/canada-ca/Azure_LZBCA-AIZDB/tree/master/Network](https://github.com/canada-ca/Azure_LZBCA-AIZDB/tree/master/Network).
### IP Addresses
Both network designs will require 3 IP blocks:
* [RFC 1918][rfc1918] for Azure native-traffic (including IaaS and PaaS). Example: `10.18.0.0/16`
* [RFC 1918][rfc1918] for Azure Bastion. Example: `192.168.0.0/16`
* [RFC 6598][rfc6598] for department to department traffic through GCnet. Example: `100.60.0.0/16`
> This document will reference the example IP addresses above to illustrate network flow and configuration.
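A quick standard-library check that an address plan like the one above stays inside the intended private spaces and keeps the three blocks disjoint (the values are the examples from this section; the check itself is illustrative):

```python
import ipaddress

# The three example blocks from this section.
azure_native = ipaddress.ip_network("10.18.0.0/16")   # Azure-native traffic
bastion = ipaddress.ip_network("192.168.0.0/16")      # Azure Bastion
gcnet = ipaddress.ip_network("100.60.0.0/16")         # department-to-department (GCnet)

# RFC 1918 private address space.
rfc1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
assert any(azure_native.subnet_of(b) for b in rfc1918)
assert any(bastion.subnet_of(b) for b in rfc1918)

# The three blocks must be disjoint so routes never shadow one another.
blocks = [azure_native, bastion, gcnet]
assert all(not a.overlaps(b) for i, a in enumerate(blocks) for b in blocks[i + 1:])
```

Running a check like this before peering networks catches overlapping or misplaced ranges early, when they are still cheap to change.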
### Topology
The reference implementation provides two hub network topology designs:
1. [Hub Networking with Fortigate Firewalls](archetypes/hubnetwork-nva-fortigate.md) (departments must configure the firewalls)
2. [Hub Networking with Azure Firewalls](archetypes/hubnetwork-azfw.md) (pre-configured with firewall rules, DNS proxy and forced tunneling mode)
### Azure Bastion
Bastion [does not support User Defined Routes](https://docs.microsoft.com/azure/bastion/bastion-faq#udr) but can work with Virtual Machines on peered virtual networks as long as the [Network Security Groups allow][nsgAzureBastion] it and the user has the [required role-based access control](https://docs.microsoft.com/azure/bastion/bastion-faq#i-have-access-to-the-peered-vnet-but-i-cant-see-the-vm-deployed-there).
### Azure Application Gateway
Application Gateway [does not support default UDRs to an NVA](https://docs.microsoft.com/en-us/azure/application-gateway/configuration-infrastructure):
> "Any scenario where 0.0.0.0/0 needs to be redirected through any virtual appliance, a hub/spoke virtual network, or on-premise (forced tunneling) isn't supported for V2.".
Even though we could set UDRs to specific spoke IPs, we chose to place the Shared Application Gateway instances in the hub to avoid having to update UDRs each time a new spoke is added. Application Gateway will only have to have such an UDR if the department wants to configure Backends directly in the spoke, but still want to force that traffic via the firewall.
By default, via peering, Application Gateway will know the routes of the spokes, so adding a backend with an FQDN that resolves to an IP in the spoke will allow Application Gateway to reach that endpoint without traversing the firewall. That may be acceptable if the WAF features of Application Gateway are enabled.
### Private DNS Zones
Azure PaaS services use Private DNS Zones to map their fully qualified domain names (FQDNs) when Private Endpoints are used. Managing Private DNS Zones at scale requires additional configuration to ensure:
* All Private DNS Zones for private endpoints are created in the Hub Virtual Network.
* Private DNS Zones are prevented from being created in spoke subscriptions; they can only be created in the designated resource group in the Hub Subscription.
* Private endpoints are automatically mapped to the centrally managed Private DNS Zones.
The following diagram shows a typical high-level architecture for enterprise environments with central DNS resolution and name resolution for Private Link resources via Azure Private DNS. This topology provides:
* Name resolution from hub to spoke
* Name resolution from spoke to spoke
* Name resolution from on-premises to Azure (Hub & Spoke resources). Additional configuration is required to deploy DNS resolvers in the Hub Network & provide DNS forwarding from on-premises to Azure.
![Hub Managed DNS](media/architecture/hubnetwork-private-link-central-dns.png)
**Reference:** [Private Link and DNS integration at scale](https://docs.microsoft.com/azure/cloud-adoption-framework/ready/azure-best-practices/private-link-and-dns-integration-at-scale)
Reference implementation provides the following capabilities:
* Deploy Private DNS Zones to the Hub Networking subscription. Enable/disable via configuration.
* Azure Policy to block private zones from being created outside of the designated resource group in the Hub networking subscription.
* Azure Policy to automatically detect new private endpoints and add their A records to their respective Private DNS Zone.
* Support to ensure Hub managed Private DNS Zones are used when deploying archetypes.
The reference implementation does not deploy DNS servers (as virtual machines) in either the Hub or the Spokes for DNS resolution. It can:
* Leverage Azure Firewall's DNS Proxy where the Private DNS Zones are linked only to the Hub Virtual Network. DNS resolution for all spokes will be through the VIP provided by Azure Firewall.
* Link Private DNS Zones directly to the spoke virtual networks and use the [built-in DNS resolver in each virtual network](https://docs.microsoft.com/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances). Virtual networks in spoke subscriptions are configured through a Virtual Network Link for name resolution. DNS resolution is automatic once the Private DNS Zone is linked to the virtual network.
* Leverage DNS Servers on virtual machines that are managed by the department's IT.
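As a sketch of the mapping the automatic-registration policy performs, the snippet below pairs a few public PaaS FQDN suffixes with their documented privatelink Private DNS Zones; the zone names are the standard ones for these three services, while the helper function and its name are purely illustrative:

```python
from typing import Optional

# Documented public-suffix -> privatelink-zone pairs for three common services.
PRIVATELINK_ZONES = {
    "blob.core.windows.net": "privatelink.blob.core.windows.net",   # Storage (blob)
    "database.windows.net": "privatelink.database.windows.net",     # Azure SQL Database
    "vault.azure.net": "privatelink.vaultcore.azure.net",           # Key Vault
}

def private_dns_zone_for(fqdn: str) -> Optional[str]:
    """Return the hub-managed Private DNS Zone that should hold fqdn's A record."""
    for suffix, zone in PRIVATELINK_ZONES.items():
        if fqdn.endswith("." + suffix):
            return zone
    return None
```

Centralizing these zones in the hub means every spoke's private endpoint resolves through the same authoritative records, which is what makes spoke-to-spoke and hub-to-spoke name resolution consistent.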
### Spoke Landing Zone Networks
Following the nomenclature of [ITSG-22][itsg22], these would be the default subnets created in the spokes as part of new subscriptions.
* Presentation (PAZ) - frontend web servers (not exposed to the internet, using RFC1918 IPs that only receive traffic via the application delivery controllers or L7 firewalls in the PAZ).
* Application (RZ) - middleware application servers (only allow connections from the frontend).
* Data (HRZ) - backend servers (only allow connections from the application RZ).
* App Management Zone (OZ), an optional network for app management servers in the spoke.
* All zones would allow management traffic from the Management Access Zone (OZ).
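A zone layout like the one above can be sanity-checked with the standard library; the subnet values mirror the machine learning archetype parameters shown earlier, and the check itself is an illustrative sketch:

```python
import ipaddress

# Spoke address space and ITSG-22 zone subnets from the example configuration.
vnet = ipaddress.ip_network("10.4.0.0/16")
zones = {
    "oz": "10.4.1.0/25",    # Foundational Elements / management
    "paz": "10.4.2.0/25",   # Presentation
    "rz": "10.4.3.0/25",    # Application
    "hrz": "10.4.4.0/25",   # Data
}
subnets = {name: ipaddress.ip_network(prefix) for name, prefix in zones.items()}

# Every zone subnet must fall inside the spoke address space...
assert all(s.subnet_of(vnet) for s in subnets.values())

# ...and the zones must not overlap one another.
nets = list(subnets.values())
assert all(not a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
```

Each /25 holds 128 addresses, of which Azure reserves 5 per subnet, leaving 123 usable addresses per zone.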
---
## 6. Logging
### 6.1 Scope
Microsoft's recommendation is [one central Log Analytics workspace](https://docs.microsoft.com/azure/azure-monitor/logs/design-logs-deployment#important-considerations-for-an-access-control-strategy) that will be shared by IT, Security Analysts and Application Teams.
The design and recommendation are based on the following requirements:
* Collect all logs from VMs and PaaS services.
* Central Logging for security monitoring.
* Limit data access based on resource permissions granted to individuals and teams.
* Tune alerting based on environments (e.g., fewer alerts from non-production environments).
![Log Analytics Workspace](media/architecture/log-analytics-workspace.jpg)
This approach offers:
* Streamlined log correlation across multiple environments (Dev, QA, Prod) & lines of business.
* Avoids Log Analytics workspace sprawl and streamlines tenant-wide governance through Azure Policy.
* Integration with compliance standards such as NIST 800-53 R4 and Protected B built-in Policy Sets to verify log collection compliance.
* Integration with Azure Security Center.
* Data access to logs is controlled through RBAC, where Security Monitoring teams can access all data while line-of-business teams access logs of the resources they manage.
* Cost optimization and better pricing at larger volume through capacity reservations.
* Tunable based on the types of logs and data retention as data ingestion grows.
* Modifiable as CloudOps and Cloud Security Monitoring evolves.
The workspace will be configured as:
* Workspace will be centrally managed and deployed in the **pubsecPlatform** management group. Workspace is managed by CloudOps team.
* Workspace will have the access mode set as use resource or workspace permissions.
* Data Retention set to **2 years** for all data types (e.g., Security Events, syslog).
* Log Analytics Workspace will be stored in **Canada Central**.
As the logging strategy evolves, Microsoft recommends considering the following improvements:
* To optimize cost, configure [data retention periods by data type](https://docs.microsoft.com/azure/azure-monitor/logs/manage-cost-storage#retention-by-data-type).
* To optimize cost, collect only the logs that are required for operations and security monitoring. Current requirement is to collect all logs.
* For data retention greater than 2 years, export logs to Azure Storage and [leverage immutable storage](https://docs.microsoft.com/azure/storage/blobs/storage-blob-immutable-storage) with WORM policy (Write Once, Read Many) to make data non-erasable and non-modifiable.
* Use Security Groups to control access to all or per-resource logs.
### 6.2 Design considerations for multiple Log Analytics workspaces
| Rationale | Applicability |
| --- | --- |
| Require log data stored in specific regions for data sovereignty or compliance reasons. | Not applicable to current environment since all Azure deployments will be in Canada. |
| Avoid outbound data transfer charges by having a workspace in the same region as the Azure resources it manages. | Not applicable to current environment since all Azure deployments will be in Canada Central. |
| Manage multiple departments or business groups, and need each to see their own data, but not data from others. Also, there is no business requirement for a consolidated cross department or business group view. | Not applicable since security analysts require cross department querying capabilities, but each department or Application Team can only see their data. Data access control is achieved through role-based access control. |
**Reference**: [Designing your Azure Monitor Logs deployment](https://docs.microsoft.com/en-ca/azure/azure-monitor/logs/design-logs-deployment#important-considerations-for-an-access-control-strategy)
### 6.3 Access Control - Use resource or workspace permissions
With Azure role-based access control (Azure RBAC), you can grant users and groups the access they need to work with monitoring data in a workspace. This allows you to align with your IT organization's operating model while using a single workspace to store the data collected from all your resources.
For example, when you grant access to the team responsible for infrastructure services hosted on Azure virtual machines (VMs), they will have access only to the logs generated by those VMs. This follows the **resource-context** log model: every log record emitted by an Azure resource is automatically associated with that resource. Logs are forwarded to a central workspace that respects scoping and Azure RBAC based on the resources.
**Reference**: [Designing your Azure Monitor Logs deployment - Access Control](https://docs.microsoft.com/en-ca/azure/azure-monitor/logs/design-logs-deployment?WT.mc_id=modinfra-11671-pierrer#access-control-overview)
| Scenario | Log Access Mode | Log Data Visibility |
| --- | --- | --- |
| Security Analyst with [Log Analytics Reader or Log Analytics Contributor](https://docs.microsoft.com/en-ca/azure/azure-monitor/logs/manage-access#manage-access-using-azure-permissions) RBAC role assignment. | Access the Log Analytics workspace directly through Azure Portal or through Azure Sentinel. | All data in the Log Analytics Workspace. |
| IT Teams responsible for one or more line of business with permissions to one or more subscriptions, resource groups or resources with at least Reader role. | Access the logs through the resource's Logs menu for the Azure resource (i.e., VM or Storage Account or Database). | Only to Azure resources based on RBAC. User can query logs for specific resources, resource groups, or subscription they have access to from any workspace but can't query logs for other resources. |
| Application Team with permissions to one or more subscriptions, resource groups or resources with at least Reader role. | Access the logs through the resource's Logs menu for the Azure resource (i.e., VM or Storage Account or Database). | Only to Azure resources based on RBAC. User can query logs for specific resources, resource groups, or subscription they have access to from any workspace but can't query logs for other resources. |
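The first scenario in the table can be sketched in Bicep as a role assignment at workspace scope. This is a hedged illustration, not part of the reference implementation: the group object ID and workspace name are placeholders, and the role definition GUID is assumed to be the built-in Log Analytics Reader role:

```bicep
// Placeholder: object ID of the security analyst Azure AD group.
param securityAnalystGroupObjectId string

resource workspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' existing = {
  name: 'log-central-canadacentral'  // illustrative workspace name
}

// '73c42c96-874c-492b-b04d-ab87d138a893' is assumed to be the built-in
// 'Log Analytics Reader' role definition ID.
resource analystAccess 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
  name: guid(workspace.id, securityAnalystGroupObjectId, 'log-analytics-reader')
  scope: workspace
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '73c42c96-874c-492b-b04d-ab87d138a893')
    principalId: securityAnalystGroupObjectId
    principalType: 'Group'
  }
}
```

The IT and Application Team scenarios need no workspace-scope assignment: their existing Reader access on subscriptions, resource groups, or resources grants resource-context log access automatically.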
---
## 7. Tagging
Organize cloud assets to support governance, operational management, and accounting requirements. Well-defined metadata tagging conventions help to quickly locate and manage resources. These conventions also help associate cloud usage costs with business teams via chargeback and show back accounting mechanisms.
A tagging strategy includes business and operational details:
* The business side of this strategy ensures that tags include the organizational information needed to identify the teams and the business owners who are responsible for resource costs.
* The operational side ensures that tags include information that IT teams use to identify the workload, application, environment, criticality, and other information useful for managing resources.
Tags can be assigned to resources using 3 approaches:
| Approach | Mechanism |
| --- | --- |
| Automatically assigned from the Subscription tags | Azure Policy: Inherit a tag from the subscription if missing |
| Automatically assigned from the Resource Group tags | Azure Policy: Inherit a tag from the resource group if missing |
| Explicitly set on a Resource | Azure Portal, ARM templates, CLI, PowerShell, etc.<br /><br />**Note:** It's recommended to inherit tags that are required by the organization through Subscription & Resource Group. Per resource tags are typically added by Application Teams for their own purposes. |
Azure Landing Zones for Canadian Public Sector recommends the following tagging structure.
> The tags can be modified through Azure Policy. Modify [Tag Azure Policy definition configuration](../policy/custom/definitions/policyset/Tags.parameters.json) to set the required Resource Group & Resource tags.
![Tags](media/architecture/tags.jpg)
To achieve this design, built-in and custom Azure Policies are used to automatically propagate tags from Resource Group, validate mandatory tags at Resource Groups and to provide remediation to back-fill resources with missing tags. Azure Policies used to achieve this design are:
* [Built-in] Inherit a tag from the resource group if missing
* [Custom] Require a tag on resource groups (1 policy per tag)
* [Custom] Audit missing tag on resource (1 policy per tag)
This approach ensures that:
* All resource groups contain the expected tags; and
* All resources in those resource groups will automatically inherit those tags.
This helps remove deployment friction by eliminating the explicit tagging requirement per resource. Tags can be overridden per resource if required.
*We chose custom policies so that they can be grouped in a policy set (initiative) and have unique names to describe their purpose.*
**Design Considerations**
* Only one policy can update a tag per deployment. Therefore, to set up automatic assignments, the Azure Policy must be created at either Subscription or Resource Group scope, not both scopes. This rule is applied per tag. **This reference implementation has chosen to use Resource Group tags only.**
* Do not enter names or values that could make your resources less secure or that contain personal/sensitive information because tag data will be replicated globally.
* A maximum of 50 tags can be assigned per subscription, resource group, or resource.
---
## 8. Archetypes
| Archetype | Design | Documentation |
| --- | --- | --- |
| **Central Logging** | ![Archetype: Central Logging](media/architecture/archetype-logging.jpg) | [Archetype definition](archetypes/logging.md) |
| **Generic Subscription** | ![Archetype: Generic Subscription](media/architecture/archetype-generic-subscription.jpg) | [Archetype definition](archetypes/generic-subscription.md) |
| **Machine Learning** | ![Archetype: Machine Learning](media/architecture/archetype-machinelearning.jpg) | [Archetype definition](archetypes/machinelearning.md) |
| **Healthcare** | ![Archetype: Healthcare](media/architecture/archetype-healthcare.jpg) | [Archetype definition](archetypes/healthcare.md) |
---
## 9. Automation
Three principles guide the automation of the Azure Landing Zones for Canadian Public Sector design:
* Start with Automation – We must automate all configurations. There will be activities that are needed once or twice, but those too should be automated so that they can be applied consistently in many tenants. Procedures that don't have a reasonable means to automate should be documented as manual steps.
* Reduce security surface – Automation accounts can have broad access control and we must limit the permissions when reasonably possible. Start with least-privilege accounts as described in this document. Least-privilege accounts will reduce the attack surface and create separation of duty.
* Integrate with native services and capabilities – Reduce the number of tools used for automation and favor built-in capabilities offered by Azure.
### 9.1 Tools
Azure DevOps Repository, Azure DevOps Pipelines, Azure CLI, Bicep, and ARM Templates are used to define the environment as configuration and deploy that configuration to a target tenant. All services and tools are supported by Microsoft.
| Tool | Purpose |
| --- | --- |
| Azure DevOps Repository | Git repository for versioning and single source of truth for automation scripts. |
| Azure DevOps Pipelines | Multi-stage, YAML-based orchestration to deploy Azure artifacts. |
| Azure CLI | Command-line interface used for deployment operations across multiple scopes like management groups, subscriptions, and resource groups. |
| Bicep | A domain-specific language for authoring Azure deployment scripts. Azure Bicep is the primary language used for automation. |
| ARM Templates | JSON-based configuration that is used for authoring Azure deployment scripts. Bicep scripts will be converted to ARM templates before execution. ARM Templates will only be used when features are missing from Bicep. |
### 9.2 Structure
The repository is organized into focus areas such as Management Groups, Landing Zones, Platform, Policy and Roles. Each focus area can be deployed independently or orchestrated through a pipeline.
| Folder | Purpose |
| --- | --- |
| .pipelines | Orchestration to configure a target environment. Since each component in this repository has its own lifecycle (i.e., management groups rarely change, but landing zones change frequently), orchestration is separated based on lifecycle requirements. This approach allows for independent deployment of Azure capabilities. |
| azresources | Azure Resource definitions. |
| config | Environment specific configuration used by the Azure DevOps Pipelines. |
| landingzones | Deployment templates required for any landing zones. These can be converted to an Azure Blueprint if required. |
| management-groups | Deployment template to create the management group structure. |
| policy | Custom Policy Definitions & Built-in/Custom Policy Assignments. Approach taken is to use the built-in policies & policy sets (initiatives) and only build custom policies when one is required. Built-in policies will not be converted to custom policies for the purpose of version control. |
| roles | Custom role definitions |
| schemas | Schema definition for landing zone parameter files |
| tests | Unit & integration tests |
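The folder structure above composes at deployment time: landing zone templates consume reusable resource modules. A hypothetical sketch of that composition, in which the module path, resource names, and parameters are all illustrative rather than actual files in this repository:

```bicep
// Hypothetical landing zone template composing a reusable module from the
// azresources folder; path, names, and parameters are illustrative only.
targetScope = 'subscription'

// Resource ID of the central Log Analytics workspace (from the logging landing zone).
param logAnalyticsWorkspaceResourceId string

resource networkingRG 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'rg-networking'
  location: 'canadacentral'
}

module spokeVnet '../azresources/network/spoke-vnet.bicep' = {
  name: 'deploy-spoke-vnet'
  scope: networkingRG
  params: {
    logAnalyticsWorkspaceResourceId: logAnalyticsWorkspaceResourceId
  }
}
```

Because each landing zone is a subscription-scoped template, the same structure can be validated with `az bicep build` in pull request checks and deployed by the subscription pipeline.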
### 9.3 Azure DevOps Pipelines
The following pipelines are used to configure an Azure environment.
![Azure DevOps Pipelines](media/architecture/ado-pipelines.jpg)
All pipelines are in **.pipelines/** folder.
Pipelines are stored as YAML definitions in Git and imported into Azure DevOps Pipelines. This approach allows for portability and change tracking. To import a pipeline:
1. Go to Pipelines
2. New Pipeline
3. Choose Azure Repos Git
4. Select Repository
5. Select Existing Azure Pipeline YAML file
6. Identify the pipeline using the table below and add.
Use the [onboarding guide for Azure DevOps](../ONBOARDING_GUIDE_ADO.md) to configure each pipeline.
> Imported pipelines should be renamed to match the names in the table.
| Pipeline | YAML Definition | Pipeline Name | Purpose | Service Principal Account | Variables |
| --- | --- | --- | --- | --- | --- |
| Management Group | management-groups.yml | management-groups-ci | Deploys management group structure to a tenant. | spn-azure-platform-ops | None |
| Roles | roles.yml | roles-ci | Configures custom role definitions. | spn-azure-platform-ops | None |
| Azure Policy | policy.yml | policy-ci | Deploys policy definitions & assignments at Management Group scope. | spn-azure-platform-ops | None |
| Platform - Logging | platform-logging.yml | platform-logging-ci | Configures a Logging Landing Zone that will be used by all landing zones for managing their logs. | spn-azure-platform-ops | None |
| Platform – Hub Networking using NVAs | platform-connectivity-hub-nva.yml | platform-connectivity-hub-nva-ci | Configures Hub Networking with Fortigate Firewalls. | spn-azure-platform-ops | None |
| Platform – Hub Networking with Azure Firewall - Firewall Policy | platform-connectivity-hub-azfw-policy.yml | platform-connectivity-hub-azfw-policy-ci | Configures Azure Firewall Policy. A policy contains firewall rules and firewall configuration such as enabling DNS Proxy. Firewall policies can be updated independently of Azure Firewall. | spn-azure-platform-ops | None |
| Platform – Hub Networking with Azure Firewall | platform-connectivity-hub-azfw.yml | platform-connectivity-hub-azfw-ci | Configures Hub Networking with Azure Firewall. | spn-azure-platform-ops | None |
| Subscriptions | subscription.yml | subscription-ci | Configures a new subscription based on the archetype defined in the configuration file name. | spn-azure-platform-ops | None |
| Pull Request Validation | pull-request-check.yml | pull-request-validation-ci | Checks for breaking changes to Bicep templates & parameter schemas prior to merging the change to the main branch. This pipeline must be configured as a check for the `main` branch. | spn-azure-platform-ops | None |
### 9.4 Release Process
By using gates, approvals, and manual intervention, you can take full control of your releases to meet a wide range of deployment requirements. Typical scenarios where approvals, gates, and manual intervention are useful include the following:
| Scenario | Feature(s) to use |
| --- | --- |
| A user must manually validate the change request and approve the deployment to a certain stage. | [Pre-deployment approvals](https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals?view=azure-devops) |
| A user must manually sign off after deployment before the release is triggered to other stages. | [Post-deployment approvals](https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals?view=azure-devops) |
| A team wants to ensure there are no active issues in the work item or problem management system before deploying a build to a stage. | [Pre-deployment gates](https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates?view=azure-devops) |
| A team wants to ensure there are no reported incidents after deployment, before triggering a release. | [Post-deployment gates](https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates?view=azure-devops) |
| After deployment, a team wants to wait for a specified time before prompting users to sign off. | [Post-deployment gates](https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates?view=azure-devops) and [post-deployment approvals](https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals?view=azure-devops) |
| During deployment, a user must manually follow specific instructions and then resume the deployment. | [Manual Intervention](https://docs.microsoft.com/azure/devops/pipelines/release/deploy-using-approvals?view=azure-devops#configure-maninter) or [Manual Validation](https://docs.microsoft.com/azure/devops/pipelines/release/deploy-using-approvals?view=azure-devops#view-approvals) |
| During deployment, a team wants to prompt users to enter a value for a parameter used by the deployment tasks or allow users to edit the release. | [Manual Intervention](https://docs.microsoft.com/azure/devops/pipelines/release/deploy-using-approvals?view=azure-devops#configure-maninter) or [Manual Validation](https://docs.microsoft.com/azure/devops/pipelines/release/deploy-using-approvals?view=azure-devops#view-approvals) |
| During deployment, a team wants to wait for monitoring or information portals to detect any active incidents, before continuing with other deployment jobs. | Planned, but not yet implemented for YAML pipelines |
You can combine all three techniques within a release pipeline to fully achieve your own deployment requirements.
### 9.5 Manual Validation
Manual validation can be done in one of two ways:
1. Add an agentless (server) job before the existing pipeline job(s) where you want to enforce pre-deployment user validation.
2. Create an Environment (or multiple environments) in your Azure DevOps project where you can specify pre-deployment user validations via “Approvals and checks”.
We will focus on the second option, as it allows for the following additional types of approvals and checks:
![Azure DevOps - Checks](media/architecture/ado-approvals-checks.jpg)
Steps to implement user validation (approval) check:
1. Create an Environment named after the branch (e.g. “main”, “sandbox”) you want to protect. You can do this manually through the web UI or by running the pipeline (if the environment does not exist, it will be created).
2. In the web UI, navigate to Pipelines | Environments, select the environment corresponding to the branch you want to protect, and select “Approvals and checks” from the context menu.
3. Select the “Approval” option to add a new user validation approval.
4. Add user(s)/group(s) to the “Approvers” field. The approval check requires approval from all listed users/groups; for a group, any one member is sufficient. Note that you may use Azure DevOps and Azure Active Directory groups, and may want to do so to minimize the administrative overhead of managing individual users' roles and responsibilities.
5. Under “Advanced” options, decide if you want to allow users in the Approvers list to approve their own pipeline runs.
6. Under “Control options”, set an appropriate “Timeout” after which approval requests will expire. The default is 30 days; you may wish to reduce this time window.
[itsg33]: https://www.cyber.gc.ca/en/guidance/it-security-risk-management-lifecycle-approach-itsg-33
[itsg22]: https://www.cyber.gc.ca/sites/default/files/publications/itsg-22-eng.pdf
[pbmm]: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/cloud-services/government-canada-security-control-profile-cloud-based-it-services.html
[cloudUsageProfiles]: https://github.com/canada-ca/cloud-guardrails/blob/master/EN/00_Applicable-Scope.md
[rfc1918]: https://tools.ietf.org/html/rfc1918
[rfc6598]: https://tools.ietf.org/html/rfc6598
[nist80053r4]: https://csrc.nist.gov/publications/detail/sp/800-53/rev-4/archive/2015-01-22
[nist80053r4Policyset]: https://docs.microsoft.com/azure/governance/policy/samples/nist-sp-800-53-r4
[nist80053r5Policyset]: https://docs.microsoft.com/azure/governance/policy/samples/nist-sp-800-53-r5
[pbmmPolicyset]: https://docs.microsoft.com/azure/governance/policy/samples/canada-federal-pbmm
[asbPolicySet]: https://docs.microsoft.com/security/benchmark/azure/overview
[cisMicrosoftAzureFoundationPolicySet]: https://docs.microsoft.com/azure/governance/policy/samples/cis-azure-1-3-0
[fedrampmPolicySet]: https://docs.microsoft.com/azure/governance/policy/samples/fedramp-moderate
[hipaaHitrustPolicySet]: https://docs.microsoft.com/azure/governance/policy/samples/hipaa-hitrust-9-2
[cafLandingZones]: https://docs.microsoft.com/azure/cloud-adoption-framework/ready/landing-zone/
[policyRemediation]: https://docs.microsoft.com/azure/governance/policy/how-to/remediate-resources
[nsgAzureBastion]: https://docs.microsoft.com/azure/bastion/bastion-nsg#apply