# Azure Pipelines
Brief notes and considerations when automating infrastructure with Terraform and Azure DevOps.
## Overview
### Branch Triggered
Please note:

- The CI pipeline does not run for `main` or `production`, because these steps run in pull requests, which are required before merging.
- The CD pipeline does not check for drift, because drift is already checked in the pull request.
| Pipeline | Branch | Terraform Backend | Detects Drift | Deploys |
|---|---|---|---|---|
| `ci.yaml` | `feat/*`, `fix/*` | local | No | No |
| `cd.yaml` | `main`, `production` | Azure Storage | No | 🚀 Yes |
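As a rough sketch, the branch filters for `ci.yaml` might look like the snippet below (the exact filter lists in this repo may differ); `cd.yaml` would mirror it with `main` and `production` in the `include` list.

```yaml
# ci.yaml – hedged sketch: run only on short-lived feature/fix branches
trigger:
  branches:
    include:
      - feat/*
      - fix/*
```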
### Pull Request Triggered
The following pipelines run the CI checks and configuration drift detection, and post the results back to the pull request. See example →
| Pipeline | PR Trigger | Detects Drift | Deploys |
|---|---|---|---|
| `pr-main.yaml` | `main` | Yes | No |
| `pr-production.yaml` | `production` | Yes | No |
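For a GitHub-hosted repository, such a pull request trigger can be expressed with the `pr:` keyword, roughly as in the sketch below (an assumption, not necessarily this repo's exact configuration); for Azure Repos Git, the equivalent is configured through branch policies instead.

```yaml
# pull request pipeline – hedged sketch: validate PRs targeting the long-lived branches
trigger: none          # never run on direct pushes
pr:
  branches:
    include:
      - main
      - production
```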
### Scheduled Triggered
Checks regularly for any manual changes made outside of Infrastructure as Code.
| Pipeline | Trigger | Detects Drift | Deploys |
|---|---|---|---|
| `schedule-drift.yaml` | Scheduled nightly | Yes | No |
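A nightly schedule in Azure Pipelines is defined with the `schedules` key; the snippet below is only a sketch of how `schedule-drift.yaml` could be wired up (the cron expression and branch list are placeholders). Drift itself is typically surfaced by `terraform plan -detailed-exitcode`, which exits with code 2 when changes are detected.

```yaml
# schedule-drift.yaml – hedged sketch of a nightly drift check
schedules:
  - cron: "0 2 * * *"          # 02:00 UTC every night (placeholder time)
    displayName: Nightly drift detection
    branches:
      include:
        - main
        - production
    always: true               # run even when there are no new commits

trigger: none                  # scheduled runs only, no push trigger
```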
## Security Considerations
### Terraform State in Azure Blob Storage
Terraform state is always in plain text, which is why it is stored in Azure Blob Storage with public access disabled.
- This storage account is NOT managed by Terraform.
- This project prefers Shared Access Signature (SAS) tokens over access keys in order to:
  - Apply the principle of least privilege
  - Use short-lived tokens for CI/CD, because we trust headless actors less (see the sketch below)
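For illustration, a short-lived SAS token scoped to the state container could be issued in a pipeline step roughly like this; the service connection name is an assumption, and the permissions and expiry are placeholders. A user delegation SAS via `--as-user` avoids touching the account keys at all, but requires a data-plane RBAC role on the storage account.

```yaml
steps:
  - task: AzureCLI@2
    displayName: Generate short-lived SAS for Terraform state
    env:
      STATE_ACCOUNT: $(kv-tf-state-blob-account)
      STATE_CONTAINER: $(kv-tf-state-blob-container)
    inputs:
      azureSubscription: my-arm-service-connection   # assumption: name of the ARM service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # User delegation SAS valid for ~2 hours, scoped to the state container only
        end=$(date -u -d '+2 hours' '+%Y-%m-%dT%H:%MZ')
        az storage container generate-sas \
          --account-name "$STATE_ACCOUNT" \
          --name "$STATE_CONTAINER" \
          --permissions racwdl \
          --expiry "$end" \
          --auth-mode login --as-user \
          --output tsv
```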
### Pipeline Secrets and Azure Key Vault Integration
- Secrets are stored in Azure Key Vault and fetched automatically from Key Vault at run time.
- Because variables are encrypted in Azure DevOps, secret variables are not automatically decrypted and must be explicitly mapped into the environment, at every step.
#### Example
| Variable in YAML | Description | Naming Convention |
|---|---|---|
| `kv-tf-state-blob-account` | Name of the secret in Key Vault. The `$(…)` macro syntax means it is fetched and processed into the template before the pipeline runs. | Custom `kv-` prefix for easier debugging |
| `$TF_STATE_BLOB_ACCOUNT_NAME` | Mapped environment variable | Linux convention: uppercase with underscores |
```yaml
variables:
  - group: e2e-gov-demo-kv

steps:
  - bash: |
      terraform init \
        -backend-config="storage_account_name=$TF_STATE_BLOB_ACCOUNT_NAME" \
        -backend-config="container_name=$TF_STATE_BLOB_CONTAINER_NAME" \
        -backend-config="key=$TF_STATE_BLOB_FILE" \
        -backend-config="sas_token=$TF_STATE_BLOB_SAS_TOKEN"
    displayName: Terraform Init
    env:
      TF_STATE_BLOB_ACCOUNT_NAME: $(kv-tf-state-blob-account)
      TF_STATE_BLOB_CONTAINER_NAME: $(kv-tf-state-blob-container)
      TF_STATE_BLOB_FILE: $(kv-tf-state-blob-file)
      TF_STATE_BLOB_SAS_TOKEN: $(kv-tf-state-sas-token)
```
### Why is there an Azure AD "Superadmins" Group?
In this example "superadmins" refers to privileged accounts at the organization level, e.g. central IT infrastructure admins.
This project creates Azure Key Vaults, which require access policies for the data plane. This means that after creating a Key Vault, you also need to grant yourself access to it. In order to interact with a Key Vault's data plane, we need to assign ourselves the proper roles, e.g. Key Vault Secrets User, to gain access.