README fixes and clarifications

Noel Bundick 2019-05-23 09:27:05 -07:00
Parent 98b5f764e2
Commit 6e1f7cc012
1 changed file with 53 additions and 45 deletions


@@ -10,12 +10,12 @@ Some modern enterprise routers have the ability to stream telemetry in real-time
The sample utilizes widely-used OSS tools for deployments. Ansible is used to configure virtual machines. Packer is used to automate creation and capture of VM images. Terraform is used for deployment of Azure resources.
Required tools w/ validated versions:
* [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli) `2.0.61`
* [Ansible](https://www.ansible.com/) `2.8.0`
* [Packer](https://www.packer.io/) `1.4.1`
* [Terraform](https://www.terraform.io/) `0.11.13`
All deployment commands should be run in a Bash terminal
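To sanity-check your environment against the versions above, you can print each tool's version:

```shell
# Print installed tool versions
az --version | head -n 1
ansible --version | head -n 1
packer version
terraform version
```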
@@ -30,8 +30,6 @@ The sample makes several assumptions regarding your existing on-premises and Azu
* Storage account for capturing diagnostic logs
* Azure Active Directory application to enable AAD sign-on with Grafana
> As a reference, we've provided some sample infrastructure components for development/reference under `terraform/infra`. You can find more details in the [Development](#Development) section below
## Authenticating
The sample has been tested with a Service Principal that has `Contributor` on the target subscription. Secrets are consumed as environment variables.
@@ -47,11 +45,38 @@ export ARM_TENANT_ID=<tenant>
export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
```
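If you don't already have a Service Principal, one way to create one and capture its credentials is sketched below. This is a sketch, not the sample's own setup script; it assumes you want `Contributor` over the whole current subscription:

```shell
# Create a Service Principal with Contributor over the current subscription
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
az ad sp create-for-rbac \
  --role Contributor \
  --scopes "/subscriptions/$SUBSCRIPTION_ID" \
  --query '{client_id: appId, client_secret: password, tenant: tenant}' \
  -o json
```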
## Development infrastructure
If you don't have an existing environment, or if you want a sandbox for contributing to the sample itself, you can use the following steps to create one.
```shell
# Specify/create the storage account used for the Terraform backend
TF_BACKEND_RG=terraform-backend
TF_BACKEND_STORAGE=tfbackend
az group create -n $TF_BACKEND_RG -l westus2
az storage account create -g $TF_BACKEND_RG -n $TF_BACKEND_STORAGE --sku Standard_LRS
az storage container create -n terraform --account-name $TF_BACKEND_STORAGE
# Deploy the development infrastructure
cd terraform/infra
terraform init \
--backend-config="storage_account_name=$TF_BACKEND_STORAGE" \
--backend-config="resource_group_name=$TF_BACKEND_RG" \
--backend-config="key=infra.terraform.tfstate"
terraform apply \
-var 'infra_resource_group_name=network-telemetry-infra' \
-var 'grafana_aad_client_secret=5554eb17-abf0-4c59-aac4-f4a7405ec53d'
```
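Once the apply completes, you can confirm what landed in the new resource group (assuming the `infra_resource_group_name` value shown above):

```shell
# List the development infrastructure resources
az resource list -g network-telemetry-infra -o table
```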
## Creating VM images
We use Packer with the Ansible provisioner to capture RHEL-based VM images. For some components, you may wish to pull a specific version or a custom build. To do so, change the `*_DOWNLOAD_URL` variables to point to your desired binary.
```shell
# Switch to the Packer directory so that relative paths correctly resolve
cd packer
# Set environment variables for Packer
export PACKER_IMAGE_RESOURCE_GROUP=vm-images
export PACKER_IMAGE_LOCATION=westus2
@@ -61,13 +86,13 @@ az group create -n $PACKER_IMAGE_RESOURCE_GROUP -l $PACKER_IMAGE_LOCATION
# Build the pipeline VM image
export PACKER_PIPELINE_DOWNLOAD_URL='https://github.com/cisco-ie/pipeline-gnmi/raw/master/bin/pipeline'
packer build pipeline.json
# Build the visualization VM image
export PACKER_PIPELINE_DOWNLOAD_URL='https://github.com/noelbundick/pipeline-gnmi/releases/download/custom-build-1/pipeline'
export PACKER_INFLUX_DOWNLOAD_URL='https://dl.influxdata.com/influxdb/releases/influxdb-1.7.6.x86_64.rpm'
export PACKER_GRAFANA_DOWNLOAD_URL='https://dl.grafana.com/oss/release/grafana-6.1.6-1.x86_64.rpm'
packer build visualization.json
```
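When the builds complete, Packer publishes managed images into the image resource group; you can verify with:

```shell
# List the captured VM images
az image list -g $PACKER_IMAGE_RESOURCE_GROUP -o table
```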
## Deploying resources via Terraform
@@ -84,6 +109,7 @@ TF_BACKEND_RG=terraform-backend
TF_BACKEND_STORAGE=tfbackend
az group create -n $TF_BACKEND_RG -l westus2
az storage account create -g $TF_BACKEND_RG -n $TF_BACKEND_STORAGE --sku Standard_LRS
az storage container create -n terraform --account-name $TF_BACKEND_STORAGE
cd terraform/azure
terraform init \
@@ -94,11 +120,13 @@ terraform init \
terraform apply
```
> Note: all components are deployed inside a VNET and are inaccessible to the outside world. If you want to access your resources from the Internet, you'll need to make some changes. [Public access to VMs](#Public-access-to-VMs) has additional details.
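To preview exactly what Terraform will create before committing to it, the standard plan step works here as well:

```shell
# Review the execution plan without changing anything
terraform plan
```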
## Grafana configuration
You'll need to perform a couple of quick steps to configure Grafana.
First, visit your visualization VM's IP address in a web browser, log in with `admin`/`admin`, and change the password to something more secure.
Next, add a data source with the following settings:
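If you'd rather script this step than click through the UI, Grafana's HTTP API can create data sources. The snippet below is a sketch only: the port, credentials, InfluxDB URL, and database name (`telemetry`) are assumptions to replace with the values from your deployment:

```shell
# Create an InfluxDB data source via Grafana's HTTP API (assumed values)
curl -u admin:<your_new_password> \
  -H 'Content-Type: application/json' \
  -X POST "http://<viz_vm_ip>:3000/api/datasources" \
  -d '{"name":"InfluxDB","type":"influxdb","access":"proxy","url":"http://localhost:8086","database":"telemetry"}'
```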
@@ -140,48 +168,28 @@ Most systems aggregate and ship logs to a central system. While configuring your
* InfluxDB: via the `systemd` journal - `journalctl -u influxdb.service`
* Grafana: `/var/log/grafana/grafana.log`
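To follow both live on the visualization VM (locations as listed above):

```shell
# Tail InfluxDB logs from the systemd journal
journalctl -u influxdb.service -f

# Tail the Grafana log file
tail -f /var/log/grafana/grafana.log
```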
# Development
The sample assumes you'll have your own network configuration and will deploy into your existing VNETs/subnets. To help with dev/test, we've provided a Terraform configuration that deploys everything needed to get up and running quickly.
To use it, follow the [Deployment](#Deployment) instructions up to the Terraform deployment, and deploy the development infra before running `terraform apply`.
# Troubleshooting
## Public access to VMs
For development or troubleshooting, it can be useful to expose VMs to the Internet so you can stream telemetry, SSH in to investigate issues, and so on. Use the following steps to add a public IP and your SSH key to the VM.
```shell
RESOURCE_GROUP=network-telemetry-pipeline
VM_NAME=viz-c7dd9e3cfc44

# Add a public IP to a VM
az network public-ip create -g $RESOURCE_GROUP --name publicip1 --allocation-method Static
NIC_ID=$(az vm show -n $VM_NAME -g $RESOURCE_GROUP --query 'networkProfile.networkInterfaces[0].id' -o tsv)
az network nic ip-config create -g $RESOURCE_GROUP --nic-name "${NIC_ID##*/}" --name public --public-ip-address publicip1

# Add your SSH key to the VM
az vm user update \
--resource-group $RESOURCE_GROUP \
--name $VM_NAME \
--username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
```
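With the public IP and key in place, you should be able to connect directly (assuming the `azureuser` account used above):

```shell
# Resolve the public IP and SSH into the VM
ssh azureuser@$(az network public-ip show -g $RESOURCE_GROUP -n publicip1 --query ipAddress -o tsv)
```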