This commit is contained in:
Anthony Howe 2020-02-29 17:26:34 -05:00
Parent b3df4c6dbe
Commit 43aca63b61
21 changed files with 671 additions and 12 deletions

Binary data (new files; binary content not shown)
docs/images/terraform/1filer-hpcc.png (6.0 KiB)
docs/images/terraform/3filers-hpcc.png (19 KiB)
docs/images/terraform/nofiler-hpcc.png (13 KiB)

View file

@@ -8,12 +8,12 @@ The examples show how to deploy HPC Cache and Avere vFXT from minimal configurat
1. [HPC Cache](examples/HPC%20Cache)
1. [no-filer example](examples/HPC%20Cache/no-filers)
2. [Avere vFXT against 1 IaaS NAS filer example](examples/HPC%20Cache/1-filer)
3. [Avere vFXT against 3 IaaS NAS filers example](examples/HPC%20Cache/3-filers)
2. [HPC Cache mounting 1 IaaS NAS filer example](examples/HPC%20Cache/1-filer)
3. [HPC Cache mounting 3 IaaS NAS filers example](examples/HPC%20Cache/3-filers)
2. [Avere vFXT](examples/vfxt)
1. [no-filer example](examples/vfxt/no-filers)
2. [Avere vFXT against 1 IaaS NAS filer example](examples/vfxt/1-filer)
3. [Avere vFXT against 3 IaaS NAS filers example](examples/vfxt/3-filers)
2. [Avere vFXT mounting 1 IaaS NAS filer example](examples/vfxt/1-filer)
3. [Avere vFXT mounting 3 IaaS NAS filers example](examples/vfxt/3-filers)
4. [Avere vFXT optimized for Houdini](examples/vfxt/HoudiniOptimized)
3. [NFS Filers](examples/nfsfilers)
1. [L32sv1](examples/nfsfilers/L32sv1)

View file

@@ -0,0 +1,42 @@
# HPC Cache Deployment with 1 NFS Filer
This example shows how to deploy an HPC Cache mounting 1 NFS filer.
This example currently uses `azurerm_template_deployment` to deploy an ARM template; this will be replaced soon by a native azurerm module.
The example configures a render network and an HPC Cache mounting 1 NFS filer, as shown in the diagram below:
![The architecture](../../../../../docs/images/terraform/1filer-hpcc.png)
## Deployment Instructions
To run the example, execute the following instructions. This assumes use of Azure Cloud Shell. If you are installing into your own environment, you will need to follow the [instructions to set up Terraform for the Azure environment](https://docs.microsoft.com/en-us/azure/terraform/terraform-install-configure).
1. Browse to https://shell.azure.com.
2. Specify your subscription by running this command with your subscription ID: `az account set --subscription YOUR_SUBSCRIPTION_ID`. You will need to run this every time after restarting your shell; otherwise it may default to the wrong subscription, and you will see an error similar to `azurerm_public_ip.vm is empty tuple`.
3. Double-check your [HPC Cache prerequisites](https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-prereqs).
4. Get the Terraform examples:
```bash
mkdir tf
cd tf
git init
git remote add origin -f https://github.com/Azure/Avere.git
git config core.sparsecheckout true
echo "src/terraform/*" >> .git/info/sparse-checkout
git pull origin master
```
5. `cd src/terraform/examples/HPC\ Cache/1-filer`
6. Edit the local variables section at the top of `main.tf` to customize the deployment to your preferences.
7. Execute `terraform init` in the directory containing `main.tf`.
8. Execute `terraform apply -auto-approve` to build the HPC Cache cluster.
Once installed, you can mount the HPC Cache cluster by following the [mount documentation](https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-mount); an example mount command is sketched below.
When you are done with the cluster, destroy it by running `terraform destroy -auto-approve`, or simply delete the three resource groups that were created.
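As a minimal sketch of mounting the cache from a Linux client (the address `10.0.1.11` and the junction `/nfs1data` below are placeholders; use an address from the `mount_addresses` output and the namespace path configured in `main.tf`):
```bash
# create a mount point and mount one cache address over NFSv3
# 10.0.1.11 and /nfs1data are example values only
sudo mkdir -p /mnt/hpccache
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30 10.0.1.11:/nfs1data /mnt/hpccache
```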

View file

@@ -0,0 +1,150 @@
// customize the HPC Cache by adjusting the following local variables
locals {
// the region of the deployment
location = "eastus"
// network details
network_resource_group_name = "network_resource_group"
// HPC Cache details
hpc_cache_resource_group_name = "hpc_cache_resource_group"
// HPC Cache Throughput SKU - 3 allowed values for throughput (GB/s) of the cache
// Standard_2G
// Standard_4G
// Standard_8G
cache_throughput = "Standard_2G"
// HPC Cache Size - 5 allowed sizes (GBs) for the cache
// 3072
// 6144
// 12288
// 24576
// 49152
cache_size = 12288
// unique name for cache
cache_name = "uniquename"
// usage model
// WRITE_AROUND
// READ_HEAVY_INFREQ
// WRITE_WORKLOAD_15
usage_model = "READ_HEAVY_INFREQ"
// filer related variables
vm_admin_username = "azureuser"
// use either SSH Key data or admin password, if ssh_key_data is specified
// then admin_password is ignored
vm_admin_password = "PASSWORD"
// if you use SSH key, ensure you have ~/.ssh/id_rsa with permission 600
// populated where you are running terraform
vm_ssh_key_data = null //"ssh-rsa AAAAB3...."
// filer details
filer_resource_group_name = "filer_resource_group"
}
provider "azurerm" {
version = "~>2.0.0"
features {}
}
// the render network
module "network" {
source = "../../../modules/render_network"
resource_group_name = local.network_resource_group_name
location = local.location
}
resource "azurerm_resource_group" "hpc_cache_rg" {
name = local.hpc_cache_resource_group_name
location = local.location
// the depends on is necessary for destroy. Due to the
// limitation of the template deployment, the only
// way to destroy template resources is to destroy
// the resource group
depends_on = [module.network]
}
data "azurerm_subnet" "vnet" {
name = module.network.cloud_cache_subnet_name
virtual_network_name = module.network.vnet_name
resource_group_name = local.network_resource_group_name
}
// load the HPC Cache Template, with the necessary variables
locals {
arm_template = templatefile("${path.module}/../hpc_cache.json",
{
uniquename = local.cache_name,
location = local.location,
hpccsku = local.cache_throughput,
subnetid = data.azurerm_subnet.vnet.id,
hpccachesize = local.cache_size
})
}
// HPC cache is currently deployed using azurerm_template_deployment as described in
// https://www.terraform.io/docs/providers/azurerm/r/template_deployment.html.
// The only way to destroy a template deployment is to destroy the associated
// RG, so keep each template unique to its RG.
resource "azurerm_template_deployment" "storage_cache" {
name = "hpc_cache"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.arm_template
}
resource "azurerm_resource_group" "nfsfiler" {
name = local.filer_resource_group_name
location = local.location
}
// the ephemeral filer
module "nasfiler1" {
source = "../../../modules/nfs_filer"
resource_group_name = azurerm_resource_group.nfsfiler.name
location = azurerm_resource_group.nfsfiler.location
admin_username = local.vm_admin_username
admin_password = local.vm_admin_password
ssh_key_data = local.vm_ssh_key_data
vm_size = "Standard_D2s_v3"
unique_name = "nasfiler1"
// network details
virtual_network_resource_group = local.network_resource_group_name
virtual_network_name = module.network.vnet_name
virtual_network_subnet_name = module.network.cloud_filers_subnet_name
}
// load the Storage Target Template, with the necessary variables
locals {
storage_target_1_template = templatefile("${path.module}/../storage_target.json",
{
uniquename = local.cache_name,
uniquestoragetargetname = "storage_target_1"
location = local.location,
nfsaddress = module.nasfiler1.primary_ip,
usagemodel = local.usage_model,
namespacepath_j1 = "/nfs1data",
nfsexport_j1 = module.nasfiler1.core_filer_export
targetpath_j1 = ""
})
}
resource "azurerm_template_deployment" "storage_target1" {
name = "storage_target_1"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.storage_target_1_template
depends_on = [
azurerm_template_deployment.storage_cache, // add after cache created
module.nasfiler1
]
}
output "mount_addresses" {
value = azurerm_template_deployment.storage_cache.outputs["mountAddresses"]
}
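If you want to inspect the `mountAddresses` value without going through Terraform, a hedged sketch using the Azure CLI (the resource group and deployment names assume the defaults above) is:
```bash
# read the mountAddresses output of the hpc_cache template deployment
az group deployment show \
  --resource-group hpc_cache_resource_group \
  --name hpc_cache \
  --query properties.outputs.mountAddresses.value \
  --output tsv
```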

View file

@@ -0,0 +1,42 @@
# HPC Cache Deployment with 3 NFS Filers
This example shows how to deploy an HPC Cache mounting 3 NFS filers.
This example currently uses `azurerm_template_deployment` to deploy an ARM template; this will be replaced soon by a native azurerm module.
The example configures a render network and an HPC Cache mounting 3 NFS filers, as shown in the diagram below:
![The architecture](../../../../../docs/images/terraform/3filers-hpcc.png)
## Deployment Instructions
To run the example, execute the following instructions. This assumes use of Azure Cloud Shell. If you are installing into your own environment, you will need to follow the [instructions to set up Terraform for the Azure environment](https://docs.microsoft.com/en-us/azure/terraform/terraform-install-configure).
1. Browse to https://shell.azure.com.
2. Specify your subscription by running this command with your subscription ID: `az account set --subscription YOUR_SUBSCRIPTION_ID`. You will need to run this every time after restarting your shell; otherwise it may default to the wrong subscription, and you will see an error similar to `azurerm_public_ip.vm is empty tuple`.
3. Double-check your [HPC Cache prerequisites](https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-prereqs).
4. Get the Terraform examples:
```bash
mkdir tf
cd tf
git init
git remote add origin -f https://github.com/Azure/Avere.git
git config core.sparsecheckout true
echo "src/terraform/*" >> .git/info/sparse-checkout
git pull origin master
```
5. `cd src/terraform/examples/HPC\ Cache/3-filers`
6. Edit the local variables section at the top of `main.tf` to customize the deployment to your preferences.
7. Execute `terraform init` in the directory containing `main.tf`.
8. Execute `terraform apply -auto-approve` to build the HPC Cache cluster.
Once installed, you can mount the HPC Cache cluster by following the [mount documentation](https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-mount).
When you are done with the cluster, destroy it by running `terraform destroy -auto-approve`, or simply delete the three resource groups that were created.
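After the apply completes, the cache mount addresses captured by the `mount_addresses` output can be re-read at any time (standard Terraform CLI usage, shown here as a quick sketch):
```bash
# run from the 3-filers example directory after terraform apply
terraform output mount_addresses
```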

View file

@@ -0,0 +1,238 @@
// customize the HPC Cache by adjusting the following local variables
locals {
// the region of the deployment
location = "eastus"
// network details
network_resource_group_name = "network_resource_group"
// HPC Cache details
hpc_cache_resource_group_name = "hpc_cache_resource_group"
// HPC Cache Throughput SKU - 3 allowed values for throughput (GB/s) of the cache
// Standard_2G
// Standard_4G
// Standard_8G
cache_throughput = "Standard_2G"
// HPC Cache Size - 5 allowed sizes (GBs) for the cache
// 3072
// 6144
// 12288
// 24576
// 49152
cache_size = 12288
// unique name for cache
cache_name = "uniquename"
// usage model
// WRITE_AROUND
// READ_HEAVY_INFREQ
// WRITE_WORKLOAD_15
usage_model = "READ_HEAVY_INFREQ"
// filer related variables
vm_admin_username = "azureuser"
// use either SSH Key data or admin password, if ssh_key_data is specified
// then admin_password is ignored
vm_admin_password = "PASSWORD"
// if you use SSH key, ensure you have ~/.ssh/id_rsa with permission 600
// populated where you are running terraform
vm_ssh_key_data = null //"ssh-rsa AAAAB3...."
// filer details
filer_resource_group_name = "filer_resource_group"
}
provider "azurerm" {
version = "~>2.0.0"
features {}
}
// the render network
module "network" {
source = "../../../modules/render_network"
resource_group_name = local.network_resource_group_name
location = local.location
}
resource "azurerm_resource_group" "hpc_cache_rg" {
name = local.hpc_cache_resource_group_name
location = local.location
// the depends on is necessary for destroy. Due to the
// limitation of the template deployment, the only
// way to destroy template resources is to destroy
// the resource group
depends_on = [module.network]
}
data "azurerm_subnet" "vnet" {
name = module.network.cloud_cache_subnet_name
virtual_network_name = module.network.vnet_name
resource_group_name = local.network_resource_group_name
}
// load the HPC Cache Template, with the necessary variables
locals {
arm_template = templatefile("${path.module}/../hpc_cache.json",
{
uniquename = local.cache_name,
location = local.location,
hpccsku = local.cache_throughput,
subnetid = data.azurerm_subnet.vnet.id,
hpccachesize = local.cache_size
})
}
// HPC cache is currently deployed using azurerm_template_deployment as described in
// https://www.terraform.io/docs/providers/azurerm/r/template_deployment.html.
// The only way to destroy a template deployment is to destroy the associated
// RG, so keep each template unique to its RG.
resource "azurerm_template_deployment" "storage_cache" {
name = "hpc_cache"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.arm_template
}
resource "azurerm_resource_group" "nfsfiler" {
name = local.filer_resource_group_name
location = local.location
}
// the ephemeral filer
module "nasfiler1" {
source = "../../../modules/nfs_filer"
resource_group_name = azurerm_resource_group.nfsfiler.name
location = azurerm_resource_group.nfsfiler.location
admin_username = local.vm_admin_username
admin_password = local.vm_admin_password
ssh_key_data = local.vm_ssh_key_data
vm_size = "Standard_D2s_v3"
unique_name = "nasfiler1"
// network details
virtual_network_resource_group = local.network_resource_group_name
virtual_network_name = module.network.vnet_name
virtual_network_subnet_name = module.network.cloud_filers_subnet_name
}
// load the Storage Target Template, with the necessary variables
locals {
storage_target_1_template = templatefile("${path.module}/../storage_target.json",
{
uniquename = local.cache_name,
uniquestoragetargetname = "storage_target_1"
location = local.location,
nfsaddress = module.nasfiler1.primary_ip,
usagemodel = local.usage_model,
namespacepath_j1 = "/nfs1data",
nfsexport_j1 = module.nasfiler1.core_filer_export
targetpath_j1 = ""
})
}
resource "azurerm_template_deployment" "storage_target1" {
name = "storage_target_1"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.storage_target_1_template
depends_on = [
azurerm_template_deployment.storage_cache, // add after cache created
module.nasfiler1
]
}
// the ephemeral filer
module "nasfiler2" {
source = "../../../modules/nfs_filer"
resource_group_name = azurerm_resource_group.nfsfiler.name
location = azurerm_resource_group.nfsfiler.location
admin_username = local.vm_admin_username
admin_password = local.vm_admin_password
ssh_key_data = local.vm_ssh_key_data
vm_size = "Standard_D2s_v3"
unique_name = "nasfiler2"
// network details
virtual_network_resource_group = local.network_resource_group_name
virtual_network_name = module.network.vnet_name
virtual_network_subnet_name = module.network.cloud_filers_subnet_name
}
// load the Storage Target Template, with the necessary variables
locals {
storage_target_2_template = templatefile("${path.module}/../storage_target.json",
{
uniquename = local.cache_name,
uniquestoragetargetname = "storage_target_2"
location = local.location,
nfsaddress = module.nasfiler2.primary_ip,
usagemodel = local.usage_model,
namespacepath_j1 = "/nfs2data",
nfsexport_j1 = module.nasfiler2.core_filer_export
targetpath_j1 = ""
})
}
resource "azurerm_template_deployment" "storage_target2" {
name = "storage_target_2"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.storage_target_2_template
depends_on = [
azurerm_template_deployment.storage_target1, // add after storage target1
module.nasfiler2
]
}
// the ephemeral filer
module "nasfiler3" {
source = "../../../modules/nfs_filer"
resource_group_name = azurerm_resource_group.nfsfiler.name
location = azurerm_resource_group.nfsfiler.location
admin_username = local.vm_admin_username
admin_password = local.vm_admin_password
ssh_key_data = local.vm_ssh_key_data
vm_size = "Standard_D2s_v3"
unique_name = "nasfiler3"
// network details
virtual_network_resource_group = local.network_resource_group_name
virtual_network_name = module.network.vnet_name
virtual_network_subnet_name = module.network.cloud_filers_subnet_name
}
// load the Storage Target Template, with the necessary variables
locals {
storage_target_3_template = templatefile("${path.module}/../storage_target.json",
{
uniquename = local.cache_name,
uniquestoragetargetname = "storage_target_3"
location = local.location,
nfsaddress = module.nasfiler3.primary_ip,
usagemodel = local.usage_model,
namespacepath_j1 = "/nfs3data",
nfsexport_j1 = module.nasfiler3.core_filer_export
targetpath_j1 = ""
})
}
resource "azurerm_template_deployment" "storage_target3" {
name = "storage_target_3"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.storage_target_3_template
depends_on = [
azurerm_template_deployment.storage_target2, // add after storage target2
module.nasfiler3
]
}
output "mount_addresses" {
value = azurerm_template_deployment.storage_cache.outputs["mountAddresses"]
}
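Because each filer is exposed under its own namespace path (`/nfs1data`, `/nfs2data`, `/nfs3data`), a client can spread load by mounting each junction from a different cache address. A minimal sketch, assuming placeholder addresses taken from the `mount_addresses` output:
```bash
# example values only: substitute addresses from the mount_addresses output
sudo mkdir -p /mnt/nfs1data /mnt/nfs2data /mnt/nfs3data
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30 10.0.1.11:/nfs1data /mnt/nfs1data
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30 10.0.1.12:/nfs2data /mnt/nfs2data
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30 10.0.1.13:/nfs3data /mnt/nfs3data
```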

View file

@@ -1 +1,7 @@
Please refer to the Artist Anywhere tutorials for how to use the HPC Cache filer: https://github.com/Azure/Avere/blob/master/src/tutorials/ArtistAnywhere/StorageCache/04-Cache.tf
# HPC Cache
The examples in this folder build various configurations of Azure HPC Cache with IaaS-based filers:
1. [no-filer example](no-filers/)
2. [HPC Cache mounting 1 IaaS NAS filer example](1-filer/)
3. [HPC Cache mounting 3 IaaS NAS filers example](3-filers/)

View file

@@ -0,0 +1,25 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"name": "${uniquename}",
"type": "Microsoft.StorageCache/caches",
"apiVersion": "2019-11-01",
"location": "${location}",
"sku": {
"name": "${hpccsku}"
},
"properties": {
"subnet": "${subnetid}",
"cacheSizeGB": "${hpccachesize}"
}
}
],
"outputs": {
"mountAddresses": {
"type": "string",
"value": "[string(reference(resourceId('Microsoft.StorageCache/caches', '${uniquename}'), '2019-11-01').mountAddresses)]"
}
}
}
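To confirm the cache resource that this template creates, a hedged sketch with the Azure CLI (the resource group and cache name assume the defaults in the example `main.tf` files) is:
```bash
# show the mount addresses of the deployed cache resource
az resource show \
  --resource-group hpc_cache_resource_group \
  --resource-type Microsoft.StorageCache/caches \
  --name uniquename \
  --query properties.mountAddresses
```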

View file

@@ -0,0 +1,42 @@
# HPC Cache Deployment with no Filers
This example shows how to deploy an HPC Cache by itself.
This example currently uses `azurerm_template_deployment` to deploy an ARM template; this will be replaced soon by a native azurerm module.
The example configures a render network and an HPC Cache without any filers, as shown in the diagram below:
![The architecture](../../../../../docs/images/terraform/nofiler-hpcc.png)
## Deployment Instructions
To run the example, execute the following instructions. This assumes use of Azure Cloud Shell. If you are installing into your own environment, you will need to follow the [instructions to set up Terraform for the Azure environment](https://docs.microsoft.com/en-us/azure/terraform/terraform-install-configure).
1. Browse to https://shell.azure.com.
2. Specify your subscription by running this command with your subscription ID: `az account set --subscription YOUR_SUBSCRIPTION_ID`. You will need to run this every time after restarting your shell; otherwise it may default to the wrong subscription, and you will see an error similar to `azurerm_public_ip.vm is empty tuple`.
3. Double-check your [HPC Cache prerequisites](https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-prereqs).
4. Get the Terraform examples:
```bash
mkdir tf
cd tf
git init
git remote add origin -f https://github.com/Azure/Avere.git
git config core.sparsecheckout true
echo "src/terraform/*" >> .git/info/sparse-checkout
git pull origin master
```
5. `cd src/terraform/examples/HPC\ Cache/no-filers`
6. Edit the local variables section at the top of `main.tf` to customize the deployment to your preferences.
7. Execute `terraform init` in the directory containing `main.tf`.
8. Execute `terraform apply -auto-approve` to build the HPC Cache cluster.
Once installed, you can mount the HPC Cache cluster by following the [mount documentation](https://docs.microsoft.com/en-us/azure/hpc-cache/hpc-cache-mount).
When you are done with the cluster, destroy it by running `terraform destroy -auto-approve`, or simply delete the resource groups that were created; see the sketch after these instructions.
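If you prefer to delete the resource groups directly instead of running `terraform destroy`, a sketch with the Azure CLI (the group names assume the defaults in `main.tf`) is:
```bash
# delete the resource groups created by this example
az group delete --name hpc_cache_resource_group --yes --no-wait
az group delete --name network_resource_group --yes --no-wait
```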

View file

@@ -0,0 +1,83 @@
// customize the HPC Cache by adjusting the following local variables
locals {
// the region of the deployment
location = "eastus"
// network details
network_resource_group_name = "network_resource_group"
// HPC Cache details
hpc_cache_resource_group_name = "hpc_cache_resource_group"
// HPC Cache Throughput SKU - 3 allowed values for throughput (GB/s) of the cache
// Standard_2G
// Standard_4G
// Standard_8G
cache_throughput = "Standard_2G"
// HPC Cache Size - 5 allowed sizes (GBs) for the cache
// 3072
// 6144
// 12288
// 24576
// 49152
cache_size = 12288
// unique name for cache
cache_name = "uniquename"
}
provider "azurerm" {
version = "~>2.0.0"
features {}
}
// the render network
module "network" {
source = "../../../modules/render_network"
resource_group_name = local.network_resource_group_name
location = local.location
}
resource "azurerm_resource_group" "hpc_cache_rg" {
name = local.hpc_cache_resource_group_name
location = local.location
// the depends on is necessary for destroy. Due to the
// limitation of the template deployment, the only
// way to destroy template resources is to destroy
// the resource group
depends_on = [module.network]
}
data "azurerm_subnet" "vnet" {
name = module.network.cloud_cache_subnet_name
virtual_network_name = module.network.vnet_name
resource_group_name = local.network_resource_group_name
}
// load the HPC Cache Template, with the necessary variables
locals {
arm_template = templatefile("${path.module}/../hpc_cache.json",
{
uniquename = local.cache_name,
location = local.location,
hpccsku = local.cache_throughput,
subnetid = data.azurerm_subnet.vnet.id,
hpccachesize = local.cache_size
})
}
// HPC cache is currently deployed using azurerm_template_deployment as described in
// https://www.terraform.io/docs/providers/azurerm/r/template_deployment.html.
// The only way to destroy a template deployment is to destroy the associated
// RG, so keep each template unique to its RG.
resource "azurerm_template_deployment" "storage_cache" {
name = "hpc_cache"
resource_group_name = azurerm_resource_group.hpc_cache_rg.name
deployment_mode = "Incremental"
template_body = local.arm_template
}
output "mount_addresses" {
value = azurerm_template_deployment.storage_cache.outputs["mountAddresses"]
}

View file

@@ -0,0 +1,26 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"name": "${uniquename}/${uniquestoragetargetname}",
"type": "Microsoft.StorageCache/caches/storageTargets",
"apiVersion": "2019-11-01",
"location": "${location}",
"properties": {
"targetType": "nfs3",
"nfs3": {
"target": "${nfsaddress}",
"usageModel": "${usagemodel}"
},
"junctions": [
{
"namespacePath": "${namespacepath_j1}",
"nfsExport": "${nfsexport_j1}",
"targetPath": "${targetpath_j1}"
}
]
}
}
]
}
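To verify the storage targets that the template deployments attach to the cache, a hedged sketch with the Azure CLI (the resource group name assumes the defaults in the example `main.tf` files) is:
```bash
# list the storage targets attached to the HPC Cache
az resource list \
  --resource-group hpc_cache_resource_group \
  --resource-type Microsoft.StorageCache/caches/storageTargets \
  --output table
```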

View file

@@ -3,6 +3,6 @@
The examples in this folder build various configurations of the Avere vFXT with IaaS based filers:
1. [Avere vFXT for Azure](no-filers/)
2. [Avere vFXT for Azure and wire up against 1 IaaS NAS filer](1-filer/)
3. [Avere vFXT for Azure and wire up against 3 IaaS NAS filers](3-filers/)
2. [Avere vFXT for Azure mounting 1 IaaS NAS filer](1-filer/)
3. [Avere vFXT for Azure mounting 3 IaaS NAS filers](3-filers/)
4. [Avere vFXT optimized for Houdini](HoudiniOptimized/)

View file

View file

@@ -3,5 +3,5 @@
This module deploys a controller for the Avere vFXT for Azure. Examples of module usage can be found in any of the following examples:
1. [Install Avere vFXT for Azure](../../examples/vfxt/no-filers)
2. [Install Avere vFXT for Azure and wire up against 1 IaaS NAS filer](../../examples/vfxt/1-filer)
3. [Install Avere vFXT for Azure and wire up against 3 IaaS NAS filers](../../examples/vfxt/3-filers)
2. [Install Avere vFXT for Azure mounting 1 IaaS NAS filer](../../examples/vfxt/1-filer)
3. [Install Avere vFXT for Azure mounting 3 IaaS NAS filers](../../examples/vfxt/3-filers)

View file

@@ -38,13 +38,16 @@ function install_golang() {
mkdir -p $AZURE_HOME_DIR/gopath
echo "export GOPATH=$AZURE_HOME_DIR/gopath" >> $AZURE_HOME_DIR/.bashrc
echo "export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin" >> $AZURE_HOME_DIR/.bashrc
source $AZURE_HOME_DIR/.bashrc
rm $GO_DL_FILE
}
function pull_avere_github() {
# best effort to build the github content
set +e
# setup the environment
source $AZURE_HOME_DIR/.bashrc
OLD_HOME=$HOME
export HOME=$AZURE_HOME_DIR
# checkout Checkpoint simulator code, all dependencies and build the binaries
cd $GOPATH
go get -v github.com/Azure/Avere/src/terraform/providers/terraform-provider-avere
@@ -52,8 +55,10 @@ function pull_avere_github() {
go build
mkdir -p $AZURE_HOME_DIR/.terraform.d/plugins
cp terraform-provider-avere $AZURE_HOME_DIR/.terraform.d/plugins
export HOME=$OLD_HOME
# re-enable exit on error
set -e
}
function install_az_cli() {

View file

@@ -13,8 +13,8 @@ The provider has the following features:
This provider requires a controller to be installed that is used to create and manage the Avere vFXT. The following examples provide details on how to use terraform to deploy the controller:
1. [Install Avere vFXT for Azure](../../examples/vfxt/no-filers)
2. [Install Avere vFXT for Azure and wire up against 1 IaaS NAS filer](../../examples/vfxt/1-filer)
3. [Install Avere vFXT for Azure and wire up against 3 IaaS NAS filers](../../examples/vfxt/3-filers)
2. [Install Avere vFXT for Azure mounting 1 IaaS NAS filer](../../examples/vfxt/1-filer)
3. [Install Avere vFXT for Azure mounting 3 IaaS NAS filers](../../examples/vfxt/3-filers)
## Build the Terraform Provider binary