Azure HPC Cache and Avere Virtual FXT (vFXT) Edge filers let you run compute workloads in Azure Compute without first migrating your NAS. These Avere solutions accomplish this through read-ahead caching, which hides the latency of reaching on-premises storage from cloud compute.
A key question is how much difference an HPC Cache or Avere vFXT makes when fronting a local NFS filer versus remote NFS filers at varying distances and latencies. The following video demonstrates the difference an HPC Cache or Avere vFXT can make at high latency:
For this study, we use a rendering configuration similar to the one described in [Rendering using Azure Batch and Avere vFXT](maya_azure_batch_avere_vfxt_demo.md). For each mounted NFS path, we measure the time it takes to render 500 frames across 500 nodes in parallel. Each render includes reading the scene from the NFS path and writing the rendered frame back to the same NFS path on each of the 500 nodes.
The Azure Compute and Avere vFXT resources reside in East US. The NFS core filers are installed in each of East US, West US, Southeast Asia, and West India. [Virtual Network Peering](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview) is used to connect all the resources together. The architecture is shown in the following diagram:
To set up the above architecture, follow these steps:
1. Deploy an Avere vFXT in East US as described in [Deploy](jumpstart_deploy.md).
2. To have enough IPs for the 500 Batch nodes, you will need to add a subnet to the VNET created in step 1. In the Azure Portal, browse to the VNET and add a new subnet ```10.0.4.0/22```, which provides an additional 1,024 addresses.
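If you prefer the Azure CLI to the Portal, the subnet can be added along the following lines. This is a sketch: the resource-group, VNET, and subnet names are assumptions, so substitute the ones from your deployment.

```shell
# A /22 prefix leaves 32 - 22 = 10 host bits, i.e. 1024 addresses.
PREFIX="10.0.4.0/22"
ADDRESSES=$(( 1 << (32 - 22) ))
echo "capacity of ${PREFIX}: ${ADDRESSES} addresses"

# Add the subnet; skipped automatically when the Azure CLI is absent or
# not logged in. "avere-vfxt-rg" and "avere-vfxt-vnet" are placeholders.
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az network vnet subnet create \
    --resource-group avere-vfxt-rg \
    --vnet-name avere-vfxt-vnet \
    --name batch-subnet \
    --address-prefixes "$PREFIX"
fi
```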
3. Deploy an NFS NAS core filer in a separate VNET in each region using the [NFS Deployment templates](../src/tutorials/nfslatency/azuredisknfs), with the following VNET and subnet configurations:
| Name | Region | VNET Address Space | Subnet |
| --- | --- | --- | --- |
| NFS Southeast Asia | Southeast Asia | 192.168.255.32/28 | 192.168.255.32/28 |
| NFS West India | West India | 192.168.255.48/28 | 192.168.255.48/28 |
4. On each NFS server, install your Maya render scene in the directories described in [the prepare content instructions](maya_azure_batch_avere_vfxt_demo.md#prepare-content-and-infrastructure). For this experiment we used an 8 GB render scene with multiple textures.
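Staging the scene on each filer can be scripted; the sketch below is illustrative only, as the host names, scene path, and export path are all placeholder assumptions. It performs a dry run by default (```RUN=echo```); clear ```RUN``` to execute the copies for real.

```shell
# Stage the ~8 GB render scene on each regional NFS server.
# RUN=echo (the default) performs a dry run that only prints the commands.
RUN="${RUN:-echo}"
SCENE_DIR="${SCENE_DIR:-/data/demoscene}"   # placeholder scene path
for HOST in nfs-eastus nfs-westus nfs-westindia nfs-southeastasia; do
  # rsync preserves the scene's directory layout and resumes partial copies
  $RUN rsync -a "${SCENE_DIR}/" "${HOST}:/datadisks/disk1/demoscene/"
done
```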
5. Set up two-way VNET peering from the Avere vFXT VNET to each of the NAS VNETs, as shown in the [peering templates and parameter templates](../src/tutorials/nfslatency/vnetpeering).
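The repository's templates drive the peering, but the same two-way relationships can be sketched with the Azure CLI. All resource-group and VNET names below are assumptions, not values from the templates.

```shell
# Create two-way global VNET peering between the vFXT (hub) VNET and each
# regional NAS VNET. Skipped when the Azure CLI is absent or not logged in.
HUB_RG="avere-vfxt-rg"
HUB_VNET="avere-vfxt-vnet"
for REGION in eastus westus westindia southeastasia; do
  NAS_RG="nfs-${REGION}-rg"
  NAS_VNET="nfs-${REGION}-vnet"
  if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
    SUB="$(az account show --query id -o tsv)"
    NAS_ID="/subscriptions/${SUB}/resourceGroups/${NAS_RG}/providers/Microsoft.Network/virtualNetworks/${NAS_VNET}"
    HUB_ID="/subscriptions/${SUB}/resourceGroups/${HUB_RG}/providers/Microsoft.Network/virtualNetworks/${HUB_VNET}"
    # vFXT VNET -> NAS VNET
    az network vnet peering create --resource-group "$HUB_RG" \
      --vnet-name "$HUB_VNET" --name "to-${REGION}" \
      --remote-vnet "$NAS_ID" --allow-vnet-access
    # NAS VNET -> vFXT VNET (the reverse direction)
    az network vnet peering create --resource-group "$NAS_RG" \
      --vnet-name "$NAS_VNET" --name "to-vfxt" \
      --remote-vnet "$HUB_ID" --allow-vnet-access
  fi
done
```

Peering must be created from both sides before traffic flows, which is why the loop issues the two mirrored `create` calls per region.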
6. [Access the Avere vFXT management UI](access_cluster.md), add each [NFS NAS as a core filer](configure_storage.md#nas-core-filer), and then for each regional NFS NAS add the following [named namespaces (junctions)](configure_storage.md#create-a-junction):
| Region | namespace |
| --- | --- |
| eastus | eastus |
| westus | westus |
| westindia | westindia |
| southeastasia | southeastasia |
7. Your Azure Batch nodes will need to mount all of the NFS exports created above. To do this, choose one machine (the eastus NFS server is recommended) as the bootstrap machine, add the bootstrap file to the ```/bootstrap``` folder of its NFS export, and then SSH to the machine and edit the file with the IP addresses of each NFS server.
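The actual bootstrap file is not reproduced here; the following is a hypothetical sketch of what such a mount script might look like. The NFS_* addresses and the export path are placeholders to edit, and ```RUN=echo``` (the default) makes it a dry run that only prints the commands.

```shell
#!/usr/bin/env bash
# Hypothetical bootstrap mount script -- all addresses and the export path
# are placeholders, not values from this tutorial. RUN=echo (the default)
# prints the commands; set RUN="" and run as root to mount for real.
RUN="${RUN:-echo}"
NFS_EASTUS="${NFS_EASTUS:-10.0.0.4}"
NFS_WESTUS="${NFS_WESTUS:-192.168.255.4}"
NFS_WESTINDIA="${NFS_WESTINDIA:-192.168.255.52}"
NFS_SOUTHEASTASIA="${NFS_SOUTHEASTASIA:-192.168.255.36}"
EXPORT="${EXPORT:-/datadisks/disk1}"   # placeholder export path

mount_nfs() {  # $1 = server address, $2 = local mount point
  $RUN mkdir -p "$2"
  # hard + tcp mounts with retries survive transient network hiccups
  $RUN mount -t nfs -o hard,proto=tcp,mountproto=tcp,retry=30 "$1:$EXPORT" "$2"
}

mount_nfs "$NFS_EASTUS"        /nfs/eastus
mount_nfs "$NFS_WESTUS"        /nfs/westus
mount_nfs "$NFS_WESTINDIA"     /nfs/westindia
mount_nfs "$NFS_SOUTHEASTASIA" /nfs/southeastasia
```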
8. [Set up Azure Batch](maya_azure_batch_avere_vfxt_demo.md#create-an-azure-batch-account-and-a-pool) and create a pool of 500 single-core low-priority nodes, ensuring you pass in the bootstrap file from the previous step. As a tip, it is useful to add 20 extra nodes to smooth over nodes that get preempted. The NFS mount points are set up on each Batch node as shown in the following table:
| Server | Export | Mount Point |
| --- | --- | --- |
| Round Robin vServer IP (eastus) | /eastus | /nfs/eastusvfxt |
| Round Robin vServer IP (eastus) | /westus | /nfs/westusvfxt |
| Round Robin vServer IP (eastus) | /westindia | /nfs/westindiavfxt |
| Round Robin vServer IP (eastus) | /southeastasia | /nfs/southeastasiavfxt |
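On each Batch node, the table above translates into mounts along these lines. This is a sketch: ```VSERVER_IP``` stands in for one of the vFXT vServer round-robin addresses, and ```RUN=echo``` (the default) makes it a dry run.

```shell
# Mount the four vFXT junctions on a Batch node. VSERVER_IP is a placeholder
# for a vFXT vServer round-robin address; RUN=echo (the default) only prints
# the commands -- set RUN="" and run as root to mount for real.
RUN="${RUN:-echo}"
VSERVER_IP="${VSERVER_IP:-10.0.1.50}"   # placeholder address
for REGION in eastus westus westindia southeastasia; do
  $RUN mkdir -p "/nfs/${REGION}vfxt"
  $RUN mount -t nfs -o hard,proto=tcp,mountproto=tcp,retry=30 \
      "${VSERVER_IP}:/${REGION}" "/nfs/${REGION}vfxt"
done
```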
Once everything is set up, run a [500-frame job](maya_azure_batch_avere_vfxt_demo.md#production-run-a-job-to-render-the-demo-scene) for each region, using the mount points specified in the above table, under the following three experiments:
1. Direct: 500 frames directly against the NAS
2. Cold vFXT: 500 frames against the vFXT namespace for that region, starting with an empty cache
3. Warm vFXT: a second 500-frame run against the same vFXT namespace, with the cache already populated
If you need to re-run the cold experiment, you must:
1. Delete your pool.
2. In the vFXT management UI, on the manage core filers page, invalidate the cache. Invalidation can take up to 30 minutes.
# Results
The timing results for rendering 500 frames of the 8 GB Maya scene across 500 nodes are shown in the following table and corresponding chart:
| NFS Location | Roundtrip Latency (ms) to East US | Direct Connect (s) | Cold E32x3 vFXT (s) | Warm E32x3 vFXT (s) |
| --- | --- | --- | --- | --- |
As expected, the direct-to-NAS render times worsened as latency increased. However, in the cold vFXT case the render times were equal regardless of where the NFS filer was located. The data also shows that the second, "Warm," run was faster than the first, "Cold," run.
# Conclusion
As demonstrated by the above experiments, the Avere vFXT hides NAS filer latency regardless of where in the world the NFS filer is located. Warming the vFXT cache improves performance further. The Avere vFXT is well suited to highly parallel HPC workloads, such as rendering, that require high I/O and many heavy repeated reads.