CNTK-GPU-Infiniband-IntelMPI

This recipe shows how to run CNTK on GPUs across Infiniband/RDMA enabled Azure VMs via Intel MPI.

Configuration

Please refer to this set of sample configuration files for this recipe.

Pool Configuration

The pool configuration should enable the following properties:

  • vm_size must be STANDARD_NC24R
  • inter_node_communication_enabled must be set to true
  • max_tasks_per_node must be set to 1 or omitted
  • publisher should be OpenLogic
  • offer should be CentOS-HPC
  • sku should be 7.3
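The properties above can be sketched as a pool configuration fragment. This is an illustrative sketch only, assuming the JSON configuration format used by Batch Shipyard at the time of this recipe; the `id` and `vm_count` values are placeholders, while the remaining field values follow the list above.

```json
{
  "pool_specification": {
    "id": "cntk-gpu-ib",
    "vm_size": "STANDARD_NC24R",
    "vm_count": {
      "dedicated": 2
    },
    "inter_node_communication_enabled": true,
    "publisher": "OpenLogic",
    "offer": "CentOS-HPC",
    "sku": "7.3"
  }
}
```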

Global Configuration

The global configuration should set the following properties:

  • docker_images array must have a reference to a valid CNTK GPU-enabled Docker image that can be run with Intel MPI. Images in alfpark/cntk denoted with gpu and intelmpi tags are compatible with Azure VMs with GPUs and RDMA. Images in alfpark/cntk denoted with a refdata tag suffix contain reference data for the MNIST and CIFAR-10 examples and can be used for this recipe. If you do not need this reference data, you can use the images without the refdata suffix on the image tag. For this example, alfpark/cntk:2.1-gpu-1bitsgd-py36-cuda8-cudnn6-intelmpi-refdata can be used.
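As a sketch, the corresponding global configuration fragment might look like the following, again assuming the JSON configuration format of this Batch Shipyard era; the storage account link name is a placeholder.

```json
{
  "batch_shipyard": {
    "storage_account_settings": "mystorageaccount"
  },
  "global_resources": {
    "docker_images": [
      "alfpark/cntk:2.1-gpu-1bitsgd-py36-cuda8-cudnn6-intelmpi-refdata"
    ]
  }
}
```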

MPI Jobs Configuration (MultiNode)

The jobs configuration should set the following properties within the tasks array which should have a task definition containing:

  • image should be the name of the Docker image for this container invocation. For this example, this should be alfpark/cntk:2.1-gpu-1bitsgd-py36-cuda8-cudnn6-intelmpi-refdata. Please note that the docker_images in the Global Configuration should match this image name.
  • command should contain the command to pass to the Docker run invocation. For this example, we will run the ResNet-20 Distributed training on CIFAR-10 example in the alfpark/cntk:2.1-gpu-1bitsgd-py36-cuda8-cudnn6-intelmpi-refdata Docker image. The application command to run would be: "/cntk/run_cntk.sh -s /cntk/Examples/Image/Classification/ResNet/Python/TrainResNet_CIFAR10_Distributed.py -- --network resnet20 -q 1 -a 0 --datadir /cntk/Examples/Image/DataSets/CIFAR-10 --outputdir $AZ_BATCH_TASK_WORKING_DIR/output"
    • run_cntk.sh accepts the following parameters
      • -s for the Python script to run
      • -w for the working directory (not required for this example to run)
      • -- parameters specified after this are passed verbatim to the Python script
  • infiniband must be set to true
  • gpu must be set to true. This enables invoking the nvidia-docker wrapper.
  • multi_instance property must be defined
    • num_instances should be set to pool_specification_vm_count_dedicated, pool_specification_vm_count_low_priority, pool_current_dedicated, or pool_current_low_priority
    • coordination_command should be unset or null
    • resource_files should be unset or the array can be empty
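Putting the task properties above together, a jobs configuration sketch might look like the following. This assumes the JSON configuration format of this Batch Shipyard era; the job `id` is a placeholder, and num_instances is set to pool_current_dedicated as one of the allowed values listed above.

```json
{
  "job_specifications": [
    {
      "id": "cntk-gpu-ib-job",
      "tasks": [
        {
          "image": "alfpark/cntk:2.1-gpu-1bitsgd-py36-cuda8-cudnn6-intelmpi-refdata",
          "command": "/cntk/run_cntk.sh -s /cntk/Examples/Image/Classification/ResNet/Python/TrainResNet_CIFAR10_Distributed.py -- --network resnet20 -q 1 -a 0 --datadir /cntk/Examples/Image/DataSets/CIFAR-10 --outputdir $AZ_BATCH_TASK_WORKING_DIR/output",
          "infiniband": true,
          "gpu": true,
          "multi_instance": {
            "num_instances": "pool_current_dedicated"
          }
        }
      ]
    }
  ]
}
```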

Dockerfile and supplementary files

Supplementary files can be found here.

You must agree to the following licenses prior to use: