# Torch-GPU
This recipe shows how to run Torch on GPUs using N-series Azure VM instances in an Azure Batch compute pool.
## Configuration
Please refer to this set of sample configuration files for this recipe.
### Pool Configuration
The pool configuration should enable the following properties:
* `vm_size` must be a GPU-enabled VM size. Because Torch is a GPU-accelerated compute application, you should choose an `ND`, `NC`, or `NCv2` VM instance size.
* `vm_configuration` is the VM configuration
  * `platform_image` specifies to use a platform image
    * `publisher` should be `Canonical` or `OpenLogic`
    * `offer` should be `UbuntuServer` for Canonical or `CentOS` for OpenLogic
    * `sku` should be `16.04-LTS` for Ubuntu or `7.3` for CentOS
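For reference, a minimal sketch of how these properties might appear in the pool configuration file. The YAML layout, the `pool_specification` wrapper key, the pool id, the VM size, and the node count are illustrative assumptions; the exact schema depends on your Batch Shipyard version.

```yaml
pool_specification:
  id: torch-gpu-pool        # placeholder pool id
  vm_size: STANDARD_NC6     # any GPU-enabled NC, NCv2, or ND size
  vm_count:
    dedicated: 1            # placeholder node count
  vm_configuration:
    platform_image:
      publisher: Canonical
      offer: UbuntuServer
      sku: 16.04-LTS
```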
### Global Configuration
The global configuration should set the following properties:
* `docker_images` array must have a reference to a valid Torch GPU-enabled Docker image. `alfpark/torch:gpu` can be used for this recipe.
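A corresponding sketch of the relevant portion of the global configuration; the `global_resources` wrapper key is assumed from the usual Batch Shipyard configuration layout.

```yaml
global_resources:
  docker_images:
    # Torch GPU-enabled image referenced by the tasks in this recipe
    - alfpark/torch:gpu
```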
### Jobs Configuration
The jobs configuration should set the following properties within the `tasks` array which should have a task definition containing:
* `image` should be the name of the Docker image for this container invocation, e.g., `alfpark/torch:gpu`
* `command` should contain the command to pass to the Docker run invocation. For the `alfpark/torch:gpu` Docker image and to run the MNIST convolutional example on the GPU, the `run_mnist.sh` helper script is used. The `command` should be: `"/root/torch/run_mnist.sh"`
* `gpu` can be set to `true`, however, it is implicitly enabled by Batch Shipyard when executing on a GPU-enabled compute pool.
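A sketch of how such a task definition might look in the jobs configuration file; the `job_specifications` wrapper key and the job id are illustrative assumptions, while `image`, `command`, and `gpu` follow the properties described above.

```yaml
job_specifications:
  - id: torch-mnist         # placeholder job id
    tasks:
      - image: alfpark/torch:gpu
        command: /root/torch/run_mnist.sh
        gpu: true           # optional; implied on GPU-enabled pools
```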
## Dockerfile and supplementary files
The `Dockerfile` for the Docker image can be found here.
You must agree to the Torch License prior to use.