onnxruntime-tvm/docker
install
Dockerfile.ci_cpu
Dockerfile.ci_emscripten
Dockerfile.ci_gpu
Dockerfile.ci_i386
Dockerfile.ci_jekyll
Dockerfile.ci_lint
Dockerfile.conda_cpu
Dockerfile.conda_cuda90
Dockerfile.conda_cuda100
Dockerfile.demo_android
Dockerfile.demo_cpu
Dockerfile.demo_gpu
Dockerfile.demo_opencl
README.md
bash.sh
build.sh
with_the_same_user

README.md

TVM Docker

This directory contains TVM's Docker infrastructure. We use Docker to provide build environments for CI and prebuilt images for demos. The GPU images require both docker and nvidia-docker.

Start Docker Bash Session

You can use the following helper script to start an interactive bash session with a given image_name.

/path/to/tvm/docker/bash.sh image_name

The script does the following (sketched below):

  • Mounts the current directory to /workspace and sets it as the home directory
  • Switches to the same user that invoked bash.sh
  • Uses the host-side network
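
Concretely, this is roughly equivalent to the following docker run invocation (a simplified sketch; the user switching is handled by the with_the_same_user script, and the exact flags in bash.sh may differ):

# approximate equivalent of "bash.sh image_name"; illustrative only
docker run --rm -it \
    --net=host \
    -v "$(pwd)":/workspace \
    -w /workspace \
    -e HOME=/workspace \
    image_name bash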

This helper script is useful for building demo sessions.

Prebuilt Docker Images

We provide several prebuilt images for quick exploration, with TVM preinstalled. For example, you can run the following command to get the tvmai/demo-cpu image.

/path/to/tvm/docker/bash.sh tvmai/demo-cpu

Then, inside the docker container, you can type the following command to start a Jupyter notebook:

jupyter notebook
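
Because bash.sh uses the host network, the notebook server is reachable directly from a browser on the host. If you want to pin the port and skip opening a browser inside the container, something like the following should work (the port is just an example):

jupyter notebook --no-browser --port=8888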

Check out https://hub.docker.com/r/tvmai/ for the full list of available prebuilt images.

Use Local Build Script

We also provide a script to build docker images locally. We use [build.sh](./build.sh) to build the images and run commands inside them. To build and run a docker image, run the following command at the root of the project.

./docker/build.sh image_name [command]

Here image_name corresponds to the image defined in Dockerfile.image_name.
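
Under the hood, build.sh amounts to roughly a docker build followed by a docker run (a simplified sketch; the tag name here is illustrative, and the real script in build.sh handles more cases):

# illustrative only; build.sh picks the actual tag and flags
docker build -t tvm.ci_lint -f docker/Dockerfile.ci_lint docker/
docker run --rm -v "$(pwd)":/workspace -w /workspace tvm.ci_lint make pylint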

You can also start an interactive session by typing

./docker/build.sh image_name -it bash

The build command maps the tvm root to /workspace/ inside the container and runs as the same user that invoked the docker command. Here are some common examples for performing CI tasks:

  • lint the Python code

    ./docker/build.sh ci_lint make pylint
    
  • build the code with CUDA support

    ./docker/build.sh ci_gpu make -j$(nproc)
    
  • run the Python unit tests

    ./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh
    
  • build the documentation. The results will be available at docs/_build/html

    ./docker/build.sh ci_gpu make -C docs html
    
  • build the Golang test suite

    ./docker/build.sh ci_cpu tests/scripts/task_golang.sh
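
These commands can be chained to approximate a full CI pass locally, for example:

# lint, build with CUDA support, then run the Python unit tests
./docker/build.sh ci_lint make pylint && \
./docker/build.sh ci_gpu make -j$(nproc) && \
./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh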