onnxruntime-tvm/docker
install
Dockerfile.ci_cpu
Dockerfile.ci_emscripten
Dockerfile.ci_gpu
Dockerfile.ci_i386
Dockerfile.ci_jekyll
Dockerfile.ci_lint
Dockerfile.demo_android
Dockerfile.demo_cpu
Dockerfile.demo_gpu
Dockerfile.demo_opencl
README.md
bash.sh
build.sh
with_the_same_user

README.md

TVM Docker

This directory contains TVM's docker infrastructure. We use docker to provide build environments for CI and images for demos. GPU images additionally require nvidia-docker.

Start Docker Bash Session

You can use the following helper script to start an interactive bash session with a given image_name.

/path/to/tvm/docker/bash.sh image_name

The script does the following:

  • Mounts the current directory to /workspace and sets it as the home directory
  • Switches to the same user that invoked bash.sh
  • Uses the host-side network

This helper script is useful for building demo sessions.
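
Under the hood, the invocation corresponds roughly to a docker run along the following lines. This is a simplified sketch with assumed flags, not the script's literal contents; the actual options are set by bash.sh and may differ (for example, the networking setup on macOS).

    # Simplified sketch of what bash.sh does (assumed flags):
    #   - mount and enter the current directory as /workspace, treat it as HOME
    #   - run as the invoking user and share the host network
    docker run --rm -it \
        --net=host \
        -v "$(pwd)":/workspace \
        -w /workspace \
        -e HOME=/workspace \
        -u "$(id -u)":"$(id -g)" \
        image_name \
        bash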

Prebuilt Docker Images

We provide several prebuilt images for quick exploration with TVM installed. For example, you can run the following command to launch the tvmai/demo-cpu image.

/path/to/tvm/docker/bash.sh tvmai/demo-cpu

Then, inside the docker container, you can run the following command to start a Jupyter notebook:

jupyter notebook
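
If the notebook is not reachable from a browser on the host, binding Jupyter to all interfaces usually helps; the port below is only the Jupyter default and is illustrative:

    jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser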

Check out https://hub.docker.com/r/tvmai/ to get the full list of available prebuilt images.

Use Local Build Script

We also provide a script to build docker images locally. We use build.sh to build the image and run commands inside it. To build and run a docker image, run the following command at the root of the project.

./docker/build.sh image_name [command]

Here image_name corresponds to the image defined in Dockerfile.image_name. For example, ci_lint uses Dockerfile.ci_lint.

You can also start an interactive session by typing

./docker/build.sh image_name -it bash

The build command maps the tvm root to /workspace/ inside the container and runs as the same user that invokes the docker command. Here are some common examples for performing CI tasks; a combined sketch that chains them follows the list.

  • lint the Python code

    ./docker/build.sh ci_lint make pylint
    
  • build with CUDA support

    ./docker/build.sh ci_gpu make -j$(nproc)
    
  • run the Python unit tests

    ./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh
    
  • build the documentation; the results will be available at docs/_build/html

    ./docker/build.sh ci_gpu make -C docs html
    
  • build the Golang test suite

    ./docker/build.sh ci_cpu tests/scripts/task_golang.sh
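
As a rough end-to-end illustration, these tasks can be chained into a small local script. The script below is only an assumed convenience wrapper around the commands shown above, not part of the repository:

    #!/usr/bin/env bash
    # Illustrative wrapper: run the common CI tasks locally, stopping at the first failure.
    set -euo pipefail

    ./docker/build.sh ci_lint make pylint                           # lint the Python code
    ./docker/build.sh ci_gpu make -j"$(nproc)"                      # build with CUDA support
    ./docker/build.sh ci_gpu tests/scripts/task_python_unittest.sh  # run the Python unit tests
    ./docker/build.sh ci_gpu make -C docs html                      # build the documentation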