onnxruntime/dockerfiles
Tianlei Wu b4afc6266f
[ROCm] Python 3.10 in ROCm CI, and ROCm 6.2.3 in MigraphX CI (#22527)
### Description
Upgrade Python from 3.9 to 3.10 in the ROCm and MigraphX docker files and CI
pipelines. Upgrade the ROCm version to 6.2.3 in most places except the ROCm CI;
see the note below.

Some improvements/upgrades to the ROCm/MigraphX docker files and pipelines:
* ROCm 6.0/6.1.3 => 6.2.3
* Python 3.9 => 3.10
* Ubuntu 20.04 => 22.04
* Also upgrade the ml_dtypes, numpy and scipy packages.
* Fix the "ROCm version from ..." message in CMakeLists.txt so it reports the correct file path.
* Exclude some NHWC tests since the ROCm EP lacks support for NHWC convolution.

#### ROCm CI Pipeline:
ROCm 6.1.3 is kept in the pipeline for now.
- Failed after upgrading to ROCm 6.2.3: `HIPBLAS_STATUS_INVALID_VALUE ;
GPU=0 ; hostname=76123b390aed ;
file=/onnxruntime_src/onnxruntime/core/providers/rocm/rocm_execution_provider.cc
; line=170 ; expr=hipblasSetStream(hipblas_handle_, stream);`. This needs
further investigation.
- cupy issues:
(1) cupy currently supports numpy < 1.27 and might not work with numpy 2.x,
so we pinned numpy==1.26.4 for now.
(2) cupy support for ROCm 6.2 is still in progress:
https://github.com/cupy/cupy/issues/8606.

Note a miniconda issue: its libstdc++.so.6 and libgcc_s.so.1 might
conflict with the system ones, so we created links to use the
system versions (a sketch is shown below).
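
A minimal sketch of that workaround, assuming miniconda is installed under /opt/miniconda on an x86_64 Ubuntu image (both paths are assumptions; adjust them to the actual layout):

# Point miniconda's bundled runtime libraries at the system copies so the
# system libstdc++/libgcc are used inside the container.
ln -sf /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /opt/miniconda/lib/libstdc++.so.6
ln -sf /usr/lib/x86_64-linux-gnu/libgcc_s.so.1  /opt/miniconda/lib/libgcc_s.so.1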

#### MigraphX CI pipeline

MigraphX CI does not use cupy, and we are able to use ROCm 6.2.3 and
numpy 2.x in the pipeline.

#### Other attempts

Other things I have tried that might help in the future:

Attempt to use a single docker file for both ROCm and Migraphx:
https://github.com/microsoft/onnxruntime/pull/22478

Upgrade to Ubuntu 24.04 and Python 3.12, and use venv like
[this](27903e7ff1/tools/ci_build/github/linux/docker/rocm-ci-pipeline-env.Dockerfile).

### Motivation and Context
In the 1.20 release, the ROCm nuget packaging pipeline will use ROCm 6.2:
https://github.com/microsoft/onnxruntime/pull/22461.
This PR upgrades ROCm to 6.2.3 in the CI pipelines to be consistent.
2024-10-25 11:47:16 -07:00
scripts [EP Perf] Update cmake (#21624) 2024-08-05 16:41:56 -07:00
Dockerfile.cuda [CUDA] Add CUDA_VERSION and CUDNN_VERSION etc. arguments to Dockerfile.cuda (#22351) 2024-10-09 12:06:33 -07:00
Dockerfile.jetson add install sec updates (#4957) 2020-08-31 18:13:02 -07:00
Dockerfile.migraphx [ROCm] Python 3.10 in ROCm CI, and ROCm 6.2.3 in MigraphX CI (#22527) 2024-10-25 11:47:16 -07:00
Dockerfile.openvino ORT- OVEP 1.19 PR-follow up (#21546) 2024-07-29 14:12:36 -07:00
Dockerfile.rocm [ROCm] Python 3.10 in ROCm CI, and ROCm 6.2.3 in MigraphX CI (#22527) 2024-10-25 11:47:16 -07:00
Dockerfile.source Update dockerfiles/Dockerfile.source to avoid installing onnx (#17975) 2023-10-20 09:24:21 -07:00
Dockerfile.tensorrt Update cmake to 3.27 and upgrade Linux CUDA docker files from CentOS7 to UBI8 (#16856) 2023-09-05 18:12:10 -07:00
Dockerfile.vitisai Update cmake to 3.27 and upgrade Linux CUDA docker files from CentOS7 to UBI8 (#16856) 2023-09-05 18:12:10 -07:00
LICENSE-IMAGE.txt Dockerfiles for TensorRT, CUDA, build from source (#922) 2019-07-09 02:03:55 -07:00
README.md [ROCm] Python 3.10 in ROCm CI, and ROCm 6.2.3 in MigraphX CI (#22527) 2024-10-25 11:47:16 -07:00

README.md


Instructions

CPU

Mariner 2.0, CPU, Python Bindings

  1. Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-source -f Dockerfile.source ..
  2. Run the Docker image
docker run -it onnxruntime-source

The docker file supports both x86_64 and ARM64 (aarch64). You may use docker's "--platform" parameter to explicitly specify the CPU architecture you want to build for. For example:

  docker build --platform linux/arm64/v8 -f Dockerfile.source ..

However, we cannot build the code for 32-bit ARM in such a way since a 32-bit compiler/linker might not have enough memory to generate the binaries.

CUDA

Ubuntu 24.04, CUDA 12.x, cuDNN 9.x

  1. Build the docker image from the Dockerfile in this repository. Choose an available CUDA and cuDNN version, then build the docker image like the following:
git submodule update --init
docker build -t onnxruntime-cuda --build-arg CUDA_VERSION=12.6.1 \
                                 --build-arg CUDNN_VERSION=9.5.0.50 \
                                 --build-arg GIT_BRANCH=$(git rev-parse --abbrev-ref HEAD) \
                                 --build-arg GIT_COMMIT=$(git rev-parse HEAD) \
                                 --build-arg ONNXRUNTIME_VERSION=$(cat ../VERSION_NUMBER) \
                                 -f Dockerfile.cuda ..

To inspect the labels of the built image, run the following:

docker inspect onnxruntime-cuda
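
To print only the labels, docker inspect's --format option can narrow the output, for example:

docker inspect --format '{{json .Config.Labels}}' onnxruntime-cuda
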
  2. Run the Docker image
docker run --rm --gpus all -it onnxruntime-cuda

or

nvidia-docker run -it onnxruntime-cuda
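
To quickly check that the GPU is visible inside the container, you can run nvidia-smi in it; this assumes the NVIDIA container toolkit makes nvidia-smi available inside the container, which is the typical setup:

docker run --rm --gpus all onnxruntime-cuda nvidia-smi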

TensorRT

Ubuntu 20.04, CUDA 11.8, TensorRT 8.5.1

  1. Update submodules
git submodule update --init
  2. Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
  3. Run the Docker image
docker run --gpus all -it onnxruntime-trt
or
nvidia-docker run -it onnxruntime-trt

OpenVINO

Public Preview

Ubuntu 20.04, Python & C# Bindings
RHEL 8.4, Python Bindings

1. Using pre-built container images for Python API

The unified container image from Docker Hub can be used to run an application on any of the target accelerators. To select the target accelerator, the application should explicitly specify the choice using the device_type configuration option of the OpenVINO Execution Provider. Refer to the OpenVINO EP runtime configuration documentation for details on specifying this option in the application code. If the device_type runtime config option is not explicitly specified, the CPU will be chosen as the hardware target for execution.
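
For reference, a minimal sketch of pulling and starting the pre-built image; the image name and tag used here (openvino/onnxruntime_ep_ubuntu20:latest) are an assumption, so check Docker Hub for the current name:

docker pull openvino/onnxruntime_ep_ubuntu20:latest
docker run -it --rm openvino/onnxruntime_ep_ubuntu20:latest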

2. Building from Dockerfile

  1. Build the onnxruntime image for one of the accelerators supported below.

    Retrieve your docker image in one of the following ways.

    • Choose Dockerfile.openvino (Python API) or Dockerfile.openvino-csharp (C# API) to build the latest OpenVINO-based Docker image for Ubuntu 20.04, or Dockerfile.openvino-rhel (Python API) for RHEL 8.4. Providing the docker build argument DEVICE enables the onnxruntime build for that particular device. You can also provide the arguments ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH to test a particular repo and branch. The default repository is http://github.com/microsoft/onnxruntime and the default branch is main.
      docker build --rm -t onnxruntime --build-arg DEVICE=$DEVICE -f <Dockerfile> .
      
    • Pull the official image from DockerHub.
  2. DEVICE: Specifies the hardware target for building OpenVINO Execution Provider. Below are the options for different Intel target devices.

Device Option | Target Device
CPU_FP32 | Intel CPUs
CPU_FP16 | Intel CPUs
GPU_FP32 | Intel Integrated Graphics
GPU_FP16 | Intel Integrated Graphics
MYRIAD_FP16 | Intel Movidius™ USB sticks
VAD-M_FP16 | Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs
HETERO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... | All Intel® silicons mentioned above
MULTI:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... | All Intel® silicons mentioned above
AUTO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... | All Intel® silicons mentioned above

Specifying Hardware Target for HETERO or MULTI or AUTO Build:

HETERO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>...
MULTI:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>...
AUTO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>...

The <DEVICE_TYPE> can be any of the devices from this list: ['CPU', 'GPU', 'MYRIAD', 'HDDL'].

A minimum of two DEVICE_TYPE values should be specified for a valid HETERO, MULTI, or AUTO build.

Examples: HETERO:MYRIAD,CPU  HETERO:HDDL,GPU,CPU  MULTI:MYRIAD,GPU,CPU  AUTO:GPU,CPU

This is the hardware accelerator target that is enabled by default in the container image. After building the container image for one default target, the application may explicitly choose a different target at run time with the same container by using the dynamic device selection API.

OpenVINO on CPU

  1. Build the docker image from the Dockerfile in this repository.

    docker build --rm -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 -f <Dockerfile> .
    
  2. Run the docker image

     docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb onnxruntime-cpu:latest
    

OpenVINO on GPU

  1. Build the docker image from the Dockerfile in this repository.
    docker build --rm -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 -f <Dockerfile> .
    
  2. Run the docker image
    docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --device /dev/dri:/dev/dri onnxruntime-gpu:latest
    
    If your host system is Ubuntu 20, use the command below instead. Please find the alternative steps here.
    docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --device /dev/dri:/dev/dri --group-add=$(stat -c "%g" /dev/dri/render*) onnxruntime-gpu:latest
    

OpenVINO on Myriad VPU Accelerator

  1. Build the docker image from the Dockerfile in this repository.

     docker build --rm -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 -f <Dockerfile> .
    
  2. Install the Myriad rules and drivers on the host machine according to the reference here.

  3. Run the docker image by mounting the device drivers

    docker run -it --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb onnxruntime-myriad:latest
    
    

OpenVINO on VAD-M Accelerator Version

  1. Download the latest full OpenVINO package for Linux on the host machine from this link and install it with the help of the instructions from this link.

  2. Install the drivers on the host machine according to the reference here.

  3. Build the docker image from the Dockerfile in this repository.

     docker build --rm -t onnxruntime-vadm --build-arg DEVICE=VAD-M_FP16 -f <Dockerfile> .
    
  4. Run hddldaemon on the host in a separate terminal session using the following steps:

    • Initialize the OpenVINO environment.
        source <openvino_install_directory>/setupvars.sh
      
    • Edit the hddl_service.config file from $HDDL_INSTALL_DIR/config/hddl_service.config and change the field “bypass_device_number” to 8.
    • Restart the hddl daemon for the changes to take effect.
     $HDDL_INSTALL_DIR/bin/hddldaemon
    
    • Note that if OpenVINO was installed with root permissions, this file has to be changed with the same permissions.
  5. Run the docker image by mounting the device drivers

    docker run -itu root:root --rm --device-cgroup-rule='c 189:* rmw' -v /dev/bus/usb:/dev/bus/usb --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion  onnxruntime-vadm:latest
    

OpenVINO on HETERO or Multi-Device Build

  1. Build the docker image from the Dockerfile in this repository.

    for HETERO:

     docker build --rm -t onnxruntime-HETERO --build-arg DEVICE=HETERO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... -f <Dockerfile> .
    

    for MULTI:

     docker build --rm -t onnxruntime-MULTI --build-arg DEVICE=MULTI:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... -f <Dockerfile> .
    

    for AUTO:

     docker build --rm -t onnxruntime-AUTO --build-arg DEVICE=AUTO:<DEVICE_TYPE_1>,<DEVICE_TYPE_2>,<DEVICE_TYPE_3>... -f <Dockerfile> .
    
  2. Install the required rules, drivers, and other packages, as described in the steps above, for each DEVICE_TYPE included in the HETERO, MULTI, or AUTO device build.

  3. Run the docker image as described in the steps above.

ARM 32/64

The build instructions are similar to those for x86 CPUs. However, if you want to build the images on an x86 machine, you need to install the qemu-user-static system package (outside of any docker instance) first; a host-setup sketch is shown at the end of this section. Then

  1. Update submodules
git submodule update --init
  2. Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-source -f Dockerfile.arm64 ..
  3. Run the Docker image
docker run -it onnxruntime-source

For ARM32, please use Dockerfile.arm32v7 instead of Dockerfile.arm64.
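
A minimal sketch of the host-side setup mentioned above, assuming a Debian/Ubuntu x86_64 host (package names may differ on other distributions); installing qemu-user-static registers the binfmt handlers Docker needs to emulate ARM binaries during the build:

# Install the static QEMU user-mode emulators and binfmt registration support.
sudo apt-get update
sudo apt-get install -y qemu-user-static binfmt-support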

NVIDIA Jetson TX1/TX2/Nano/Xavier:

These instructions are for JetPack SDK 4.4. Dockerfile.jetson uses NVIDIA L4T 32.4.3 as the base image. Different versions may require modifications to these instructions. The instructions assume you are on a Jetson host, in the root of an onnxruntime git clone (https://github.com/microsoft/onnxruntime).

Two-step installation is required:

  1. Build a Python wheel for ONNX Runtime on the Jetson host system; pre-built Python wheels are also available in the NVIDIA Jetson Zoo.
  2. Build a Docker image using the ONNX Runtime wheel from step 1. You can also install the wheel on the host directly.

Here are the build commands for each step:

1.1 Install ONNX Runtime build dependencies on the JetPack 4.4 host:

   sudo apt install -y --no-install-recommends \
       build-essential software-properties-common cmake libopenblas-dev \
       libpython3.6-dev python3-pip python3-dev

1.2 Build the ONNX Runtime Python wheel:

   ./build.sh --update --config Release --build --build_wheel \
   --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu

Note: You may add the --use_tensorrt and --tensorrt_home options if you wish to use NVIDIA TensorRT (support is experimental), as well as any other options supported by the build.sh script; an example follows.
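
For example, a TensorRT-enabled wheel build might look like the following; the --tensorrt_home path is an assumption about where JetPack places the TensorRT libraries, so adjust it for your installation:

   # Same build as above, plus the experimental TensorRT EP flags.
   ./build.sh --update --config Release --build --build_wheel \
   --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
   --use_tensorrt --tensorrt_home /usr/lib/aarch64-linux-gnu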

  2. After the Python wheel is successfully built, use the 'find' command to pass the wheel to Docker and install it inside the new image:
   find . -name '*.whl' -print -exec sudo -H DOCKER_BUILDKIT=1 nvidia-docker build --build-arg WHEEL_FILE={} -f ./dockerfiles/Dockerfile.jetson . \;

Note: The resulting Docker image will have ONNX Runtime installed in /usr, and the ONNX Runtime wheel copied to the /onnxruntime directory. Nothing else from the ONNX Runtime source tree will be copied or installed into the image.

Note: When running the container you built in Docker, please either use the 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure the NVIDIA runtime is used and the appropriate files are mounted from the host; otherwise, CUDA libraries won't be found. You can also set the NVIDIA runtime as the default in Docker. An example is shown below.
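
For example, selecting the NVIDIA runtime explicitly when starting the container; the image name onnxruntime-jetson is a hypothetical tag, since the build command above does not tag the image, and on hosts with a recent Docker plus the NVIDIA container toolkit, --gpus all is an alternative:

docker run --rm --runtime nvidia -it onnxruntime-jetson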

MIGraphX

Ubuntu 22.04, ROCm 6.2.3, MIGraphX

  1. Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-migraphx -f Dockerfile.migraphx .
  2. Run the Docker image
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video onnxruntime-migraphx

ROCm

Ubuntu 22.04, ROCm 6.2.3

  1. Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-rocm -f Dockerfile.rocm .
  2. Run the Docker image
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video onnxruntime-rocm