ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Changming Sun 57dfd15d7b
Remove dnf update from docker build scripts (#17551)
### Description
1. Remove 'dnf update' from the docker build scripts, because it upgrades the TRT
packages from CUDA 11.x to CUDA 12.x.
To reproduce the issue, run the following commands in a CUDA 11.x UBI8 docker
image such as nvidia/cuda:11.8.0-cudnn8-devel-ubi8:
```
export v=8.6.1.6-1.cuda11.8
dnf install -y libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} libnvinfer-vc-plugin8-${v} \
    libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v} \
    libnvinfer-vc-plugin-devel-${v} libnvinfer-headers-devel-${v} libnvinfer-headers-plugin-devel-${v}
dnf update -y
```
The last command generates the following output:
```
========================================================================================================================
 Package                                     Architecture       Version                          Repository        Size
========================================================================================================================
Upgrading:
 libnvinfer-devel                            x86_64             8.6.1.6-1.cuda12.0               cuda             542 M
 libnvinfer-headers-devel                    x86_64             8.6.1.6-1.cuda12.0               cuda             118 k
 libnvinfer-headers-plugin-devel             x86_64             8.6.1.6-1.cuda12.0               cuda              14 k
 libnvinfer-plugin-devel                     x86_64             8.6.1.6-1.cuda12.0               cuda              13 M
 libnvinfer-plugin8                          x86_64             8.6.1.6-1.cuda12.0               cuda              13 M
 libnvinfer-vc-plugin-devel                  x86_64             8.6.1.6-1.cuda12.0               cuda             107 k
 libnvinfer-vc-plugin8                       x86_64             8.6.1.6-1.cuda12.0               cuda             251 k
 libnvinfer8                                 x86_64             8.6.1.6-1.cuda12.0               cuda             543 M
 libnvonnxparsers-devel                      x86_64             8.6.1.6-1.cuda12.0               cuda             467 k
 libnvonnxparsers8                           x86_64             8.6.1.6-1.cuda12.0               cuda             757 k
 libnvparsers-devel                          x86_64             8.6.1.6-1.cuda12.0               cuda             2.0 M
 libnvparsers8                               x86_64             8.6.1.6-1.cuda12.0               cuda             854 k
Installing dependencies:
 cuda-toolkit-12-0-config-common             noarch             12.0.146-1                       cuda             7.7 k
 cuda-toolkit-12-config-common               noarch             12.2.140-1                       cuda             7.9 k
 libcublas-12-0                              x86_64             12.0.2.224-1                     cuda             361 M
 libcublas-devel-12-0                        x86_64             12.0.2.224-1                     cuda             397 M

Transaction Summary
========================================================================================================================

```
As the output shows, these are all CUDA 12 packages.

The problem could also be solved by locking the package versions with the
"dnf versionlock" command right after installing the CUDA/TRT packages (a short
sketch of that alternative is shown below). However, for better reproducibility
going forward, I suggest pinning the dnf package versions manually in the
installation scripts, as we already do for TRT:

```bash
v="8.6.1.6-1.cuda11.8" &&\
    yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo &&\
    yum -y install libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} libnvinfer-vc-plugin8-${v}\
        libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v} libnvinfer-vc-plugin-devel-${v} libnvinfer-headers-devel-${v}  libnvinfer-headers-plugin-devel-${v}
```
When we need to upgrade a package, for example because of a security alert, we
will change the version string manually instead of relying on "dnf update".
Although this approach takes more effort, it makes our pipelines more stable.

2. Move the Python tests into docker.
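A rough sketch of what item 2 means in practice (the image name is a placeholder and the test selection is illustrative; this is not the actual pipeline change):

```bash
# Sketch only: run the Python unit tests inside the CUDA build container
# instead of directly on the host agent (assumes pytest exists in the image).
docker run --rm --gpus all \
    -v "$(pwd):/onnxruntime_src" \
    -w /onnxruntime_src \
    onnxruntime-cuda-build-image \
    python3 -m pytest onnxruntime/test/python -q
```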
### Motivation and Context
Right now the nightly GPU package mixes CUDA 11.x and CUDA 12.x libraries, and
the resulting package is completely unusable (it crashes every time).
2023-09-21 07:33:29 -07:00
| Path | Last commit | Date |
| --- | --- | --- |
| .config | Update tsaoptions.json: update the email alias (#13448) | 2022-10-26 15:56:16 -07:00 |
| .devcontainer | Remove two lines in the Dockerfile for Github Codespace (#12278) | 2022-07-21 20:52:17 -07:00 |
| .gdn | Update win-ci-pipeline.yml: enable xnnpack tests (#16244) | 2023-06-14 19:12:42 -07:00 |
| .github | Bump actions/checkout from 3 to 4 (#17487) | 2023-09-13 09:22:21 -07:00 |
| .pipelines | Bump DirectML version from 1.12.0 to 1.12.1 (#17225) | 2023-08-20 09:55:38 -07:00 |
| .vscode | Broadcasting for SLN for CPU and CUDA (#16510) | 2023-08-07 09:55:42 -07:00 |
| cgmanifests | Update C/C++ dependencies: abseil, date, nsync, googletest, wil, mp11, cpuinfo and safeint (#15470) | 2023-09-08 13:35:04 -07:00 |
| cmake | Refactor Attention cuda kernel (#17578) | 2023-09-19 09:49:21 -07:00 |
| csharp | Bump Up Version to 1.17.0 (#17587) | 2023-09-20 11:02:58 +08:00 |
| dockerfiles | Update cmake to 3.27 and upgrade Linux CUDA docker files from CentOS7 to UBI8 (#16856) | 2023-09-05 18:12:10 -07:00 |
| docs | Bump Up Version to 1.17.0 (#17587) | 2023-09-20 11:02:58 +08:00 |
| include/onnxruntime/core | Bump Up Version to 1.17.0 (#17587) | 2023-09-20 11:02:58 +08:00 |
| java | [java] Relaxing CoreML test (#16777) | 2023-08-09 11:43:05 -07:00 |
| js | Bump Up Version to 1.17.0 (#17587) | 2023-09-20 11:02:58 +08:00 |
| objectivec | Objective-C Add Support to Create and Query String ORTValues (#16764) | 2023-07-20 17:39:29 -07:00 |
| onnxruntime | Fix API 16's marker (#17640) | 2023-09-20 19:51:50 -07:00 |
| orttraining | Remove redundant Resolve() after each inlined function (#17556) | 2023-09-15 12:13:37 -07:00 |
| rust | rust bindings: Do not unnecessarily re-run build.rs (#17018) | 2023-09-05 19:42:06 -07:00 |
| samples | Enable pylint and numpy rules (#15218) | 2023-03-27 20:37:53 -07:00 |
| swift/OnnxRuntimeBindingsTests | Add iOS Swift Package Manager support (#15297) | 2023-04-20 16:18:35 +10:00 |
| tools | Remove dnf update from docker build scripts (#17551) | 2023-09-21 07:33:29 -07:00 |
| winml | remove gsl to make status.h independent from gsl (#17402) | 2023-09-13 21:47:43 -07:00 |
| .clang-format | Prevent GSL_SUPPRESS arguments from being modified by clang-format (#17242) | 2023-08-22 18:26:53 -07:00 |
| .clang-tidy | Create clang-tidy CI (#12653) | 2022-09-30 08:05:38 -07:00 |
| .dockerignore | Update dockerfiles (#5929) | 2020-11-25 15:38:22 -08:00 |
| .gitattributes | Initial bootstrap commit. | 2018-11-19 16:48:22 -08:00 |
| .gitignore | remove 'lib/' from .gitignore (#15613) | 2023-04-24 18:43:32 -07:00 |
| .gitmodules | Remove onnxruntime extensions from list of gitmodules (#17615) | 2023-09-19 17:12:14 -07:00 |
| .lintrunner.toml | Format c++ code under `winml/` (#16660) | 2023-07-25 21:56:50 -07:00 |
| CITATION.cff | Fix CITATION.cff and add automatic validation of your citation metadata (#10478) | 2022-04-13 10:03:52 -07:00 |
| CODEOWNERS | Add owners for public facing API files (#15288) | 2023-03-30 17:16:15 -07:00 |
| CONTRIBUTING.md | Fix link to High Level Design (#11786) | 2023-02-28 11:05:54 -08:00 |
| LICENSE | Remove year from license (#6658) | 2021-02-12 00:25:56 -08:00 |
| NuGet.config | Delete nuget extra configs (#6477) | 2021-01-27 20:25:45 -08:00 |
| ORT_icon_for_light_bg.png | Update nuget icon (#10672) | 2022-03-01 09:11:03 -08:00 |
| Package.swift | Objective-C Add Support to Create and Query String ORTValues (#16764) | 2023-07-20 17:39:29 -07:00 |
| README.md | add third-party pipeline status to README.md (#16155) | 2023-05-31 22:14:39 -07:00 |
| SECURITY.md | Microsoft mandatory file (#11619) | 2022-05-25 13:56:10 -07:00 |
| ThirdPartyNotices.txt | Flash Attention v2 MHA (#17227) | 2023-08-31 13:52:21 -07:00 |
| VERSION_NUMBER | Bump Up Version to 1.17.0 (#17587) | 2023-09-20 11:02:58 +08:00 |
| build.bat | try to find patch.exe in git default installation folder (#17106) | 2023-08-10 21:48:13 -07:00 |
| build.sh | Upgrade old Python version in packaging pipeline (#16667) | 2023-07-17 08:24:47 -07:00 |
| lgtm.yml | Fix lgtm C++ error (#13613) | 2022-11-10 10:06:22 -08:00 |
| ort.wprp | Add Tracelogging for profiling (#1639) | 2019-11-11 21:34:10 -08:00 |
| packages.config | Bump DirectML version from 1.12.0 to 1.12.1 (#17225) | 2023-08-20 09:55:38 -07:00 |
| pyproject.toml | Updating QDQ to support Float8E4M3FN (#16550) | 2023-08-08 12:18:48 +02:00 |
| requirements-dev.txt | Remove codecov from requirements-dev.txt (#15487) | 2023-04-12 18:48:02 -07:00 |
| requirements-doc.txt | Add auto doc gen for ORTModule API during CI build (#7046) | 2021-03-22 10:20:33 -07:00 |
| requirements-lintrunner.txt | Bump clang-format to 16.0.6 in CI (#17099) | 2023-08-10 13:53:04 -07:00 |
| requirements-training.txt | Remove protobuf pin from training requirements (#13695) | 2022-11-22 12:27:18 -08:00 |
| requirements.txt.in | Add additional python requirements (#11522) | 2022-05-20 16:16:18 -07:00 |
| setup.py | Update tensorrt_dependencies in setup.py (#17562) | 2023-09-15 08:20:47 -07:00 |

README.md

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →

Get Started & Resources

Builtin Pipeline Status

[Badge table: build-status badges for the built-in pipelines, with Inference and Training columns and rows for Windows, Linux, Mac, Android, iOS, Web, and Other; badge images not reproduced here.]

Third-party Pipeline Status

[Badge table: build-status badges for the third-party Linux pipelines, with Inference and Training columns; badge images not reproduced here.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.