ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

README.md


ONNX Runtime is a cross-platform inferencing and training accelerator compatible with many popular ML/DNN frameworks, including PyTorch, TensorFlow/Keras, scikit-learn, and more. aka.ms/onnxruntime

Many users can benefit from ONNX Runtime, including those looking to:

  • Improve inference performance for a wide variety of ML models
  • Reduce time and cost of training large models
  • Train in Python but deploy into a C#/C++/Java app
  • Run on different hardware and operating systems
  • Support models created in several different frameworks

ONNX Runtime's inferencing APIs have been stable and production-ready since the 1.0 release in October 2019 and can enable faster customer experiences and lower costs.

The ONNX Runtime training feature was introduced in preview in May 2020. It supports acceleration of PyTorch training for transformer models on multi-node NVIDIA GPUs. Additional updates for this feature are coming soon.


Table of Contents

  • Get Started
  • Frequently Asked Questions

Inferencing: Start

To use ONNX Runtime, refer to the table on aka.ms/onnxruntime for instructions for different build combinations.

Compatibility

Supporting models based on the standard ONNX format, the runtime is compatible with PyTorch, scikit-learn, TensorFlow, Keras, and all other frameworks and tools that support the interoperable format.

ONNX Runtime is up to date and backwards compatible with all operators (both DNN and traditional ML) since ONNX v1.2.1+. (ONNX compatibility details). Newer versions of ONNX Runtime support all models that worked with prior versions, so updates should not break integrations.
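
As a quick illustration, a minimal Python inference sketch looks like the following (the model path and input shape are hypothetical placeholders; any model exported to the ONNX format from a supported framework can be used):

import numpy as np
import onnxruntime

# Load an ONNX model ("model.onnx" is a placeholder for any exported model file)
session = onnxruntime.InferenceSession("model.onnx")

# Inspect the model's declared inputs to build a matching feed dictionary
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None requests all model outputs
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)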

Binaries

Official builds are available on PyPI (Python), NuGet (C#/C/C++), Maven Central (Java), and npm (Node.js).

  • Default CPU Provider (Eigen + MLAS)
  • GPU Provider - NVIDIA CUDA
  • GPU Provider - DirectML (Windows)

Dev builds created from the master branch are available for testing newer changes between official releases. Please use these at your own risk. We strongly advise against deploying these to production workloads as support is limited for dev builds.

Repository Details

  • PyPI (Python): If using pip, run pip install --upgrade pip prior to downloading.
    • CPU: onnxruntime / ort-nightly (dev)
    • GPU: onnxruntime-gpu / ort-gpu-nightly (dev)
  • NuGet (C#/C/C++)
    • CPU: Microsoft.ML.OnnxRuntime / ort-nightly (dev)
    • GPU: Microsoft.ML.OnnxRuntime.Gpu / ort-nightly (dev)
  • Maven Central (Java)
    • CPU: com.microsoft.onnxruntime/onnxruntime
    • GPU: com.microsoft.onnxruntime/onnxruntime_gpu
  • npm (Node.js)
    • CPU: onnxruntime
  • Other: Contributed non-official packages (including Homebrew, Linuxbrew, and nixpkgs). These are not maintained by the core ONNX Runtime team and may have limited support; use at your discretion.
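
After installing one of these packages, a quick sanity check in Python confirms the version and whether the CPU or GPU build is in use (a minimal sketch):

import onnxruntime

print(onnxruntime.__version__)   # version of the installed package
print(onnxruntime.get_device())  # 'CPU' for the default package, 'GPU' for the CUDA package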

System Requirements

The following are required for usage of the official published packages.

  • Visual C++ Runtime (for Windows packages)

  • System language

    • Installation of the English language package and configuration of the en_US.UTF-8 locale are required, as certain operators make use of system locales.
    • For Ubuntu, install the language-pack-en package
      • Run the following commands: locale-gen en_US.UTF-8 and update-locale LANG=en_US.UTF-8
      • Follow a similar procedure to configure other locales on other platforms.
  • Default CPU

    • ONNX Runtime binaries in the CPU packages use OpenMP and depend on the library being available on the system at runtime.
      • For Windows, OpenMP support comes as part of the Visual C++ runtime. It is also available as redistributable packages: vc_redist.x64.exe and vc_redist.x86.exe
      • For Linux, the system must have libgomp.so.1, which can be installed using apt-get install libgomp1.
      • For Mac OS X, the system must have libomp.dylib, which can be installed using brew install libomp.
  • Default GPU (CUDA)

    • The default GPU build requires the CUDA runtime libraries to be installed on the system:
      • Version: CUDA 10.2 and cuDNN 8.0.3
    • Version dependencies from older ONNX Runtime releases can be found in prior release notes.
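
Once the CUDA and cuDNN versions above are installed alongside the onnxruntime-gpu package, a session can be asked to prefer the CUDA execution provider and fall back to the CPU provider (a sketch; the model path is a placeholder):

import onnxruntime

# Providers are tried in the order given; CUDA is used when the required
# runtime libraries are present, otherwise execution falls back to CPU.
session = onnxruntime.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
print(session.get_providers())  # providers actually attached to the session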

Build from Source

For production scenarios, it's strongly recommended to build only from an official release branch.

Docker Images

API Documentation

  • Python: 3.6, 3.7, 3.8, 3.9 (3.8/3.9 excludes Win GPU and Linux ARM). Python Dev Notes, Samples
  • C#: Samples
  • C++: Samples
  • C: Samples
  • WinRT: Windows.AI.MachineLearning. Samples
  • Java: 8+. Samples
  • Ruby (external project): 2.4-2.7. Samples
  • Javascript (node.js): 12.x. Samples

Supported Accelerators

Execution Providers

Execution providers are grouped by target platform: CPU, GPU, IoT/Edge/Mobile, and Other. The CPU execution providers include:
  • Default CPU - MLAS (Microsoft Linear Algebra Subprograms) + Eigen
  • Intel DNNL
  • Intel MKL-ML (build option)
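
Which execution providers are compiled into an installed package can be checked at runtime (a small sketch); the default CPU package reports only the CPU provider:

import onnxruntime

# List the execution providers available in this build,
# e.g. ['CPUExecutionProvider'] for the default CPU package.
print(onnxruntime.get_available_providers())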

Deploying ONNX Runtime

Cloud

IoT and edge devices

The expanding focus on and selection of IoT devices with sensors and consistent signal streams introduce new opportunities to move AI workloads to the edge. This is particularly important when there are massive volumes of incoming data/signals that may not be efficient or useful to push to the cloud due to storage or latency considerations. Consider: surveillance footage where 99% of the recording is uneventful, or real-time person detection scenarios where immediate action is required. In these scenarios, executing model inference directly on the target device is crucial.

Client applications


Training: Start

The ONNX Runtime training feature enables easy integration with existing PyTorch trainer code to accelerate execution. With a few lines of code, you can add ONNX Runtime to your existing training scripts and start seeing acceleration. The current preview version supports training acceleration for transformer models on NVIDIA GPUs.

ONNX Runtime pre-training sample: This sample is set up to pre-train the BERT-Large model and shows how ONNX Runtime training can be used to accelerate training execution.

Train PyTorch model with ONNX Runtime

ONNX Runtime (ORT) can train existing PyTorch models through its optimized backend. For this, we have introduced a Python API for PyTorch, called ORTTrainer, which can be used to switch the training backend for PyTorch models (instances of torch.nn.Module) to ONNX Runtime. This requires some changes in the trainer code, such as replacing the PyTorch optimizer and, optionally, setting flags to enable additional features such as mixed-precision training. Here is a sample code fragment showing how to integrate ONNX Runtime training into your PyTorch pre-training script:

NOTE: The current API is experimental and expected to see significant changes in the near future. Our goal is to improve the interface to provide seamless integration with PyTorch training that requires minimal changes to users' training code.

import torch
...
import onnxruntime
from onnxruntime.training import ORTTrainer, optim

# Model definition
class NeuralNet(torch.nn.Module):
  def __init__(self, input_size, hidden_size, num_classes):
    ...
  def forward(self, data):
    ...

model = NeuralNet(input_size=784, hidden_size=500, num_classes=10)
criterion = torch.nn.functional.cross_entropy
# Describe the model's inputs and outputs (names and symbolic shape axes) for ORTTrainer
model_description = {'inputs':  [('data', ['in', 'batch_size']),
                                 ('target', ['label_x_batch_size'])],
                     'outputs': [('loss', [], True),
                                 ('output', ['out', 'batch_size'])]}

optimizer_config = optim.AdamConfig(lr=learning_rate)

trainer = ORTTrainer(model,              # model
                     model_description,  # model description
                     optimizer_config,   # optimizer configuration
                     criterion)          # loss function

# Training Loop
for t in range(1000):
  # forward + backward + weight update
  loss, y_pred = trainer.train_step(input_data, target_labels, learning_rate)
  total_loss += loss.item()
  ...

Build ONNX Runtime Training from source

To use ONNX Runtime training in a custom environment, like on-prem NVIDIA DGX-2 clusters, you can use these build instructions to generate the Python package to integrate into existing trainer code.

Data/Telemetry

This project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For any feedback or to report a bug, please file a GitHub Issue.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.