# ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
**Latest commit:** Vincent Wang · 1128882bfd · 2024-11-28 10:10:24 +08:00

**Quantize Bias for Conv/Gemm on Quantized Model (#22889)**

Some quantized models leave the bias of their Conv/Gemm nodes in float rather than quantizing it. This PR creates a sub-graph that quantizes the bias for Conv/Gemm nodes with scale = scale_input_0 * scale_input_1 and zero point = 0. This is done only for bias initializers, so that ConstantFolding can fold the sub-graph into a real quantized int32 bias initializer during the next round of graph optimization.
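For reference, the bias-quantization rule described above follows the standard ONNX convention: the int32 bias uses the product of the activation and weight scales, with a zero point of 0. Below is a minimal NumPy sketch of just that arithmetic; the function name and sample values are illustrative only (the actual change is a C++ graph transform, not this code):

```python
import numpy as np

def quantize_bias(bias_fp32, input_scale, weight_scale):
    """Quantize a float bias to int32 with
    scale = input_scale * weight_scale and zero point = 0."""
    bias_scale = input_scale * weight_scale  # per-channel if weight_scale is a vector
    q = np.round(bias_fp32 / bias_scale)     # zero point is 0, so no offset is added
    info = np.iinfo(np.int32)
    return np.clip(q, info.min, info.max).astype(np.int32)

# Per-tensor example: bias scale = 0.02 * 0.1 = 0.002
bias = np.array([0.25, -1.5, 3.0], dtype=np.float32)
print(quantize_bias(bias, input_scale=0.02, weight_scale=0.1))  # [ 125 -750 1500]
```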
| Path | Last commit | Date |
|------|-------------|------|
| .config | Auto-generated baselines by 1ES Pipeline Templates (#22817) | 2024-11-13 13:50:52 -08:00 |
| .devcontainer | | |
| .gdn | | |
| .github | Move C# doc Github Action to Windows (#22880) | 2024-11-18 23:56:59 -08:00 |
| .pipelines | [DML EP] Update DML to 1.15.4 (#22635) | 2024-10-29 17:13:57 -07:00 |
| .vscode | Stop VSCode appending file associations to settings.json (#21944) | 2024-08-31 19:04:12 -07:00 |
| cgmanifests | Cleanup code (#22827) | 2024-11-19 14:13:33 -08:00 |
| cmake | Add option to force generic algorithms on x86 (#22917) | 2024-11-21 13:45:46 -08:00 |
| csharp | [CoreML] Create EP by AppendExecutionProvider (#22675) | 2024-11-27 09:26:31 +08:00 |
| dockerfiles | Fix warning - LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (#22800) | 2024-11-11 13:05:34 -08:00 |
| docs | Implementation of TreeEnsemble ai.onnx.ml==5 (#22333) | 2024-11-22 19:48:23 +01:00 |
| include/onnxruntime/core | [CoreML] Create EP by AppendExecutionProvider (#22675) | 2024-11-27 09:26:31 +08:00 |
| java | [CoreML] Create EP by AppendExecutionProvider (#22675) | 2024-11-27 09:26:31 +08:00 |
| js | [WebNN EP] Fixed bug in usage of Array.reduce() (#22944) | 2024-11-26 19:03:44 -08:00 |
| objectivec | [CoreML] Create EP by AppendExecutionProvider (#22675) | 2024-11-27 09:26:31 +08:00 |
| onnxruntime | Quantize Bias for Conv/Gemm on Quantized Model (#22889) | 2024-11-28 10:10:24 +08:00 |
| orttraining | Fix warning - LegacyKeyValueFormat: "ENV key=value" should be used instead of legacy "ENV key value" format (#22800) | 2024-11-11 13:05:34 -08:00 |
| rust | Fix typos according to reviewdog report. (#21335) | 2024-07-22 13:37:32 -07:00 |
| samples | | |
| tools | [CoreML] ML Program more operators support [3/N] (#22710) | 2024-11-28 09:21:02 +08:00 |
| winml | Fix warnings (#21809) | 2024-08-21 14:23:37 -07:00 |
| .clang-format | | |
| .clang-tidy | | |
| .dockerignore | | |
| .gitattributes | Fix typos according to reviewdog report. (#21335) | 2024-07-22 13:37:32 -07:00 |
| .gitignore | | |
| .gitmodules | Revert "Upgrade emsdk from 3.1.59 to 3.1.62" (#21817) | 2024-08-22 11:21:00 -07:00 |
| .lintrunner.toml | [js] change default formatter for JavaScript/TypeScript from clang-format to Prettier (#21728) | 2024-08-14 16:51:22 -07:00 |
| CITATION.cff | Fix citation author name issue (#19597) | 2024-02-22 17:03:56 -08:00 |
| CODEOWNERS | | |
| CONTRIBUTING.md | | |
| CPPLINT.cfg | Ignore all whitespace lint messages for cpplint (#22781) | 2024-11-08 14:31:28 -08:00 |
| LICENSE | | |
| NuGet.config | Update C# test projects (#21631) | 2024-09-05 08:21:23 +10:00 |
| ORT_icon_for_light_bg.png | | |
| README.md | Update pipeline status (#22924) | 2024-11-24 21:26:27 -08:00 |
| SECURITY.md | | |
| ThirdPartyNotices.txt | Cleanup code (#22827) | 2024-11-19 14:13:33 -08:00 |
| VERSION_NUMBER | bumps up version in main from 1.20 -> 1.21 (#22482) | 2024-10-17 12:32:35 -07:00 |
| build.bat | | |
| build.sh | | |
| build_arm64x.bat | remove unnecessary environment variable (#19166) | 2024-01-16 16:24:37 -08:00 |
| lgtm.yml | | |
| ort.wprp | Fully dynamic ETW controlled logging for ORT and QNN logs (#20537) | 2024-06-06 21:11:14 -07:00 |
| packages.config | [DML EP] Update DML to 1.15.4 (#22635) | 2024-10-29 17:13:57 -07:00 |
| pyproject.toml | Ignore ruff rule `N813` (#21477) | 2024-07-24 17:48:22 -07:00 |
| requirements-dev.txt | | |
| requirements-doc.txt | | |
| requirements-lintrunner.txt | Update lintrunner requirements (#22185) | 2024-09-23 18:27:16 -07:00 |
| requirements-training.txt | | |
| requirements.txt | Add compatibility for NumPy 2.0 (#21085) | 2024-06-27 13:50:53 -07:00 |
| setup.py | Update CMake to 3.31.0rc1 (#22433) | 2024-10-16 11:50:13 -07:00 |

## README.md

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
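For example, running an exported model with the Python API takes only a few lines. A minimal sketch follows; the model path, input shape, and dtype are placeholders for your own model:

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX model; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's declared input name rather than hard-coding it.
input_name = session.get_inputs()[0].name

# Dummy input; shape and dtype must match what the model expects.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None for output names returns all outputs.
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```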

ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Learn more →
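The "one-line addition" refers to wrapping an existing `torch.nn.Module` in `ORTModule`. A minimal sketch, assuming the onnxruntime-training package is installed; the model, data, and optimizer here are placeholders:

```python
import torch
from onnxruntime.training import ORTModule  # provided by onnxruntime-training

# Any existing PyTorch model; this small net is a placeholder.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)

# The one-line change: forward and backward now run through ONNX Runtime.
model = ORTModule(model)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```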

## Get Started & Resources

## Builtin Pipeline Status

*(Table of build-status badges by platform — Windows, Linux, Mac, Android, iOS, Web, and Other — with Inference and Training columns.)*

This project is tested with BrowserStack.

## Third-party Pipeline Status

*(Table of build-status badges for third-party Linux pipelines, with Inference and Training columns.)*

## Releases

The current release and past releases can be found here: https://github.com/microsoft/onnxruntime/releases.

For details on the upcoming release, including release dates, announcements, features, and guidance on submitting feature requests, please visit the release roadmap: https://onnxruntime.ai/roadmap.

## Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

## Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

## Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

## License

This project is licensed under the MIT License.