Pranav Sharma 2018-11-19 16:48:22 -08:00
Commit 89618e8f1e
930 changed files with 127587 additions and 0 deletions

22
.clang-format Normal file

@@ -0,0 +1,22 @@
---
# Defaults for all languages.
BasedOnStyle: Google
# Setting ColumnLimit to 0 so developer choices about where to break lines are maintained.
# Developers are responsible for adhering to the 120 character maximum.
ColumnLimit: 0
SortIncludes: false
DerivePointerAlignment: false
# If you want to customize when working locally, see https://clang.llvm.org/docs/ClangFormatStyleOptions.html for options.
# See ReformatSource.ps1 for a script to update all source according to the current options in this file.
# e.g. customizations to use Allman bracing and more indenting.
# AccessModifierOffset: -2
# BreakBeforeBraces: Allman
# CompactNamespaces: false
# IndentCaseLabels: true
# IndentWidth: 4
# NamespaceIndentation: All
...

30
.clang-tidy Normal file

@@ -0,0 +1,30 @@
---
# turn off readability-braces-around-statements to allow single-line statements like 'if (x == y) doSomething();'
Checks: '-*,cppcoreguidelines-*,google-*,readability-*,modernize-*,-readability-braces-around-statements,-google-runtime-references,-cppcoreguidelines-pro-type-reinterpret-cast'
WarningsAsErrors: ''
HeaderFilterRegex: '.*lotus\/core\/.*'
AnalyzeTemporaryDtors: false
FormatStyle: none
CheckOptions:
- key: google-readability-braces-around-statements.ShortStatementLines
value: '1'
- key: google-readability-function-size.StatementThreshold
value: '800'
- key: google-readability-namespace-comments.ShortNamespaceLines
value: '10'
- key: google-readability-namespace-comments.SpacesBeforeComments
value: '2'
- key: modernize-loop-convert.MaxCopySize
value: '16'
- key: modernize-loop-convert.MinConfidence
value: reasonable
- key: modernize-loop-convert.NamingStyle
value: CamelCase
- key: modernize-pass-by-value.IncludeStyle
value: google
- key: modernize-replace-auto-ptr.IncludeStyle
value: google
- key: modernize-use-nullptr.NullMacros
value: 'NULL'
...

13
.gitattributes vendored Normal file

@@ -0,0 +1,13 @@
# This sets the default behaviour, overriding core.autocrlf
* text=auto
# All source files should have unix line-endings in the repository,
# but convert to native line-endings on checkout
*.cc text
*.h text
# Windows specific files should retain windows line-endings
*.sln text eol=crlf
# make sure build.sh retains Unix line endings, even when checked out on windows.
*.sh text eol=lf

31
.gitignore vendored Normal file

@@ -0,0 +1,31 @@
# build, distribute, and bins (+ python proto bindings)
build
build_host_protoc
build_android
build_ios
build_*
.build_debug/*
.build_release/*
distribute/*
*.testbin
*.bin
cmake_build
.cmake_build
gen
*~
.vs
TestResults/
.idea/
lotus.egg-info
nuget_root/
.packages/
.vscode/
*.code-workspace
__pycache__
onnxruntime_profile*.json
/docs/python/*.md
/docs/python/auto_examples/*
/docs/python/media/*
/docs/python/examples/*.onnx
/docs/python/examples/graph.*
/docs/python/*_LICENSE

18
.gitmodules vendored Normal file

@@ -0,0 +1,18 @@
[submodule "external/protobuf"]
path = cmake/external/protobuf
url = https://github.com/google/protobuf.git
[submodule "cmake/external/googletest"]
path = cmake/external/googletest
url = https://github.com/google/googletest.git
[submodule "cmake/external/onnx"]
path = cmake/external/onnx
url = https://github.com/onnx/onnx
[submodule "cmake/external/tvm"]
path = cmake/external/tvm
url = https://github.com/dmlc/tvm.git
[submodule "cmake/external/date"]
path = cmake/external/date
url = https://github.com/HowardHinnant/date.git
[submodule "cmake/external/gsl"]
path = cmake/external/gsl
url = https://github.com/Microsoft/GSL.git

136
BUILD.md Normal file

@@ -0,0 +1,136 @@
# Build ONNX Runtime
## Supported dev environments
| OS | Supports CPU | Supports GPU| Notes |
|-------------|:------------:|:------------:|------------------------------------|
|Windows 10 | YES | YES |Must use VS 2017 or the latest VS2015|
|Windows 10 <br/> Subsystem for Linux | YES | NO | |
|Ubuntu 16.x | YES | YES | |
|Ubuntu 17.x | YES | YES | |
|Ubuntu 18.x | YES | YES | |
|Fedora 24 | YES | YES | |
|Fedora 25 | YES | YES | |
|Fedora 26 | YES | YES | |
|Fedora 27 | YES | YES | |
|Fedora 28 | YES | NO |Cannot build GPU kernels but can run them |
* Red Hat Enterprise Linux and CentOS are not supported.
* GCC 4.x and below are not supported. If you are using GCC 7.0+, you'll need to upgrade eigen to a newer version before compiling ONNX Runtime.
OS/Compiler Matrix:
| OS/Compiler | Supports VC | Supports GCC | Supports Clang |
|-------------|:------------:|:----------------:|:---------------:|
|Windows 10 | YES | Not tested | Not tested |
|Linux | NO | YES(gcc>=5.0) | YES |
The ONNX Runtime Python binding only supports Python 3.x. Please use Python 3.5+.
## Build
Install CMake 3.11 or later from https://cmake.org/download/.
Check out the source tree:
```
git clone --recursive https://github.com/Microsoft/onnxruntime
cd onnxruntime
./build.sh for Linux (or ./build.bat for Windows)
```
The build script runs all unit tests by default.
The complete list of build options can be found by running `./build.sh (or ./build.bat) --help`
## Build/Test Flavors for CI
### CI Build Environments
| Build Job Name | Environment | Dependency | Test Coverage | Scripts |
|--------------------|---------------------|---------------------------------|--------------------------|------------------------------------------|
| Linux_CI_Dev | Ubuntu 16.04 | python=3.5 | Unit tests; ONNXModelZoo | [script](tools/ci_build/github/linux/run_build.sh) |
| Linux_CI_GPU_Dev | Ubuntu 16.04 | python=3.5; nvidia-docker | Unit tests; ONNXModelZoo | [script](tools/ci_build/github/linux/run_build.sh) |
| Windows_CI_Dev | Windows Server 2016 | python=3.5 | Unit tests; ONNXModelZoo | [script](build.bat) |
| Windows_CI_GPU_Dev | Windows Server 2016 | cuda=9.0; cudnn=7.0; python=3.5 | Unit tests; ONNXModelZoo | [script](build.bat) |
## Additional Build Flavors
The complete list of build flavors can be seen by running `./build.sh --help` or `./build.bat --help`. Here are some common flavors.
### Windows CUDA Build
ONNX Runtime supports CUDA builds. You will need to download and install [CUDA](https://developer.nvidia.com/cuda-toolkit) and [CUDNN](https://developer.nvidia.com/cudnn).
ONNX Runtime is built and tested with CUDA 9.0 and CUDNN 7.0 using the Visual Studio 2017 14.11 toolset (i.e. Visual Studio 2017 v15.3).
CUDA versions up to 9.2 and CUDNN version 7.1 should also work with versions of Visual Studio 2017 up to and including v15.7; however, you may need to explicitly install and use the 14.11 toolset because CUDA and CUDNN are only compatible with earlier versions of Visual Studio 2017.
To install the Visual Studio 2017 14.11 toolset, see <https://blogs.msdn.microsoft.com/vcblog/2017/11/15/side-by-side-minor-version-msvc-toolsets-in-visual-studio-2017/>
If using this toolset with a later version of Visual Studio 2017, you have two options:
1. Set up the Visual Studio environment variables to point to the 14.11 toolset by running vcvarsall.bat prior to running cmake
- e.g. if you have VS2017 Enterprise, an x64 build would use the following command
`"C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" amd64 -vcvars_ver=14.11`
2. Alternatively, if you have CMake 3.12 or later, you can specify the toolset version in the "-T" parameter by adding "version=14.11"
- e.g. use the following with the below cmake command
`-T version=14.11,host=x64`
CMake should automatically find the CUDA installation. If it does not, or finds a different version to the one you wish to use, specify your root CUDA installation directory via the -DCUDA_TOOLKIT_ROOT_DIR CMake parameter.
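If you invoke CMake directly rather than through `build.bat`, a rough sketch of combining these settings looks like the following (the paths are hypothetical placeholders for your own CUDA and CUDNN locations):
```
cmake ..\cmake -G "Visual Studio 15 2017 Win64" -T version=14.11,host=x64 ^
  -Donnxruntime_USE_CUDA=ON ^
  -DCUDA_TOOLKIT_ROOT_DIR="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0" ^
  -Donnxruntime_CUDNN_HOME="C:\local\cudnn-9.0-v7.0\cuda"
```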
_Side note: If you have multiple versions of CUDA installed on a Windows machine and are building with Visual Studio, CMake will use the build files for the highest version of CUDA it finds in the BuildCustomization folder. e.g. C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\VC\VCTargets\BuildCustomizations\. If you want to build with an earlier version, you must temporarily remove the 'CUDA x.y.*' files for later versions from this directory._
The path to the 'cuda' folder in the CUDNN installation must be provided. The 'cuda' folder should contain 'bin', 'include' and 'lib' directories.
You can build with:
```
./build.sh --use_cuda --cudnn_home /usr --cuda_home /usr/local/cuda (Linux)
./build.bat --use_cuda --cudnn_home <cudnn home path> --cuda_home <cuda home path> (Windows)
```
### MKL-DNN
To build ONNX Runtime with MKL-DNN support, build it with `./build.sh --use_mkldnn --use_mklml`
### OpenBLAS
#### Windows
Instructions for building OpenBLAS for Windows can be found here: https://github.com/xianyi/OpenBLAS/wiki/How-to-use-OpenBLAS-in-Microsoft-Visual-Studio#build-openblas-for-universal-windows-platform.
Once you have the OpenBLAS binaries, build ONNX Runtime with `./build.bat --use_openblas`
#### Linux
For Linux (e.g. Ubuntu 16.04), install the libopenblas-dev package
`sudo apt-get install libopenblas-dev` and build with `./build.sh --use_openblas`
### OpenMP
```
./build.sh --use_openmp (for Linux)
./build.bat --use_openmp (for Windows)
```
### Build with Docker on Linux
Install Docker: `https://docs.docker.com/install/`
#### CPU
```
cd tools/ci_build/github/linux/docker
docker build -t onnxruntime_dev --build-arg OS_VERSION=16.04 -f Dockerfile.ubuntu .
docker run --rm -it onnxruntime_dev /bin/bash
```
#### GPU
If you need GPU support, please also install:
1. NVIDIA driver. Before doing this, please add 'nomodeset rd.driver.blacklist=nouveau' to your Linux [kernel boot parameters](https://www.kernel.org/doc/html/v4.17/admin-guide/kernel-parameters.html).
2. nvidia-docker2: [Install doc](<https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0)>)
To test if your nvidia-docker works:
```
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```
Then build a Docker image. We provide a sample for you to use:
```
cd tools/ci_build/github/linux/docker
docker build -t cuda_dev -f Dockerfile.ubuntu_gpu .
```
Then run it:
```
./tools/ci_build/github/linux/run_dockerbuild.sh
```

50
CONTRIBUTING.md Normal file

@@ -0,0 +1,50 @@
# Contributing
We're always looking for your help to fix bugs and improve the product. Create a pull request and we'll be happy to take a look.
Start by reading the [Engineering Design](docs/HighLevelDesign.md).
# Check-in procedure
```
git clone --recursive https://github.com/Microsoft/onnxruntime
git checkout -b feature_branch
# make your changes
# write unit tests
# make sure it builds and all tests pass
git commit -m "my changes"
git push origin HEAD
```
To request a merge into master, send a pull request from the web UI:
https://github.com/Microsoft/onnxruntime
New code *must* be accompanied by unit tests.
# Build
[Build](BUILD.md)
# Additional Documentation
* [Adding a custom operator](docs/AddingCustomOp.md)
# Coding guidelines
Please see [Coding Conventions and Standards](./docs/Coding_Conventions_and_Standards.md)
# Licensing guidelines
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
# Code of conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# Reporting Security Issues
Security issues and bugs should be reported privately, via email, to the Microsoft Security
Response Center (MSRC) at [secure@microsoft.com](mailto:secure@microsoft.com). You should
receive a response within 24 hours. If for some reason you do not, please follow up via
email to ensure we received your original message. Further information, including the
[MSRC PGP](https://technet.microsoft.com/en-us/security/dn606155) key, can be found in
the [Security TechCenter](https://technet.microsoft.com/en-us/security/default).

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2018 Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

35
README.md Normal file

@@ -0,0 +1,35 @@
# ONNX Runtime
[![Build Status](https://dev.azure.com/onnxruntime/onnxruntime/_apis/build/status/Microsoft.onnxruntime)](https://dev.azure.com/onnxruntime/onnxruntime/_build/latest?definitionId=1)
ONNX Runtime is the runtime for [ONNX](https://github.com/onnx/onnx).
# Engineering Design
[Engineering Design](docs/HighLevelDesign.md)
# API
| API | CPU package | GPU package |
|-----|-------------|-------------|
| [Python](https://docs.microsoft.com/en-us/python/api/overview/azure/onnx/intro?view=azure-onnx-py) | [Windows](TODO)<br>[Linux](https://pypi.org/project/onnxruntime/)<br>[Mac](TODO)| [Windows](TODO)<br>[Linux](https://pypi.org/project/onnxruntime-gpu/) |
| [C#](docs/CSharp_API.md) | [Windows](TODO)| Not available |
| [C](docs/C_API.md) | [Windows](TODO)<br>[Linux](TODO) | Not available |
# Build
[Build](BUILD.md)
# Contribute
[Contribute](CONTRIBUTING.md)
# Versioning
[Versioning](docs/Versioning.md)
# Feedback
* File a bug in [GitHub Issues](https://github.com/Microsoft/onnxruntime/issues)
# Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# License
[LICENSE](LICENSE)

2293
ThirdPartyNotices.txt Normal file

File diff suppressed because it is too large

1
VERSION_NUMBER Normal file

@@ -0,0 +1 @@
0.1.4

12
build.amd64.1411.bat Normal file

@@ -0,0 +1,12 @@
:: Copyright (c) Microsoft Corporation. All rights reserved.
:: Licensed under the MIT License.
rem This will set up the VC env vars to use the 14.11 (VS2017 ver15.3) toolchain, which is supported by CUDA 9.2, prior to running build.py.
rem It currently defaults to amd64 but that could be made configurable if that would be useful to developers running this locally.
@echo off
rem Use 14.11 toolset
call "%VCINSTALLDIR%\Auxiliary\Build\vcvarsall.bat" amd64 -vcvars_ver=14.11
rem Requires a python 3.6 or higher install to be available in your PATH
python %~dp0\tools\ci_build\build.py --build_dir %~dp0\build\Windows %*

6
build.bat Normal file

@@ -0,0 +1,6 @@
:: Copyright (c) Microsoft Corporation. All rights reserved.
:: Licensed under the MIT License.
@echo off
rem Requires a python 3.6 or higher install to be available in your PATH
python %~dp0\tools\ci_build\build.py --build_dir %~dp0\build\Windows %*

9
build.sh Executable file

@@ -0,0 +1,9 @@
#!/bin/bash
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# Get directory this script is in
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
#requires python3.6 or higher
python3 "$DIR/tools/ci_build/build.py" --build_dir "$DIR/build/Linux" "$@"

493
cmake/CMakeLists.txt Normal file

@@ -0,0 +1,493 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# Minimum CMake required
cmake_minimum_required(VERSION 3.11)
# Project
project(onnxruntime C CXX)
include(CheckCXXCompilerFlag)
include(CheckLanguage)
# Set C++14 as standard for the whole project
set(CMAKE_CXX_STANDARD 14)
# General C# properties
if (onnxruntime_BUILD_CSHARP)
check_language(CSharp)
if (CMAKE_CSharp_COMPILER)
enable_language(CSharp)
set(CMAKE_DOTNET_TARGET_FRAMEWORK_VERSION v4.6.1)
set(CMAKE_CSharp_FLAGS ${CMAKE_CSharp_FLAGS} "/langversion:6")
message(STATUS "CMAKE_Csharp_Compiler = ${CMAKE_CSharp_COMPILER}")
else()
message(WARNING "Language Csharp is not found in the system")
endif()
endif()
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
# NOTE: POSITION INDEPENDENT CODE hurts performance, and it only makes sense on POSIX systems
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
# Enable CTest
enable_testing()
if(NOT CMAKE_BUILD_TYPE)
message(STATUS "Build type not set - using RelWithDebInfo")
set(CMAKE_BUILD_TYPE "RelWithDebInfo" CACHE STRING "Choose build type: Debug Release RelWithDebInfo." FORCE)
endif()
# Options
option(onnxruntime_RUN_ONNX_TESTS "Enable ONNX Compatibility Testing" OFF)
option(onnxruntime_GENERATE_TEST_REPORTS "Enable test report generation" OFF)
option(onnxruntime_ENABLE_STATIC_ANALYSIS "Enable static analysis" OFF)
option(onnxruntime_ENABLE_PYTHON "Enable Python bindings" OFF)
option(onnxruntime_USE_CUDA "Build with CUDA support" OFF)
option(onnxruntime_USE_EIGEN_FOR_BLAS "Use Eigen for BLAS" ON)
option(onnxruntime_USE_MLAS "Use optimized blas library for GEMM and 2D Convolution" OFF)
option(onnxruntime_USE_MKLDNN "Build with MKL-DNN support" OFF)
option(onnxruntime_USE_MKLML "Build MKL-DNN with MKL-ML binary dependency" OFF)
option(onnxruntime_USE_OPENBLAS "Use openblas" OFF)
option(onnxruntime_DEV_MODE "Enable developer warnings and treat most of them as errors." OFF)
option(onnxruntime_USE_PREBUILT_PB "Use prebuilt protobuf library" OFF)
option(onnxruntime_USE_JEMALLOC "Use jemalloc" OFF)
option(onnxruntime_MSVC_STATIC_RUNTIME "Compile for the static CRT" OFF)
option(onnxruntime_BUILD_UNIT_TESTS "Build ONNXRuntime unit tests" ON)
option(onnxruntime_USE_PREINSTALLED_EIGEN "Use pre-installed EIGEN. Need to provide eigen_SOURCE_PATH if turning this on." OFF)
option(onnxruntime_BUILD_BENCHMARKS "Build ONNXRuntime micro-benchmarks" OFF)
option(onnxruntime_USE_TVM "Build tvm for code-gen" OFF)
option(onnxruntime_USE_LLVM "Build tvm with LLVM" OFF)
option(onnxruntime_USE_OPENMP "Build with OpenMP support" OFF)
option(onnxruntime_BUILD_SHARED_LIB "Build a shared library" OFF)
option(onnxruntime_ENABLE_MICROSOFT_INTERNAL "Use this option to enable/disable microsoft internal only code" OFF)
option(onnxruntime_USE_NUPHAR "Build with Nuphar" OFF)
option(onnxruntime_USE_BRAINSLICE "Build with BrainSlice" OFF)
set(protobuf_BUILD_TESTS OFF CACHE BOOL "Build protobuf tests" FORCE)
set(ONNX_ML 1)
set(REPO_ROOT ${PROJECT_SOURCE_DIR}/..)
set(ONNXRUNTIME_ROOT ${PROJECT_SOURCE_DIR}/../onnxruntime)
file (STRINGS "${REPO_ROOT}/VERSION_NUMBER" VERSION_NUMBER)
if (MSVC)
if (onnxruntime_MSVC_STATIC_RUNTIME)
# set all of our submodules to static runtime
set(ONNX_USE_MSVC_STATIC_RUNTIME ON)
set(protobuf_MSVC_STATIC_RUNTIME ON)
set(gtest_force_shared_crt OFF)
# In case we are building static libraries, link also the runtime library statically
# so that MSVCR*.DLL is not required at runtime.
# https://msdn.microsoft.com/en-us/library/2kzt1wy3.aspx
# This is achieved by replacing msvc option /MD with /MT and /MDd with /MTd
# http://www.cmake.org/Wiki/CMake_FAQ#How_can_I_build_my_MSVC_application_with_a_static_runtime.3F
foreach(flag_var
CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE
CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO
CMAKE_C_FLAGS CMAKE_C_FLAGS_DEBUG CMAKE_C_FLAGS_RELEASE
CMAKE_C_FLAGS_MINSIZEREL CMAKE_C_FLAGS_RELWITHDEBINFO)
if(${flag_var} MATCHES "/MD")
string(REGEX REPLACE "/MD" "/MT" ${flag_var} "${${flag_var}}")
endif(${flag_var} MATCHES "/MD")
endforeach(flag_var)
else()
set(ONNX_USE_MSVC_STATIC_RUNTIME OFF)
set(protobuf_WITH_ZLIB OFF CACHE BOOL "" FORCE)
set(protobuf_MSVC_STATIC_RUNTIME OFF CACHE BOOL "Link protobuf to static runtime libraries" FORCE)
set(gtest_force_shared_crt ON CACHE BOOL "Use shared (DLL) run-time lib for gtest" FORCE)
endif()
#Always enable exception handling, even for Windows ARM
SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc")
#Disable warning 4100 globally. There are too many errors of this kind in protobuf
SET (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4100")
else()
# Enable OpenMP for Non-Windows only. WinML team disallows use of OpenMP.
find_package(OpenMP)
if (OPENMP_FOUND)
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
add_definitions(-DUSE_OPENMP)
endif()
endif()
find_package(PNG)
set(ENABLE_DATE_TESTING OFF CACHE BOOL "" FORCE)
set(USE_SYSTEM_TZ_DB ON CACHE BOOL "" FORCE)
if(CMAKE_CROSSCOMPILING)
message("Doing crosscompiling")
endif()
#Need python to generate def file
if(onnxruntime_BUILD_SHARED_LIB OR onnxruntime_ENABLE_PYTHON)
if(onnxruntime_ENABLE_PYTHON)
find_package(PythonInterp 3.5 REQUIRED)
find_package(PythonLibs 3.5 REQUIRED)
else()
find_package(PythonInterp 3.4 REQUIRED)
find_package(PythonLibs 3.4 REQUIRED)
endif()
endif()
if(onnxruntime_BUILD_BENCHMARKS)
if(NOT TARGET benchmark)
# We will not need to test benchmark lib itself.
set(BENCHMARK_ENABLE_TESTING OFF CACHE BOOL "Disable benchmark testing as we don't need it.")
# We will not need to install benchmark since we link it statically.
set(BENCHMARK_ENABLE_INSTALL OFF CACHE BOOL "Disable benchmark install to avoid overwriting vendor install.")
add_subdirectory(${PROJECT_SOURCE_DIR}/external/onnx/third_party/benchmark EXCLUDE_FROM_ALL)
endif()
endif()
# External dependencies
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/external)
#Here we support three build modes:
#1. (recommended)onnxruntime_USE_PREBUILT_PB is set (ONNX_CUSTOM_PROTOC_EXECUTABLE should also be set)
# We will not build protobuf, will use a prebuilt binary instead. This mode can also support cross-compiling
#2. onnxruntime_USE_PREBUILT_PB is not set but ONNX_CUSTOM_PROTOC_EXECUTABLE is set
# Build Protobuf from source, except protoc.exe. This mode is mainly for cross-compiling
#3. both onnxruntime_USE_PREBUILT_PB and ONNX_CUSTOM_PROTOC_EXECUTABLE are not set
# Compile everything from source code. Slowest option.
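# For illustration only (the paths below are hypothetical): mode 1 would be configured roughly as
#   cmake ../cmake -Donnxruntime_USE_PREBUILT_PB=ON \
#     -DONNX_CUSTOM_PROTOC_EXECUTABLE=/opt/protobuf/bin/protoc
# while leaving both options unset (mode 3) builds protobuf from the submodule below.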
if(onnxruntime_USE_PREBUILT_PB)
get_filename_component(
_PROTOBUF_INSTALL_PREFIX
${ONNX_CUSTOM_PROTOC_EXECUTABLE}
DIRECTORY)
get_filename_component(
_PROTOBUF_INSTALL_PREFIX
${_PROTOBUF_INSTALL_PREFIX}/..
REALPATH)
if(WIN32)
include(${_PROTOBUF_INSTALL_PREFIX}/cmake/protobuf-config.cmake)
else()
include(${_PROTOBUF_INSTALL_PREFIX}/lib64/cmake/protobuf/protobuf-config.cmake)
endif()
include(protobuf_function.cmake)
else()
# use protobuf as a submodule
add_subdirectory(${PROJECT_SOURCE_DIR}/external/protobuf/cmake EXCLUDE_FROM_ALL)
set_target_properties(libprotobuf PROPERTIES FOLDER "External/Protobuf")
set_target_properties(libprotobuf-lite PROPERTIES FOLDER "External/Protobuf")
set_target_properties(libprotoc PROPERTIES FOLDER "External/Protobuf")
set_target_properties(protoc PROPERTIES FOLDER "External/Protobuf")
add_library(protobuf::libprotobuf ALIAS libprotobuf)
add_executable(protobuf::protoc ALIAS protoc)
include(protobuf_function.cmake)
endif()
if (onnxruntime_USE_CUDA AND "${onnxruntime_CUDNN_HOME}" STREQUAL "")
message(FATAL_ERROR "onnxruntime_CUDNN_HOME required for onnxruntime_USE_CUDA")
endif()
if (onnxruntime_USE_EIGEN_FOR_BLAS)
add_definitions(-DUSE_EIGEN_FOR_BLAS)
endif()
if (onnxruntime_USE_OPENBLAS AND "${onnxruntime_OPENBLAS_HOME}" STREQUAL "" AND WIN32)
# On linux we assume blas is installed via 'apt-get install libopenblas-dev'
message(FATAL_ERROR "onnxruntime_OPENBLAS_HOME required for onnxruntime_USE_OPENBLAS")
endif()
if (onnxruntime_USE_OPENBLAS AND onnxruntime_USE_EIGEN_FOR_BLAS)
message(FATAL_ERROR "use one of onnxruntime_USE_OPENBLAS, onnxruntime_USE_EIGEN_FOR_BLAS")
endif()
# if ON, put all the unit tests in a single project so that code coverage is more comprehensive.
# Defaulting to that, and most likely removing the option to have separate unit test projects in the near future.
set(SingleUnitTestProject ON)
if (onnxruntime_SPLIT_UNIT_TEST_PROJECTS)
set(SingleUnitTestProject OFF)
endif()
get_filename_component(ONNXRUNTIME_ROOT "${ONNXRUNTIME_ROOT}" ABSOLUTE)
get_filename_component(REPO_ROOT "${REPO_ROOT}" ABSOLUTE)
set(ONNXRUNTIME_INCLUDE_DIR ${REPO_ROOT}/include/onnxruntime)
add_subdirectory(external/date EXCLUDE_FROM_ALL)
add_subdirectory(external/gsl EXCLUDE_FROM_ALL)
add_library(date ALIAS tz)
add_library(gsl ALIAS GSL)
set(date_INCLUDE_DIR $<TARGET_PROPERTY:tz,INTERFACE_INCLUDE_DIRECTORIES>)
# bounds checking behavior.
# throw instead of calling terminate if there's a bounds checking violation.
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DGSL_THROW_ON_CONTRACT_VIOLATION")
# no bounds checking in release build so no perf cost
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -DGSL_UNENFORCED_ON_CONTRACT_VIOLATION")
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -DGSL_UNENFORCED_ON_CONTRACT_VIOLATION")
include(eigen)
set(onnxruntime_EXTERNAL_LIBRARIES protobuf::libprotobuf)
# gtest and gmock
add_subdirectory(${PROJECT_SOURCE_DIR}/external/googletest EXCLUDE_FROM_ALL)
set_target_properties(gmock PROPERTIES FOLDER "External/GTest")
set_target_properties(gmock_main PROPERTIES FOLDER "External/GTest")
set_target_properties(gtest PROPERTIES FOLDER "External/GTest")
set_target_properties(gtest_main PROPERTIES FOLDER "External/GTest")
function(onnxruntime_add_include_to_target dst_target)
foreach(src_target ${ARGN})
target_include_directories(${dst_target} PRIVATE $<TARGET_PROPERTY:${src_target},INTERFACE_INCLUDE_DIRECTORIES>)
target_compile_definitions(${dst_target} PRIVATE $<TARGET_PROPERTY:${src_target},INTERFACE_COMPILE_DEFINITIONS>)
endforeach()
endfunction()
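# Example usage, as done by the per-component cmake files included below
# (the destination target must already exist):
#   onnxruntime_add_include_to_target(onnxruntime_framework onnx protobuf::libprotobuf)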
# TVM
if (onnxruntime_USE_TVM)
if (onnxruntime_USE_CUDA)
set(USE_CUDA ON)
endif()
if (onnxruntime_USE_LLVM)
set(USE_LLVM ON)
add_definitions(-DUSE_TVM_WITH_LLVM)
endif()
add_subdirectory(${PROJECT_SOURCE_DIR}/external/tvm EXCLUDE_FROM_ALL)
set_target_properties(tvm PROPERTIES FOLDER "External/tvm")
set_target_properties(tvm_topi PROPERTIES FOLDER "External/tvm")
set_target_properties(tvm_runtime PROPERTIES FOLDER "External/tvm")
set_target_properties(nnvm_compiler PROPERTIES FOLDER "External/tvm")
set(TVM_INCLUDES ${PROJECT_SOURCE_DIR}/external/tvm/include
${PROJECT_SOURCE_DIR}/external/tvm/3rdparty/dmlc-core/include
${PROJECT_SOURCE_DIR}/external/tvm/3rdparty/dlpack/include
$<TARGET_PROPERTY:tvm,INTERFACE_INCLUDE_DIRECTORIES>
$<TARGET_PROPERTY:tvm_topi,INTERFACE_INCLUDE_DIRECTORIES>
$<TARGET_PROPERTY:nnvm_compiler,INTERFACE_INCLUDE_DIRECTORIES>)
add_definitions(-DUSE_TVM)
set(onnxruntime_tvm_libs
onnxruntime_codegen_tvm
tvm
nnvm_compiler)
set(onnxruntime_tvm_dependencies
tvm
nnvm_compiler
onnxruntime_codegen_tvm)
endif()
# ONNX
add_subdirectory(onnx)
set_target_properties(onnx PROPERTIES FOLDER "External/ONNX")
set_target_properties(onnx_proto PROPERTIES FOLDER "External/ONNX")
#set_target_properties(gen_onnx_proto PROPERTIES FOLDER "External/ONNX")
# fix a warning in onnx code we can't do anything about
if (MSVC)
target_compile_options(onnx_proto PRIVATE /wd4146) # unary minus operator applied to unsigned type
endif()
set(onnxruntime_EXTERNAL_DEPENDENCIES gsl onnx_proto)
if (onnxruntime_RUN_ONNX_TESTS)
add_definitions(-DONNXRUNTIME_RUN_EXTERNAL_ONNX_TESTS)
endif()
add_definitions(-DUSE_MLAS)
#Adjust warning flags
if (WIN32)
add_definitions(-DPLATFORM_WINDOWS -DNOGDI -DNOMINMAX -D_USE_MATH_DEFINES)
# parallel build
# These compiler options cannot be forwarded to NVCC, so we cannot use add_compile_options
string(APPEND CMAKE_CXX_FLAGS " /MP")
string(APPEND CMAKE_CXX_FLAGS
" /wd4503" # Decorated name length exceeded.
" /wd4127" # conditional expression is constant.
" /wd4146" # unary minus operator applied to unsigned type. Needed for Protobuf
)
if (onnxruntime_ENABLE_STATIC_ANALYSIS)
string(APPEND CMAKE_CXX_FLAGS
" /analyze:WX- "
# disable warning because there are many occurrences from test macros
" /wd6326 " # potential comparison of a constant with another constant
)
endif()
# set compile warning level to 3 on CUDA build but 4 on CPU-only build
if(onnxruntime_USE_CUDA)
#CMake hardcodes /W3 in its 'Windows-NVIDIA-CUDA.cmake'. We'd better keep consistent with it.
#Changing it to /W4 will result in a build failure
string(APPEND CMAKE_CXX_FLAGS " /W3")
else()
string(APPEND CMAKE_CXX_FLAGS " /W4")
endif()
#Only treat warnings as errors on the x64 platform. For x86, right now there are too many warnings to fix
if (CMAKE_SIZEOF_VOID_P EQUAL 8 AND onnxruntime_DEV_MODE)
# treat warnings as errors
string(APPEND CMAKE_CXX_FLAGS " /WX")
foreach(type EXE STATIC SHARED)
set(CMAKE_${type}_LINKER_FLAGS "${CMAKE_${type}_LINKER_FLAGS} /WX")
endforeach()
endif()
else()
add_definitions(-DPLATFORM_POSIX)
# Enable warning in Linux
string(APPEND CMAKE_CXX_FLAGS " -Wall -Wextra")
string(APPEND CMAKE_C_FLAGS " -Wall -Wextra")
if(onnxruntime_DEV_MODE)
string(APPEND CMAKE_CXX_FLAGS " -Werror")
string(APPEND CMAKE_C_FLAGS " -Werror")
endif()
check_cxx_compiler_flag(-Wunused-but-set-variable HAS_UNUSED_BUT_SET_VARIABLE)
check_cxx_compiler_flag(-Wunused-parameter HAS_UNUSED_PARAMETER)
check_cxx_compiler_flag(-Wcast-function-type HAS_CAST_FUNCTION_TYPE)
check_cxx_compiler_flag(-Wparentheses HAS_PARENTHESES)
check_cxx_compiler_flag(-Wnull-dereference HAS_NULL_DEREFERENCE)
check_cxx_compiler_flag(-Wuseless-cast HAS_USELESS_CAST)
check_cxx_compiler_flag(-Wnonnull-compare HAS_NONNULL_COMPARE)
check_cxx_compiler_flag(-Wtautological-pointer-compare HAS_TAUTOLOGICAL_POINTER_COMPARE)
check_cxx_compiler_flag(-Wcatch-value HAS_CATCH_VALUE)
if(HAS_NULL_DEREFERENCE)
string(APPEND CMAKE_CXX_FLAGS " -Wnull-dereference")
string(APPEND CMAKE_C_FLAGS " -Wnull-dereference")
endif()
if(HAS_TAUTOLOGICAL_POINTER_COMPARE)
#We may have extra null pointer checks in debug builds; it's not an issue
string(APPEND CMAKE_CXX_FLAGS_DEBUG " -Wno-tautological-pointer-compare")
string(APPEND CMAKE_C_FLAGS_DEBUG " -Wno-tautological-pointer-compare")
endif()
if(HAS_NONNULL_COMPARE)
#We may have extra null pointer checks in debug builds; it's not an issue
string(APPEND CMAKE_CXX_FLAGS_DEBUG " -Wno-nonnull-compare")
string(APPEND CMAKE_C_FLAGS_DEBUG " -Wno-nonnull-compare")
endif()
string(APPEND CMAKE_CXX_FLAGS " -Wno-error=sign-compare -Wno-error=comment")
if(onnxruntime_USE_CUDA)
string(APPEND CMAKE_CXX_FLAGS " -Wno-error=reorder")
endif()
if(HAS_PARENTHESES)
string(APPEND CMAKE_CXX_FLAGS " -Wno-parentheses")
endif()
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang" OR "${CMAKE_CXX_COMPILER_ID}" STREQUAL "AppleClang")
string(APPEND CMAKE_CXX_FLAGS " -Wno-error=invalid-partial-specialization -Wno-error=missing-braces -Wno-error=inconsistent-missing-override")
endif()
endif()
if (onnxruntime_USE_TVM)
if (WIN32 AND MSVC)
# wd4100: 'identifier': unreferenced formal parameter
# wd4244: conversion from 'int' to 'char', possible loss of data
# wd4251: class X needs to have dll-interface to be used by clients of class Y
# wd4267: 'initializing': conversion from 'size_t' to 'int', possible loss of data
# wd4275: non dll-interface class X used as base for dll-interface class Y
# wd4389: signed/unsigned mismatch
# wd4456: declaration of X hides previous local declaration
set(DISABLED_WARNINGS_FOR_TVM "/wd4100" "/wd4244" "/wd4251" "/wd4267" "/wd4275" "/wd4389" "/wd4456")
else()
set(DISABLED_WARNINGS_FOR_TVM "-Wno-error=extra" "-Wno-error=ignored-qualifiers")
if(HAS_UNUSED_PARAMETER)
list(APPEND DISABLED_WARNINGS_FOR_TVM "-Wno-error=unused-parameter")
endif()
if(HAS_CATCH_VALUE)
#TODO: send a PR to TVM and fix it
list(APPEND DISABLED_WARNINGS_FOR_TVM "-Wno-error=catch-value")
endif()
endif()
include(onnxruntime_codegen.cmake)
endif()
if (onnxruntime_USE_JEMALLOC)
if (WIN32)
message( FATAL_ERROR "Jemalloc is not supported on Windows." )
endif()
include(jemalloc)
add_definitions(-DUSE_JEMALLOC=1)
list(APPEND onnxruntime_EXTERNAL_LIBRARIES ${JEMALLOC_STATIC_LIBRARIES})
list(APPEND onnxruntime_EXTERNAL_DEPENDENCIES jemalloc)
endif()
include_directories(
${ONNXRUNTIME_INCLUDE_DIR}
$<TARGET_PROPERTY:GSL,INTERFACE_INCLUDE_DIRECTORIES>
)
if (onnxruntime_USE_MKLDNN)
add_definitions(-DUSE_MKLDNN=1)
include(mkldnn)
list(APPEND onnxruntime_EXTERNAL_LIBRARIES mkldnn)
list(APPEND onnxruntime_EXTERNAL_DEPENDENCIES mkldnn)
link_directories(${MKLDNN_LIB_DIR})
endif()
if (onnxruntime_USE_OPENBLAS)
add_definitions(-DUSE_OPENBLAS=1)
if (WIN32)
include_directories(${onnxruntime_OPENBLAS_HOME})
list(APPEND onnxruntime_EXTERNAL_LIBRARIES ${onnxruntime_OPENBLAS_HOME}/lib/libopenblas.lib)
else()
# on linux we assume blas is installed via 'apt-get install libopenblas-dev'
list(APPEND onnxruntime_EXTERNAL_LIBRARIES openblas)
endif()
endif()
configure_file(onnxruntime_config.h.in ${CMAKE_CURRENT_BINARY_DIR}/onnxruntime_config.h)
if (onnxruntime_USE_CUDA)
add_definitions(-DUSE_CUDA=1)
enable_language(CUDA)
set(CMAKE_CUDA_STANDARD 14)
find_package(CUDA 9.0 REQUIRED)
file(TO_CMAKE_PATH ${onnxruntime_CUDNN_HOME} onnxruntime_CUDNN_HOME)
include(cub)
set(CUDA_LINK_LIBRARIES_KEYWORD PRIVATE)
file(TO_CMAKE_PATH ${onnxruntime_CUDNN_HOME} onnxruntime_CUDNN_HOME)
if (WIN32)
link_directories(${onnxruntime_CUDNN_HOME}/lib/x64)
set(ONNXRUNTIME_CUDA_LIBRARIES cudnn cublas)
else()
link_directories(${onnxruntime_CUDNN_HOME}/lib64)
set(ONNXRUNTIME_CUDA_LIBRARIES cudnn_static cublas_static culibos)
endif()
list(APPEND onnxruntime_EXTERNAL_LIBRARIES ${ONNXRUNTIME_CUDA_LIBRARIES})
list(APPEND onnxruntime_EXTERNAL_DEPENDENCIES cub)
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_30,code=sm_30") # K series
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_50,code=sm_50") # M series
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_60,code=sm_60") # P series
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} -gencode=arch=compute_70,code=sm_70") # V series
set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} --default-stream per-thread")
if (NOT WIN32)
set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} --compiler-options -fPIC")
endif()
endif()
#names in this var must match the directory names under onnxruntime/core/providers
set(ONNXRUNTIME_PROVIDER_NAMES cpu)
include(onnxruntime_common.cmake)
include(onnxruntime_graph.cmake)
include(onnxruntime_framework.cmake)
include(onnxruntime_util.cmake)
include(onnxruntime_providers.cmake)
include(onnxruntime_session.cmake)
include(onnxruntime_mlas.cmake)
if (onnxruntime_BUILD_SHARED_LIB)
include(onnxruntime.cmake)
endif()
if (onnxruntime_ENABLE_PYTHON)
if(UNIX)
set(CMAKE_SKIP_BUILD_RPATH ON)
endif()
include(onnxruntime_python.cmake)
endif()
if (onnxruntime_BUILD_CSHARP)
message(STATUS "CSharp Build is enabled")
# set_property(GLOBAL PROPERTY VS_DOTNET_TARGET_FRAMEWORK_VERSION "netstandard2.0")
include(onnxruntime_csharp.cmake)
endif()
# some of the tests rely on the shared libs to be
# built; hence the ordering
if (onnxruntime_BUILD_UNIT_TESTS)
# we need to make sure this is turned off since it is
# turned ON by the previous step when building a shared lib
set(CMAKE_SKIP_BUILD_RPATH OFF)
include(onnxruntime_unittests.cmake)
endif()


@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<EnableCppCoreCheck>true</EnableCppCoreCheck>
<CodeAnalysisRuleSet>NativeRecommendedRules.ruleset</CodeAnalysisRuleSet>
<RunCodeAnalysis>false</RunCodeAnalysis>
<!-- External libraries are in or below the directory with the sln file. Source is under \onnxruntime so not affected by this.
Also need to exclude things under \cmake such as \cmake\external\protobuf.
Ideally we would just use $(SolutionDir);$(AdditionalIncludeDirectories), however cmake includes this
file prior to $(AdditionalIncludeDirectories) having all our include directories added, so that doesn't work.
-->
<CAExcludePath>$(SolutionDir);$(SolutionDir)..\..\..\cmake;</CAExcludePath>
</PropertyGroup>
</Project>


@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup Condition="'$(Platform)'=='x64'">
<EnableCppCoreCheck>true</EnableCppCoreCheck>
<CodeAnalysisRuleSet>NativeRecommendedRules.ruleset</CodeAnalysisRuleSet>
<!-- External libraries are in or below the directory with the sln file. Source is under \onnxruntime so not affected by this.
Also need to exclude things under \cmake such as \cmake\external\protobuf, and the easiest way to do that in all
environments is to use the directory this file is in.
-->
<CAExcludePath>$(SolutionDir);$(MSBuildThisFileDirectory)</CAExcludePath>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<RunCodeAnalysis>true</RunCodeAnalysis>
<!-- the C++ Core Guidelines Code Analysis is constantly being updated. We have different VS versions
in different CI setups, as well as locally, so the list of warnings is not consistent. New VS versions
add new checks, and fix false positives. Due to that, setting warnings to errors would make for a
painful setup that wasted too much developer time chasing down different warnings between local,
CPU CI and GPU CI builds.
Local builds on the latest VS version should compile warning free.
<CodeAnalysisTreatWarningsAsErrors>false</CodeAnalysisTreatWarningsAsErrors>
-->
</PropertyGroup>
</Project>

56
cmake/external/FindNumPy.cmake vendored Normal file

@@ -0,0 +1,56 @@
# - Find the NumPy libraries
# This module finds if NumPy is installed, and sets the following variables
# indicating where it is.
#
# TODO: Update to provide the libraries and paths for linking npymath lib.
#
# NUMPY_FOUND - was NumPy found
# NUMPY_VERSION - the version of NumPy found as a string
# NUMPY_VERSION_MAJOR - the major version number of NumPy
# NUMPY_VERSION_MINOR - the minor version number of NumPy
# NUMPY_VERSION_PATCH - the patch version number of NumPy
# NUMPY_VERSION_DECIMAL - e.g. version 1.6.1 is 10601
# NUMPY_INCLUDE_DIR - path to the NumPy include files
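#
# Minimal usage sketch (the Python interpreter must already have been found):
#   find_package(PythonInterp 3.5 REQUIRED)
#   find_package(NumPy)
#   if(NUMPY_FOUND)
#     include_directories(${NUMPY_INCLUDE_DIR})
#   endif()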
unset(NUMPY_VERSION)
unset(NUMPY_INCLUDE_DIR)
if(PYTHONINTERP_FOUND)
execute_process(COMMAND "${PYTHON_EXECUTABLE}" "-c"
"import numpy as n; print(n.__version__); print(n.get_include());"
RESULT_VARIABLE __result
OUTPUT_VARIABLE __output
OUTPUT_STRIP_TRAILING_WHITESPACE)
if(__result MATCHES 0)
string(REGEX REPLACE ";" "\\\\;" __values ${__output})
string(REGEX REPLACE "\r?\n" ";" __values ${__values})
list(GET __values 0 NUMPY_VERSION)
list(GET __values 1 NUMPY_INCLUDE_DIR)
string(REGEX MATCH "^([0-9])+\\.([0-9])+\\.([0-9])+" __ver_check "${NUMPY_VERSION}")
if(NOT "${__ver_check}" STREQUAL "")
set(NUMPY_VERSION_MAJOR ${CMAKE_MATCH_1})
set(NUMPY_VERSION_MINOR ${CMAKE_MATCH_2})
set(NUMPY_VERSION_PATCH ${CMAKE_MATCH_3})
math(EXPR NUMPY_VERSION_DECIMAL
"(${NUMPY_VERSION_MAJOR} * 10000) + (${NUMPY_VERSION_MINOR} * 100) + ${NUMPY_VERSION_PATCH}")
string(REGEX REPLACE "\\\\" "/" NUMPY_INCLUDE_DIR ${NUMPY_INCLUDE_DIR})
else()
unset(NUMPY_VERSION)
unset(NUMPY_INCLUDE_DIR)
message(STATUS "Requested NumPy version and include path, but got instead:\n${__output}\n")
endif()
endif()
else()
message(STATUS "To find NumPy, the Python interpreter must be found first.")
endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(NumPy REQUIRED_VARS NUMPY_INCLUDE_DIR NUMPY_VERSION
VERSION_VAR NUMPY_VERSION)
if(NUMPY_FOUND)
message(STATUS "NumPy ver. ${NUMPY_VERSION} found (include: ${NUMPY_INCLUDE_DIR})")
endif()

30
cmake/external/cub.cmake vendored Normal file

@@ -0,0 +1,30 @@
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
include (ExternalProject)
set(cub_URL https://github.com/NVlabs/cub/archive/v1.8.0.zip)
set(cub_HASH SHA256=6bfa06ab52a650ae7ee6963143a0bbc667d6504822cbd9670369b598f18c58c3)
set(cub_BUILD ${CMAKE_CURRENT_BINARY_DIR}/cub/src/cub)
set(cub_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/cub/src/cub)
set(cub_ARCHIVE_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/cub_archive)
ExternalProject_Add(cub
PREFIX cub
URL ${cub_URL}
URL_HASH ${cub_HASH}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_IN_SOURCE 1
PATCH_COMMAND ${CMAKE_COMMAND} -E copy_if_different ${CMAKE_CURRENT_SOURCE_DIR}/patches/cub/CMakeLists.txt ${cub_BUILD}
INSTALL_COMMAND ${CMAKE_COMMAND} -E copy_directory ${cub_INCLUDE_DIR}/cub ${cub_ARCHIVE_DIR}/cub)

1
cmake/external/date vendored Submodule

@@ -0,0 +1 @@
Subproject commit e7e1482087f58913b80a20b04d5c58d9d6d90155

28
cmake/external/eigen.cmake vendored Normal file

@@ -0,0 +1,28 @@
include (ExternalProject)
if (onnxruntime_USE_PREINSTALLED_EIGEN)
set(eigen_INCLUDE_DIRS ${eigen_SOURCE_PATH})
ExternalProject_Add(eigen
PREFIX eigen
SOURCE_DIR ${eigen_SOURCE_PATH}
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND ""
DOWNLOAD_COMMAND ""
UPDATE_COMMAND ""
)
else ()
set(eigen_URL "https://github.com/eigenteam/eigen-git-mirror.git")
set(eigen_TAG "3.3.4")
set(eigen_ROOT_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/eigen)
set(eigen_INCLUDE_DIRS ${eigen_ROOT_DIR})
ExternalProject_Add(eigen
PREFIX eigen
GIT_REPOSITORY ${eigen_URL}
GIT_TAG ${eigen_TAG}
SOURCE_DIR ${eigen_ROOT_DIR}
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND ""
)
endif()

1
cmake/external/googletest vendored Submodule

@@ -0,0 +1 @@
Subproject commit 9bda90b7e5e08c4c37a832d0cea218aed6af6470

1
cmake/external/gsl vendored Submodule

@@ -0,0 +1 @@
Subproject commit cee3125af7208258d024a75e24f73977eddaec5b

24
cmake/external/jemalloc.cmake vendored Normal file

@@ -0,0 +1,24 @@
include (ExternalProject)
set(JEMALLOC_URL https://github.com/jemalloc/jemalloc/releases/download/4.1.1/jemalloc-4.1.1.tar.bz2)
set(JEMALLOC_BUILD ${CMAKE_CURRENT_BINARY_DIR}/jemalloc/src/jemalloc)
set(JEMALLOC_INSTALL ${CMAKE_CURRENT_BINARY_DIR}/jemalloc/install)
if(NOT WIN32)
set(JEMALLOC_STATIC_LIBRARIES ${CMAKE_CURRENT_BINARY_DIR}/jemalloc/install/lib/libjemalloc_pic.a)
else()
message( FATAL_ERROR "Jemalloc is not supported on Windows." )
endif()
ExternalProject_Add(jemalloc
PREFIX jemalloc
URL ${JEMALLOC_URL}
INSTALL_DIR ${JEMALLOC_INSTALL}
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
BUILD_COMMAND $(MAKE)
BUILD_IN_SOURCE 1
INSTALL_COMMAND $(MAKE) install
CONFIGURE_COMMAND
${CMAKE_CURRENT_BINARY_DIR}/jemalloc/src/jemalloc/configure
--prefix=${JEMALLOC_INSTALL}
)

55
cmake/external/mkldnn.cmake vendored Normal file

@@ -0,0 +1,55 @@
include (ExternalProject)
set(MKLDNN_URL https://github.com/intel/mkl-dnn.git)
# If MKLDNN_TAG is updated, check if platform.cmake.patch or mkldnn_sgemm.patch needs to be updated.
set(MKLDNN_TAG v0.15)
set(MKLDNN_SOURCE ${CMAKE_CURRENT_BINARY_DIR}/mkl-dnn/src/mkl-dnn/src)
set(MKLDNN_INSTALL ${CMAKE_CURRENT_BINARY_DIR}/mkl-dnn/install)
set(MKLDNN_LIB_DIR ${MKLDNN_INSTALL}/lib)
set(MKLDNN_INCLUDE_DIR ${MKLDNN_INSTALL}/include)
# patch for mkldnn_sgemm thread safety bug.
# it can be removed once a fix is available in a validated mkldnn release version.
set(MKLDNN_PATCH_COMMAND1 git apply ${CMAKE_SOURCE_DIR}/patches/mkldnn/mkldnn_sgemm.patch)
if(WIN32)
set(MKLDNN_SHARED_LIB mkldnn.dll)
set(MKLDNN_IMPORT_LIB mkldnn.lib)
if(onnxruntime_USE_MKLML)
set(DOWNLOAD_MKLML ${MKLDNN_SOURCE}/scripts/prepare_mkl.bat)
set(MKLML_SHARED_LIB mklml.dll)
set(IOMP5MD_SHARED_LIB libiomp5md.dll)
endif()
set(MKLDNN_PATCH_COMMAND2 "")
else()
set(MKLDNN_SHARED_LIB libmkldnn.so.0)
if(onnxruntime_USE_MKLML)
set(DOWNLOAD_MKLML ${MKLDNN_SOURCE}/scripts/prepare_mkl.sh)
set(MKLML_SHARED_LIB libmklml_intel.so)
set(IOMP5MD_SHARED_LIB libiomp5.so)
endif()
set(MKLDNN_PATCH_COMMAND2 git apply ${CMAKE_SOURCE_DIR}/patches/mkldnn/platform.cmake.patch)
endif()
if(NOT onnxruntime_USE_MKLDNN OR EXISTS ${MKLDNN_SOURCE}/external)
set(DOWNLOAD_MKLML "")
endif()
ExternalProject_Add(project_mkldnn
PREFIX mkl-dnn
GIT_REPOSITORY ${MKLDNN_URL}
GIT_TAG ${MKLDNN_TAG}
PATCH_COMMAND ${DOWNLOAD_MKLML} COMMAND ${MKLDNN_PATCH_COMMAND1} COMMAND ${MKLDNN_PATCH_COMMAND2}
SOURCE_DIR ${MKLDNN_SOURCE}
CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} -DCMAKE_INSTALL_PREFIX=${MKLDNN_INSTALL}
)
if(WIN32)
add_library(mkldnn STATIC IMPORTED)
set_property(TARGET mkldnn PROPERTY IMPORTED_LOCATION ${MKLDNN_LIB_DIR}/${MKLDNN_IMPORT_LIB})
else()
add_library(mkldnn SHARED IMPORTED)
set_property(TARGET mkldnn PROPERTY IMPORTED_LOCATION ${MKLDNN_LIB_DIR}/${MKLDNN_SHARED_LIB})
endif()
add_dependencies(mkldnn project_mkldnn)
include_directories(${MKLDNN_INCLUDE_DIR})

1
cmake/external/onnx vendored Submodule

@@ -0,0 +1 @@
Subproject commit eb4b7c2cc2a0d34c0127e26c2c1cb5e712467e1e

1
cmake/external/protobuf vendored Submodule

@@ -0,0 +1 @@
Subproject commit 48cb18e5c419ddd23d9badcfe4e9df7bde1979b2

14
cmake/external/pybind11.cmake vendored Normal file

@@ -0,0 +1,14 @@
include(ExternalProject)
set(pybind11_INCLUDE_DIRS ${CMAKE_CURRENT_BINARY_DIR}/pybind11/src/pybind11/include)
set(pybind11_URL https://github.com/pybind/pybind11.git)
set(pybind11_TAG v2.2.4)
ExternalProject_Add(pybind11
PREFIX pybind11
GIT_REPOSITORY ${pybind11_URL}
GIT_TAG ${pybind11_TAG}
CONFIGURE_COMMAND ""
BUILD_COMMAND ""
INSTALL_COMMAND ""
)

1
cmake/external/tvm vendored Submodule

@@ -0,0 +1 @@
Subproject commit c2b36154778503a509a70a3b5309b201969eccab

47
cmake/external/zlib.cmake vendored Normal file

@@ -0,0 +1,47 @@
include (ExternalProject)
set(zlib_INCLUDE_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/zlib_archive)
set(ZLIB_URL https://github.com/madler/zlib)
set(ZLIB_BUILD ${CMAKE_CURRENT_BINARY_DIR}/zlib/src/zlib)
set(ZLIB_INSTALL ${CMAKE_CURRENT_BINARY_DIR}/zlib/install)
set(ZLIB_TAG 50893291621658f355bc5b4d450a8d06a563053d)
if(WIN32)
set(zlib_STATIC_LIBRARIES
debug ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstaticd.lib
optimized ${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/zlibstatic.lib)
else()
set(zlib_STATIC_LIBRARIES
${CMAKE_CURRENT_BINARY_DIR}/zlib/install/lib/libz.a)
endif()
set(ZLIB_HEADERS
"${ZLIB_INSTALL}/include/zconf.h"
"${ZLIB_INSTALL}/include/zlib.h"
)
ExternalProject_Add(zlib
PREFIX zlib
GIT_REPOSITORY ${ZLIB_URL}
GIT_TAG ${ZLIB_TAG}
INSTALL_DIR ${ZLIB_INSTALL}
BUILD_IN_SOURCE 1
DOWNLOAD_DIR "${DOWNLOAD_LOCATION}"
CMAKE_CACHE_ARGS
-DCMAKE_BUILD_TYPE:STRING=Release
-DCMAKE_INSTALL_PREFIX:STRING=${ZLIB_INSTALL}
-DCMAKE_POSITION_INDEPENDENT_CODE:BOOL=ON
)
# put zlib includes in the directory where they are expected
add_custom_target(zlib_create_destination_dir
COMMAND ${CMAKE_COMMAND} -E make_directory ${zlib_INCLUDE_DIR}
DEPENDS zlib)
add_custom_target(zlib_copy_headers_to_destination
DEPENDS zlib_create_destination_dir)
foreach(header_file ${ZLIB_HEADERS})
add_custom_command(TARGET zlib_copy_headers_to_destination PRE_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${header_file} ${zlib_INCLUDE_DIR})
endforeach()

51
cmake/onnx/CMakeLists.txt Normal file

@@ -0,0 +1,51 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
add_library(onnx_proto ${ONNXRUNTIME_ROOT}/core/protobuf/onnx-ml.proto ${ONNXRUNTIME_ROOT}/core/protobuf/onnx-operators-ml.proto)
target_include_directories(onnx_proto PUBLIC $<TARGET_PROPERTY:protobuf::libprotobuf,INTERFACE_INCLUDE_DIRECTORIES> "${CMAKE_CURRENT_BINARY_DIR}/..")
onnxruntime_protobuf_generate(APPEND_PATH IMPORT_DIRS TARGET onnx_proto)
# C++ tests were added and they require googletest;
# since we have our own copy, try using that
set(ONNX_SOURCE_ROOT ${PROJECT_SOURCE_DIR}/external/onnx)
file(GLOB_RECURSE onnx_src
"${ONNX_SOURCE_ROOT}/onnx/*.h"
"${ONNX_SOURCE_ROOT}/onnx/*.cc"
)
file(GLOB_RECURSE onnx_exclude_src
"${ONNX_SOURCE_ROOT}/onnx/py_utils.h"
"${ONNX_SOURCE_ROOT}/onnx/proto_utils.h"
"${ONNX_SOURCE_ROOT}/onnx/backend/test/cpp/*"
"${ONNX_SOURCE_ROOT}/onnx/test/*"
"${ONNX_SOURCE_ROOT}/onnx/cpp2py_export.cc"
)
list(REMOVE_ITEM onnx_src ${onnx_exclude_src})
add_library(onnx ${onnx_src})
add_dependencies(onnx onnx_proto)
target_include_directories(onnx PUBLIC $<TARGET_PROPERTY:onnx_proto,INTERFACE_INCLUDE_DIRECTORIES> "${ONNX_SOURCE_ROOT}")
target_compile_definitions(onnx PUBLIC "ONNX_ML" "ONNX_NAMESPACE=onnx")
if (WIN32)
target_compile_options(onnx PRIVATE
/wd4800 # 'type' : forcing value to bool 'true' or 'false' (performance warning)
/wd4125 # decimal digit terminates octal escape sequence
/wd4100 # 'param' : unreferenced formal parameter
/wd4244 # 'argument' conversion from 'google::protobuf::int64' to 'int', possible loss of data
/EHsc # exception handling - C++ may throw, extern "C" will not
)
set(onnx_static_library_flags
-IGNORE:4221 # LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library
)
set_target_properties(onnx PROPERTIES
STATIC_LIBRARY_FLAGS "${onnx_static_library_flags}")
else()
if(HAS_UNUSED_PARAMETER)
target_compile_options(onnx PRIVATE "-Wno-unused-parameter")
target_compile_options(onnx_proto PRIVATE "-Wno-unused-parameter")
endif()
if(HAS_UNUSED_BUT_SET_VARIABLE)
target_compile_options(onnx PRIVATE "-Wno-unused-but-set-variable")
target_compile_options(onnx_proto PRIVATE "-Wno-unused-but-set-variable")
endif()
endif()

64
cmake/onnxruntime.cmake Normal file

@@ -0,0 +1,64 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
if(UNIX)
set(SYMBOL_FILE ${CMAKE_CURRENT_BINARY_DIR}/onnxruntime.lds)
set(OUTPUT_STYLE gcc)
else()
set(SYMBOL_FILE ${CMAKE_CURRENT_BINARY_DIR}/onnxruntime_dll.def)
set(OUTPUT_STYLE vc)
endif()
list(APPEND SYMBOL_FILES "${REPO_ROOT}/tools/ci_build/gen_def.py")
foreach(f ${ONNXRUNTIME_PROVIDER_NAMES})
list(APPEND SYMBOL_FILES "${ONNXRUNTIME_ROOT}/core/providers/${f}/symbols.txt")
endforeach()
add_custom_command(OUTPUT ${SYMBOL_FILE}
COMMAND ${PYTHON_EXECUTABLE} "${REPO_ROOT}/tools/ci_build/gen_def.py" --version_file "${ONNXRUNTIME_ROOT}/../VERSION_NUMBER" --src_root "${ONNXRUNTIME_ROOT}" --config ${ONNXRUNTIME_PROVIDER_NAMES} --style=${OUTPUT_STYLE} --output ${SYMBOL_FILE}
DEPENDS ${SYMBOL_FILES}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
add_custom_target(onnxruntime_generate_def ALL DEPENDS ${SYMBOL_FILE})
add_library(onnxruntime SHARED ${onnxruntime_session_srcs})
set_target_properties(onnxruntime PROPERTIES VERSION ${VERSION_NUMBER})
add_dependencies(onnxruntime onnxruntime_generate_def ${onnxruntime_EXTERNAL_DEPENDENCIES})
target_include_directories(onnxruntime PRIVATE ${ONNXRUNTIME_ROOT} ${date_INCLUDE_DIR})
if(UNIX)
set(BEGIN_WHOLE_ARCHIVE -Xlinker --whole-archive)
set(END_WHOLE_ARCHIVE -Xlinker --no-whole-archive)
set(ONNXRUNTIME_SO_LINK_FLAG "-Xlinker --version-script=${SYMBOL_FILE} -Xlinker --no-undefined")
else()
set(ONNXRUNTIME_SO_LINK_FLAG "-DEF:${SYMBOL_FILE}")
endif()
target_link_libraries(onnxruntime PRIVATE
${BEGIN_WHOLE_ARCHIVE}
${onnxruntime_libs}
${PROVIDERS_CUDA}
${PROVIDERS_MKLDNN}
onnxruntime_providers
onnxruntime_util
onnxruntime_framework
${END_WHOLE_ARCHIVE}
onnxruntime_graph
onnxruntime_common
onnx
onnx_proto
onnxruntime_mlas
${onnxruntime_tvm_libs}
${onnxruntime_EXTERNAL_LIBRARIES}
${CMAKE_THREAD_LIBS_INIT}
${ONNXRUNTIME_CUDA_LIBRARIES})
set_property(TARGET onnxruntime APPEND_STRING PROPERTY LINK_FLAGS ${ONNXRUNTIME_SO_LINK_FLAG})
set_target_properties(onnxruntime PROPERTIES LINK_DEPENDS ${SYMBOL_FILE})
install(TARGETS onnxruntime
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
set_target_properties(onnxruntime PROPERTIES FOLDER "ONNXRuntime")


@@ -0,0 +1,16 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
file(GLOB_RECURSE onnxruntime_codegen_tvm_srcs
"${ONNXRUNTIME_ROOT}/core/codegen/tvm/*.h"
"${ONNXRUNTIME_ROOT}/core/codegen/tvm/*.cc"
)
add_library(onnxruntime_codegen_tvm ${onnxruntime_codegen_tvm_srcs})
set_target_properties(onnxruntime_codegen_tvm PROPERTIES FOLDER "ONNXRuntime")
target_include_directories(onnxruntime_codegen_tvm PRIVATE ${ONNXRUNTIME_ROOT} ${TVM_INCLUDES})
onnxruntime_add_include_to_target(onnxruntime_codegen_tvm onnx protobuf::libprotobuf)
target_compile_options(onnxruntime_codegen_tvm PRIVATE ${DISABLED_WARNINGS_FOR_TVM})
# need onnx to build to create headers that this project includes
add_dependencies(onnxruntime_codegen_tvm onnxruntime_framework tvm ${onnxruntime_EXTERNAL_DEPENDENCIES})


@@ -0,0 +1,54 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
set(onnxruntime_common_src_patterns
"${ONNXRUNTIME_INCLUDE_DIR}/core/common/*.h"
"${ONNXRUNTIME_INCLUDE_DIR}/core/common/logging/*.h"
"${ONNXRUNTIME_ROOT}/core/common/*.h"
"${ONNXRUNTIME_ROOT}/core/common/*.cc"
"${ONNXRUNTIME_ROOT}/core/common/logging/*.h"
"${ONNXRUNTIME_ROOT}/core/common/logging/*.cc"
"${ONNXRUNTIME_ROOT}/core/common/logging/sinks/*.h"
"${ONNXRUNTIME_ROOT}/core/common/logging/sinks/*.cc"
"${ONNXRUNTIME_ROOT}/core/inc/*.h"
"${ONNXRUNTIME_ROOT}/core/platform/env.h"
"${ONNXRUNTIME_ROOT}/core/platform/env.cc"
"${ONNXRUNTIME_ROOT}/core/platform/env_time.h"
"${ONNXRUNTIME_ROOT}/core/platform/env_time.cc"
)
if(WIN32)
list(APPEND onnxruntime_common_src_patterns
"${ONNXRUNTIME_ROOT}/core/platform/windows/*.h"
"${ONNXRUNTIME_ROOT}/core/platform/windows/*.cc"
"${ONNXRUNTIME_ROOT}/core/platform/windows/logging/*.h"
"${ONNXRUNTIME_ROOT}/core/platform/windows/logging/*.cc"
)
else()
list(APPEND onnxruntime_common_src_patterns
"${ONNXRUNTIME_ROOT}/core/platform/posix/*.h"
"${ONNXRUNTIME_ROOT}/core/platform/posix/*.cc"
)
endif()
file(GLOB onnxruntime_common_src ${onnxruntime_common_src_patterns})
source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_common_src})
add_library(onnxruntime_common ${onnxruntime_common_src})
if(NOT WIN32)
target_link_libraries(onnxruntime_common dl)
endif()
target_include_directories(onnxruntime_common PRIVATE ${ONNXRUNTIME_ROOT} ${date_INCLUDE_DIR})
# logging uses date. threadpool uses eigen
add_dependencies(onnxruntime_common date eigen gsl)
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/common DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/common)
set_target_properties(onnxruntime_common PROPERTIES LINKER_LANGUAGE CXX)
set_target_properties(onnxruntime_common PROPERTIES FOLDER "ONNXRuntime")
if(WIN32)
# Add Code Analysis properties to enable C++ Core checks. Have to do it via a props file include.
set_target_properties(onnxruntime_common PROPERTIES VS_USER_PROPS ${PROJECT_SOURCE_DIR}/EnableVisualStudioCodeAnalysis.props)
endif()


@@ -0,0 +1,13 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#cmakedefine HAS_UNUSED_BUT_SET_VARIABLE
#cmakedefine HAS_UNUSED_PARAMETER
#cmakedefine HAS_CAST_FUNCTION_TYPE
#cmakedefine HAS_PARENTHESES
#cmakedefine HAS_NULL_DEREFERENCE
#cmakedefine HAS_USELESS_CAST
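// Note: these HAS_* macros are defined (or left undefined) when configure_file()
// in cmake/CMakeLists.txt processes this header, based on the
// check_cxx_compiler_flag() probes performed there.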


@@ -0,0 +1,12 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
set (CSHARP_ROOT ${PROJECT_SOURCE_DIR}/../csharp)
set (CSHARP_MASTER_TARGET OnnxRuntime.CSharp)
set (CSHARP_MASTER_PROJECT ${CSHARP_ROOT}/OnnxRuntime.CSharp.proj )
include(CSharpUtilities)
include_external_msproject(${CSHARP_MASTER_TARGET}
${CSHARP_MASTER_PROJECT}
onnxruntime # make it depend on the native onnxruntime project
)

View file

@@ -0,0 +1,21 @@
digraph "GG" {
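// Dependency graph of the core build targets: diamond nodes are libraries, house-shaped nodes are test targets.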
node [
fontsize = "12"
];
"node12" [ label="onnxruntime_graph" shape="diamond"];
"node10" [ label="onnxruntime_common" shape="diamond"];
"node12" -> "node10" // onnxruntime_graph -> onnxruntime_common
"node4" [ label="onnx" shape="diamond"];
"node12" -> "node4" // onnxruntime_graph -> onnx
"node15" [ label="onnxruntime_framework" shape="diamond"];
"node15" -> "node12" // onnxruntime_framework -> onnxruntime_graph
"node15" -> "node10" // onnxruntime_framework -> onnxruntime_common
"node15" -> "node4" // onnxruntime_framework -> onnx
"node17" [ label="onnxruntime_providers" shape="diamond"];
"node17" -> "node10" // onnxruntime_providers -> onnxruntime_common
"node17" -> "node15" // onnxruntime_providers -> onnxruntime_framework
"node18" [ label="onnxruntime_test_common" shape="house"];
"node6" [ label="onnxruntime_test_framework" shape="house"];
"node19" [ label="onnxruntime_test_ir" shape="house"];
"node20" [ label="onnxruntime_test_providers" shape="house"];
}

View file

@@ -0,0 +1,25 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
file(GLOB_RECURSE onnxruntime_framework_srcs
"${ONNXRUNTIME_INCLUDE_DIR}/core/framework/*.h"
"${ONNXRUNTIME_ROOT}/core/framework/*.h"
"${ONNXRUNTIME_ROOT}/core/framework/*.cc"
)
source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_framework_srcs})
add_library(onnxruntime_framework ${onnxruntime_framework_srcs})
#TODO: remove ${eigen_INCLUDE_DIRS} from here
target_include_directories(onnxruntime_framework PRIVATE ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS})
onnxruntime_add_include_to_target(onnxruntime_framework onnx protobuf::libprotobuf)
set_target_properties(onnxruntime_framework PROPERTIES FOLDER "ONNXRuntime")
# onnx must be built first to generate the headers that this project includes
add_dependencies(onnxruntime_framework ${onnxruntime_EXTERNAL_DEPENDENCIES})
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/framework DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/framework)
if (WIN32)
# Add Code Analysis properties to enable C++ Core checks. Have to do it via a props file include.
set_target_properties(onnxruntime_framework PROPERTIES VS_USER_PROPS ${PROJECT_SOURCE_DIR}/ConfigureVisualStudioCodeAnalysis.props)
endif()

View file

@@ -0,0 +1,37 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
file(GLOB_RECURSE onnxruntime_graph_src
"${ONNXRUNTIME_INCLUDE_DIR}/core/graph/*.h"
"${ONNXRUNTIME_ROOT}/core/graph/*.h"
"${ONNXRUNTIME_ROOT}/core/graph/*.cc"
)
file(GLOB_RECURSE onnxruntime_ir_defs_src
"${ONNXRUNTIME_ROOT}/core/defs/*.cc"
)
add_library(onnxruntime_graph ${onnxruntime_graph_src} ${onnxruntime_ir_defs_src})
add_dependencies(onnxruntime_graph onnx_proto gsl)
onnxruntime_add_include_to_target(onnxruntime_graph onnx protobuf::libprotobuf)
target_include_directories(onnxruntime_graph PRIVATE ${ONNXRUNTIME_ROOT})
set_target_properties(onnxruntime_graph PROPERTIES FOLDER "ONNXRuntime")
set_target_properties(onnxruntime_graph PROPERTIES LINKER_LANGUAGE CXX)
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/graph DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/graph)
source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_graph_src} ${onnxruntime_ir_defs_src})
if (WIN32)
set(onnxruntime_graph_static_library_flags
-IGNORE:4221 # LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library
)
set_target_properties(onnxruntime_graph PROPERTIES
STATIC_LIBRARY_FLAGS "${onnxruntime_graph_static_library_flags}")
target_compile_options(onnxruntime_graph PRIVATE
/EHsc # exception handling - C++ may throw, extern "C" will not
)
# Add Code Analysis properties to enable C++ Core checks. Have to do it via a props file include.
set_target_properties(onnxruntime_graph PROPERTIES VS_USER_PROPS ${PROJECT_SOURCE_DIR}/EnableVisualStudioCodeAnalysis.props)
endif()

View file

@@ -0,0 +1,137 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
set(mlas_common_srcs
${ONNXRUNTIME_ROOT}/core/mlas/lib/platform.cpp
${ONNXRUNTIME_ROOT}/core/mlas/lib/sgemm.cpp
${ONNXRUNTIME_ROOT}/core/mlas/lib/convolve.cpp
${ONNXRUNTIME_ROOT}/core/mlas/lib/pooling.cpp
${ONNXRUNTIME_ROOT}/core/mlas/lib/bias.cpp
)
if (MSVC)
if (CMAKE_GENERATOR_PLATFORM STREQUAL "ARM")
set(mlas_platform_srcs
${ONNXRUNTIME_ROOT}/core/mlas/lib/arm/sgemmc.cpp
)
elseif (CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
set(asm_filename ${ONNXRUNTIME_ROOT}/core/mlas/lib/arm64/sgemma.asm)
set(pre_filename ${CMAKE_CURRENT_BINARY_DIR}/sgemma.i)
set(obj_filename ${CMAKE_CURRENT_BINARY_DIR}/sgemma.obj)
if (CMAKE_BUILD_TYPE STREQUAL "Debug")
set(ARMASM_FLAGS "-g")
else()
set(ARMASM_FLAGS "")
endif()
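# The ARM64 assembly source is first run through the MSVC preprocessor (cl.exe /P), and the
# resulting sgemma.i file is then assembled into an object file with armasm64.exe.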
add_custom_command(
OUTPUT ${obj_filename}
COMMAND
cl.exe /P ${asm_filename}
COMMAND
armasm64.exe ${ARMASM_FLAGS} ${pre_filename} ${obj_filename}
)
set(mlas_platform_srcs ${obj_filename})
elseif (CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
enable_language(ASM_MASM)
set(CMAKE_ASM_MASM_FLAGS "${CMAKE_ASM_MASM_FLAGS} /safeseh")
set(mlas_platform_srcs
${ONNXRUNTIME_ROOT}/core/mlas/lib/i386/sgemma.asm
)
elseif (CMAKE_GENERATOR_PLATFORM STREQUAL "x64")
enable_language(ASM_MASM)
set(mlas_platform_srcs
${ONNXRUNTIME_ROOT}/core/mlas/lib/amd64/SgemmKernelSse2.asm
${ONNXRUNTIME_ROOT}/core/mlas/lib/amd64/SgemmKernelAvx.asm
${ONNXRUNTIME_ROOT}/core/mlas/lib/amd64/SgemmKernelFma3.asm
${ONNXRUNTIME_ROOT}/core/mlas/lib/amd64/SgemmKernelAvx512F.asm
${ONNXRUNTIME_ROOT}/core/mlas/lib/amd64/sgemma.asm
${ONNXRUNTIME_ROOT}/core/mlas/lib/amd64/cvtfp16a.asm
)
endif()
else()
execute_process(
COMMAND ${CMAKE_C_COMPILER} -dumpmachine
OUTPUT_VARIABLE dumpmachine_output
ERROR_QUIET
)
if (dumpmachine_output MATCHES "^arm.*")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mfpu=neon")
set(mlas_platform_srcs
${ONNXRUNTIME_ROOT}/core/mlas/lib/arm/sgemmc.cpp
)
elseif (dumpmachine_output MATCHES "^aarch64.*")
enable_language(ASM)
set(mlas_platform_srcs
${ONNXRUNTIME_ROOT}/core/mlas/lib/aarch64/sgemma.s
)
elseif (CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
enable_language(ASM)
# The LLVM assembler does not support the .arch directive to enable instruction
# set extensions and also doesn't support AVX-512F instructions without
# turning on support via command-line option. Group the sources by the
# instruction set extension and explicitly set the compiler flag as appropriate.
set(mlas_platform_srcs_sse2
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmKernelSse2.S
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmTransposePackB16x4Sse2.S
)
set_source_files_properties(${mlas_platform_srcs_sse2} PROPERTIES COMPILE_FLAGS "-msse2")
set(mlas_platform_srcs_avx
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmKernelAvx.S
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmKernelM1Avx.S
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmKernelM1TransposeBAvx.S
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmTransposePackB16x4Avx.S
)
set_source_files_properties(${mlas_platform_srcs_avx} PROPERTIES COMPILE_FLAGS "-mavx")
set(mlas_platform_srcs_avx2
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmKernelFma3.S
)
set_source_files_properties(${mlas_platform_srcs_avx2} PROPERTIES COMPILE_FLAGS "-mavx2 -mfma")
set(mlas_platform_srcs_avx512f
${ONNXRUNTIME_ROOT}/core/mlas/lib/x86_64/SgemmKernelAvx512F.S
)
set_source_files_properties(${mlas_platform_srcs_avx512f} PROPERTIES COMPILE_FLAGS "-mavx512f")
set(mlas_platform_srcs
${mlas_platform_srcs_sse2}
${mlas_platform_srcs_avx}
${mlas_platform_srcs_avx2}
${mlas_platform_srcs_avx512f}
)
endif()
endif()
add_library(onnxruntime_mlas STATIC ${mlas_common_srcs} ${mlas_platform_srcs})
target_include_directories(onnxruntime_mlas PRIVATE ${ONNXRUNTIME_ROOT}/core/mlas/inc ${ONNXRUNTIME_ROOT}/core/mlas/lib)
set_target_properties(onnxruntime_mlas PROPERTIES FOLDER "ONNXRuntime")

View file

@@ -0,0 +1,104 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
file(GLOB_RECURSE onnxruntime_providers_srcs
"${ONNXRUNTIME_ROOT}/core/providers/cpu/*.h"
"${ONNXRUNTIME_ROOT}/core/providers/cpu/*.cc"
)
file(GLOB_RECURSE onnxruntime_contrib_ops_srcs
"${ONNXRUNTIME_ROOT}/contrib_ops/*.h"
"${ONNXRUNTIME_ROOT}/contrib_ops/*.cc"
"${ONNXRUNTIME_ROOT}/contrib_ops/cpu/*.h"
"${ONNXRUNTIME_ROOT}/contrib_ops/cpu/*.cc"
)
file(GLOB onnxruntime_providers_common_srcs
"${ONNXRUNTIME_ROOT}/core/providers/*.h"
"${ONNXRUNTIME_ROOT}/core/providers/*.cc"
)
if(onnxruntime_USE_MKLDNN)
set(PROVIDERS_MKLDNN onnxruntime_providers_mkldnn)
list(APPEND ONNXRUNTIME_PROVIDER_NAMES mkldnn)
endif()
if(onnxruntime_USE_CUDA)
set(PROVIDERS_CUDA onnxruntime_providers_cuda)
list(APPEND ONNXRUNTIME_PROVIDER_NAMES cuda)
endif()
source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_providers_common_srcs} ${onnxruntime_providers_srcs})
# add using ONNXRUNTIME_ROOT so they show up under the 'contrib_ops' folder in Visual Studio
source_group(TREE ${ONNXRUNTIME_ROOT} FILES ${onnxruntime_contrib_ops_srcs})
add_library(onnxruntime_providers ${onnxruntime_providers_common_srcs} ${onnxruntime_providers_srcs} ${onnxruntime_contrib_ops_srcs})
onnxruntime_add_include_to_target(onnxruntime_providers onnx protobuf::libprotobuf)
target_include_directories(onnxruntime_providers PRIVATE ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS})
add_dependencies(onnxruntime_providers eigen gsl onnx)
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/providers/cpu DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/providers/cpu)
set_target_properties(onnxruntime_providers PROPERTIES LINKER_LANGUAGE CXX)
set_target_properties(onnxruntime_providers PROPERTIES FOLDER "ONNXRuntime")
if (WIN32 AND onnxruntime_USE_OPENMP)
if (${CMAKE_CXX_COMPILER_ID} STREQUAL MSVC)
add_definitions(/openmp)
endif()
endif()
if (onnxruntime_USE_CUDA)
file(GLOB_RECURSE onnxruntime_providers_cuda_cc_srcs
"${ONNXRUNTIME_ROOT}/core/providers/cuda/*.h"
"${ONNXRUNTIME_ROOT}/core/providers/cuda/*.cc"
)
file(GLOB_RECURSE onnxruntime_providers_cuda_cu_srcs
"${ONNXRUNTIME_ROOT}/core/providers/cuda/*.cu"
"${ONNXRUNTIME_ROOT}/core/providers/cuda/*.cuh"
)
source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_providers_cuda_cc_srcs} ${onnxruntime_providers_cuda_cu_srcs})
if(NOT WIN32)
set(CUDA_CFLAGS " -std=c++14")
endif()
cuda_add_library(onnxruntime_providers_cuda ${onnxruntime_providers_cuda_cc_srcs} ${onnxruntime_providers_cuda_cu_srcs} OPTIONS ${CUDA_CFLAGS})
onnxruntime_add_include_to_target(onnxruntime_providers_cuda onnx protobuf::libprotobuf)
add_dependencies(onnxruntime_providers_cuda eigen ${onnxruntime_EXTERNAL_DEPENDENCIES} ${onnxruntime_tvm_dependencies})
target_include_directories(onnxruntime_providers_cuda PRIVATE ${ONNXRUNTIME_ROOT} ${onnxruntime_CUDNN_HOME}/include ${eigen_INCLUDE_DIRS} ${TVM_INCLUDES})
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/providers/cuda DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/providers/cuda)
set_target_properties(onnxruntime_providers_cuda PROPERTIES LINKER_LANGUAGE CUDA)
set_target_properties(onnxruntime_providers_cuda PROPERTIES FOLDER "ONNXRuntime")
if (WIN32)
# *.cu cannot use PCH
foreach(src_file ${onnxruntime_providers_cuda_cc_srcs})
set_source_files_properties(${src_file}
PROPERTIES
COMPILE_FLAGS "/Yucuda_pch.h /FIcuda_pch.h")
endforeach()
set_source_files_properties("${ONNXRUNTIME_ROOT}/core/providers/cuda/cuda_pch.cc"
PROPERTIES
COMPILE_FLAGS "/Yccuda_pch.h"
)
# disable a warning from the CUDA headers about unreferenced local functions
target_compile_options(onnxruntime_providers_cuda PRIVATE /wd4505)
if (onnxruntime_USE_TVM)
target_compile_options(onnxruntime_providers_cuda PRIVATE ${DISABLED_WARNINGS_FOR_TVM})
endif()
endif()
endif()
if (onnxruntime_USE_MKLDNN)
file(GLOB_RECURSE onnxruntime_providers_mkldnn_cc_srcs
"${ONNXRUNTIME_ROOT}/core/providers/mkldnn/*.h"
"${ONNXRUNTIME_ROOT}/core/providers/mkldnn/*.cc"
)
source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_providers_mkldnn_cc_srcs})
add_library(onnxruntime_providers_mkldnn ${onnxruntime_providers_mkldnn_cc_srcs})
onnxruntime_add_include_to_target(onnxruntime_providers_mkldnn onnx protobuf::libprotobuf)
add_dependencies(onnxruntime_providers_mkldnn eigen ${onnxruntime_EXTERNAL_DEPENDENCIES})
set_target_properties(onnxruntime_providers_mkldnn PROPERTIES FOLDER "ONNXRuntime")
target_include_directories(onnxruntime_providers_mkldnn PRIVATE ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS})
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/providers/mkldnn DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/providers/mkldnn)
set_target_properties(onnxruntime_providers_mkldnn PROPERTIES LINKER_LANGUAGE CXX)
endif()
if (onnxruntime_ENABLE_MICROSOFT_INTERNAL)
include(onnxruntime_providers_internal.cmake)
endif()

View file

@@ -0,0 +1,185 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
include(pybind11)
FIND_PACKAGE(NumPy)
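# 1. Resolve the Python include directory (for Python.h).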
if(NOT PYTHON_INCLUDE_DIR)
set(PYTHON_NOT_FOUND false)
exec_program("${PYTHON_EXECUTABLE}"
ARGS "-c \"import distutils.sysconfig; print(distutils.sysconfig.get_python_inc())\""
OUTPUT_VARIABLE PYTHON_INCLUDE_DIR
RETURN_VALUE PYTHON_NOT_FOUND)
if(${PYTHON_NOT_FOUND})
message(FATAL_ERROR
"Cannot get Python include directory. Is distutils installed?")
endif(${PYTHON_NOT_FOUND})
endif(NOT PYTHON_INCLUDE_DIR)
# 2. Resolve the installed version of NumPy (for numpy/arrayobject.h).
if(NOT NUMPY_INCLUDE_DIR)
set(NUMPY_NOT_FOUND false)
exec_program("${PYTHON_EXECUTABLE}"
ARGS "-c \"import numpy; print(numpy.get_include())\""
OUTPUT_VARIABLE NUMPY_INCLUDE_DIR
RETURN_VALUE NUMPY_NOT_FOUND)
if(${NUMPY_NOT_FOUND})
message(FATAL_ERROR
"Cannot get NumPy include directory: Is NumPy installed?")
endif(${NUMPY_NOT_FOUND})
endif(NOT NUMPY_INCLUDE_DIR)
# ---[ Python + Numpy
set(onnxruntime_pybind_srcs_pattern
"${ONNXRUNTIME_ROOT}/python/*.cc"
"${ONNXRUNTIME_ROOT}/python/*.h"
)
file(GLOB onnxruntime_pybind_srcs ${onnxruntime_pybind_srcs_pattern})
#TODO(): enable cuda and test it
add_library(onnxruntime_pybind11_state MODULE ${onnxruntime_pybind_srcs})
if(HAS_CAST_FUNCTION_TYPE)
target_compile_options(onnxruntime_pybind11_state PRIVATE "-Wno-cast-function-type")
endif()
target_include_directories(onnxruntime_pybind11_state PRIVATE ${ONNXRUNTIME_ROOT} ${PYTHON_INCLUDE_DIR} ${NUMPY_INCLUDE_DIR})
target_include_directories(onnxruntime_pybind11_state PRIVATE ${pybind11_INCLUDE_DIRS})
if(APPLE)
set(ONNXRUNTIME_SO_LINK_FLAG "-Xlinker -exported_symbols_list ${ONNXRUNTIME_ROOT}/python/exported_symbols.lst")
elseif(UNIX)
set(ONNXRUNTIME_SO_LINK_FLAG "-Xlinker --version-script=${ONNXRUNTIME_ROOT}/python/version_script.lds -Xlinker --no-undefined")
endif()
set(onnxruntime_pybind11_state_libs
onnxruntime_session
${onnxruntime_libs}
${PROVIDERS_CUDA}
${PROVIDERS_MKLDNN}
onnxruntime_providers
onnxruntime_framework
onnxruntime_util
onnxruntime_graph
onnx
onnx_proto
onnxruntime_common
onnxruntime_mlas
${onnxruntime_tvm_libs}
)
set(onnxruntime_pybind11_state_dependencies
${onnxruntime_EXTERNAL_DEPENDENCIES}
pybind11
)
add_dependencies(onnxruntime_pybind11_state ${onnxruntime_pybind11_state_dependencies})
if (MSVC)
# if MSVC, pybind11 looks for release version of python lib (pybind11/detail/common.h undefs _DEBUG)
target_link_libraries(onnxruntime_pybind11_state ${onnxruntime_pybind11_state_libs} ${onnxruntime_EXTERNAL_LIBRARIES} ${PYTHON_LIBRARY_RELEASE} ${ONNXRUNTIME_SO_LINK_FLAG})
else()
target_link_libraries(onnxruntime_pybind11_state ${onnxruntime_pybind11_state_libs} ${onnxruntime_EXTERNAL_LIBRARIES} ${PYTHON_LIBRARY} ${ONNXRUNTIME_SO_LINK_FLAG})
if (APPLE)
set_target_properties(onnxruntime_pybind11_state PROPERTIES INSTALL_RPATH "@loader_path")
else()
set_target_properties(onnxruntime_pybind11_state PROPERTIES LINK_FLAGS "-Xlinker -rpath=\$ORIGIN")
endif()
endif()
set_target_properties(onnxruntime_pybind11_state PROPERTIES PREFIX "")
set_target_properties(onnxruntime_pybind11_state PROPERTIES FOLDER "ONNXRuntime")
if (MSVC)
set_target_properties(onnxruntime_pybind11_state PROPERTIES SUFFIX ".pyd")
else()
set_target_properties(onnxruntime_pybind11_state PROPERTIES SUFFIX ".so")
endif()
file(GLOB onnxruntime_backend_srcs
"${ONNXRUNTIME_ROOT}/python/backend/*.py"
)
file(GLOB onnxruntime_python_srcs
"${ONNXRUNTIME_ROOT}/python/*.py"
)
file(GLOB onnxruntime_python_test_srcs
"${ONNXRUNTIME_ROOT}/test/python/*.py"
)
file(GLOB onnxruntime_python_tools_srcs
"${ONNXRUNTIME_ROOT}/python/tools/*.py"
)
file(GLOB onnxruntime_python_datasets_srcs
"${ONNXRUNTIME_ROOT}/python/datasets/*.py"
)
file(GLOB onnxruntime_python_datasets_data
"${ONNXRUNTIME_ROOT}/python/datasets/*.pb"
"${ONNXRUNTIME_ROOT}/python/datasets/*.onnx"
)
# adjust based on which target(s) onnxruntime_unittests.cmake created
if (SingleUnitTestProject)
set(test_data_target onnxruntime_test_all)
else()
set(test_data_target onnxruntime_test_ir)
endif()
add_custom_command(
TARGET onnxruntime_pybind11_state POST_BUILD
COMMAND ${CMAKE_COMMAND} -E make_directory $<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/backend
COMMAND ${CMAKE_COMMAND} -E make_directory $<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/capi
COMMAND ${CMAKE_COMMAND} -E make_directory $<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/datasets
COMMAND ${CMAKE_COMMAND} -E make_directory $<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/tools
COMMAND ${CMAKE_COMMAND} -E copy
${ONNXRUNTIME_ROOT}/__init__.py
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/
COMMAND ${CMAKE_COMMAND} -E copy
${REPO_ROOT}/ThirdPartyNotices.txt
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/
COMMAND ${CMAKE_COMMAND} -E copy
${REPO_ROOT}/LICENSE
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_python_test_srcs}
$<TARGET_FILE_DIR:${test_data_target}>
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_backend_srcs}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/backend/
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_python_srcs}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/capi/
COMMAND ${CMAKE_COMMAND} -E copy
$<TARGET_FILE:onnxruntime_pybind11_state>
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/capi/
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_python_datasets_srcs}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/datasets/
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_python_datasets_data}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/datasets/
COMMAND ${CMAKE_COMMAND} -E copy
${onnxruntime_python_tools_srcs}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/tools/
)
if (onnxruntime_USE_MKLDNN)
add_custom_command(
TARGET onnxruntime_pybind11_state POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${MKLDNN_LIB_DIR}/${MKLDNN_SHARED_LIB}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/capi/
)
endif()
if (onnxruntime_USE_TVM)
add_custom_command(
TARGET onnxruntime_pybind11_state POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
$<TARGET_FILE:tvm> $<TARGET_FILE:nnvm_compiler>
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/capi/
)
endif()
if (onnxruntime_USE_MKLML)
add_custom_command(
TARGET onnxruntime_pybind11_state POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${MKLDNN_LIB_DIR}/${MKLML_SHARED_LIB} ${MKLDNN_LIB_DIR}/${IOMP5MD_SHARED_LIB}
$<TARGET_FILE_DIR:${test_data_target}>/onnxruntime/capi/
)
endif()

View file

@@ -0,0 +1,19 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
file(GLOB onnxruntime_session_srcs
"${ONNXRUNTIME_INCLUDE_DIR}/core/session/*.h"
"${ONNXRUNTIME_ROOT}/core/session/*.h"
"${ONNXRUNTIME_ROOT}/core/session/*.cc"
)
source_group(TREE ${REPO_ROOT} FILES ${onnxruntime_session_srcs})
add_library(onnxruntime_session ${onnxruntime_session_srcs})
install(DIRECTORY ${PROJECT_SOURCE_DIR}/../include/onnxruntime/core/session DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/onnxruntime/core/session)
onnxruntime_add_include_to_target(onnxruntime_session onnx protobuf::libprotobuf)
target_include_directories(onnxruntime_session PRIVATE ${ONNXRUNTIME_ROOT})
add_dependencies(onnxruntime_session ${onnxruntime_EXTERNAL_DEPENDENCIES})
set_target_properties(onnxruntime_session PROPERTIES FOLDER "ONNXRuntime")

View file

@@ -0,0 +1,563 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
find_package(Threads)
set(TEST_SRC_DIR ${ONNXRUNTIME_ROOT}/test)
set(TEST_INC_DIR ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS} ${date_INCLUDE_DIR} ${CUDA_INCLUDE_DIRS} ${onnxruntime_CUDNN_HOME}/include)
if (onnxruntime_USE_TVM)
list(APPEND TEST_INC_DIR ${TVM_INCLUDES})
endif()
set(disabled_warnings)
set(extra_includes)
function(AddTest)
cmake_parse_arguments(_UT "" "TARGET" "LIBS;SOURCES;DEPENDS" ${ARGN})
list(REMOVE_DUPLICATES _UT_LIBS)
list(REMOVE_DUPLICATES _UT_SOURCES)
if (_UT_DEPENDS)
list(REMOVE_DUPLICATES _UT_DEPENDS)
endif(_UT_DEPENDS)
add_executable(${_UT_TARGET} ${_UT_SOURCES})
source_group(TREE ${TEST_SRC_DIR} FILES ${_UT_SOURCES})
set_target_properties(${_UT_TARGET} PROPERTIES FOLDER "ONNXRuntimeTest")
if (_UT_DEPENDS)
add_dependencies(${_UT_TARGET} ${_UT_DEPENDS} eigen)
endif(_UT_DEPENDS)
target_link_libraries(${_UT_TARGET} PRIVATE ${_UT_LIBS} ${onnxruntime_EXTERNAL_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
target_include_directories(${_UT_TARGET} PRIVATE ${TEST_INC_DIR})
if (WIN32)
if (onnxruntime_USE_CUDA)
# disable a warning from the CUDA headers about unreferenced local functions
if (MSVC)
target_compile_options(${_UT_TARGET} PRIVATE /wd4505)
endif()
endif()
target_compile_options(${_UT_TARGET} PRIVATE ${disabled_warnings})
else()
target_compile_options(${_UT_TARGET} PRIVATE ${DISABLED_WARNINGS_FOR_TVM})
endif()
set(TEST_ARGS)
if (onnxruntime_GENERATE_TEST_REPORTS)
# generate a report file next to the test program
list(APPEND TEST_ARGS
"--gtest_output=xml:$<SHELL_PATH:$<TARGET_FILE:${_UT_TARGET}>.$<CONFIG>.results.xml>")
endif(onnxruntime_GENERATE_TEST_REPORTS)
add_test(NAME ${_UT_TARGET}
COMMAND ${_UT_TARGET} ${TEST_ARGS}
WORKING_DIRECTORY $<TARGET_FILE_DIR:${_UT_TARGET}>
)
endfunction(AddTest)
#Check whether C++17 header file <filesystem> is present
include(CheckIncludeFiles)
check_include_files("filesystem" HAS_FILESYSTEM_H LANGUAGE CXX)
check_include_files("experimental/filesystem" HAS_EXPERIMENTAL_FILESYSTEM_H LANGUAGE CXX)
#Do not add '${TEST_SRC_DIR}/util/include' to your include directories directly
#Use onnxruntime_add_include_to_target or target_link_libraries, so that compile definitions
#can propagate correctly.
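# For example (hypothetical target name), prefer
#   onnxruntime_add_include_to_target(onnxruntime_test_foo onnxruntime_test_utils)
# over
#   target_include_directories(onnxruntime_test_foo PRIVATE "${TEST_SRC_DIR}/util/include")
# so that public definitions such as HAVE_FRAMEWORK_LIB travel with the include path.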
file(GLOB onnxruntime_test_utils_src
"${TEST_SRC_DIR}/util/include/*.h"
"${TEST_SRC_DIR}/util/*.cc"
)
file(GLOB onnxruntime_test_common_src
"${TEST_SRC_DIR}/common/*.cc"
"${TEST_SRC_DIR}/common/*.h"
"${TEST_SRC_DIR}/common/logging/*.cc"
"${TEST_SRC_DIR}/common/logging/*.h"
)
file(GLOB onnxruntime_test_ir_src
"${TEST_SRC_DIR}/ir/*.cc"
"${TEST_SRC_DIR}/ir/*.h"
)
set(onnxruntime_test_framework_src_patterns
"${TEST_SRC_DIR}/framework/*.cc"
"${TEST_SRC_DIR}/platform/*.cc"
)
if(WIN32)
list(APPEND onnxruntime_test_framework_src_patterns
"${TEST_SRC_DIR}/platform/windows/*.cc"
"${TEST_SRC_DIR}/platform/windows/logging/*.cc" )
endif()
if(onnxruntime_USE_CUDA)
list(APPEND onnxruntime_test_framework_src_patterns ${TEST_SRC_DIR}/framework/cuda/*)
endif()
set(onnxruntime_test_providers_src_patterns
"${TEST_SRC_DIR}/contrib_ops/*.h"
"${TEST_SRC_DIR}/contrib_ops/*.cc"
"${TEST_SRC_DIR}/providers/*.h"
"${TEST_SRC_DIR}/providers/*.cc"
"${TEST_SRC_DIR}/framework/TestAllocatorManager.cc"
"${TEST_SRC_DIR}/framework/TestAllocatorManager.h"
)
file(GLOB onnxruntime_test_providers_src ${onnxruntime_test_providers_src_patterns})
file(GLOB_RECURSE onnxruntime_test_providers_cpu_src
"${TEST_SRC_DIR}/providers/cpu/*"
)
list(APPEND onnxruntime_test_providers_src ${onnxruntime_test_providers_cpu_src})
# tests from lowest level library up.
# the order of libraries should be maintained, with higher libraries being added first in the list
set(onnxruntime_test_common_libs
onnxruntime_test_utils
onnxruntime_common
gtest
gmock
)
set(onnxruntime_test_ir_libs
onnxruntime_test_utils
onnxruntime_graph
onnx
onnx_proto
onnxruntime_common
protobuf::libprotobuf
gtest gmock
)
set(onnxruntime_test_framework_libs
onnxruntime_test_utils_for_framework
onnxruntime_session
onnxruntime_providers
onnxruntime_framework
onnxruntime_util
onnxruntime_graph
onnx
onnx_proto
onnxruntime_common
onnxruntime_mlas
protobuf::libprotobuf
gtest gmock
)
if(onnxruntime_USE_CUDA)
list(APPEND onnxruntime_test_framework_libs onnxruntime_providers_cuda)
endif()
if(onnxruntime_USE_MKLDNN)
list(APPEND onnxruntime_test_framework_libs onnxruntime_providers_mkldnn)
endif()
if(WIN32)
list(APPEND onnxruntime_test_framework_libs Advapi32)
elseif(HAS_FILESYSTEM_H OR HAS_EXPERIMENTAL_FILESYSTEM_H)
list(APPEND onnxruntime_test_framework_libs stdc++fs)
endif()
set(onnxruntime_test_providers_libs
onnxruntime_test_utils_for_framework
onnxruntime_session)
set (onnxruntime_test_providers_dependencies ${onnxruntime_EXTERNAL_DEPENDENCIES})
if(onnxruntime_USE_CUDA)
list(APPEND onnxruntime_test_providers_dependencies onnxruntime_providers_cuda)
endif()
if(onnxruntime_USE_MKLDNN)
list(APPEND onnxruntime_test_providers_dependencies onnxruntime_providers_mkldnn)
endif()
if( NOT WIN32 AND (HAS_FILESYSTEM_H OR HAS_EXPERIMENTAL_FILESYSTEM_H))
list(APPEND onnxruntime_test_providers_libs stdc++fs)
endif()
file(GLOB_RECURSE onnxruntime_test_tvm_src
"${ONNXRUNTIME_ROOT}/test/tvm/*.h"
"${ONNXRUNTIME_ROOT}/test/tvm/*.cc"
)
set(onnx_test_libs
onnxruntime_test_utils
onnxruntime_session)
if (onnxruntime_ENABLE_MICROSOFT_INTERNAL)
include(onnxruntime_unittests_internal.cmake)
endif()
list(APPEND onnxruntime_test_providers_libs
${PROVIDERS_CUDA}
${PROVIDERS_MKLDNN}
onnxruntime_providers
onnxruntime_framework
onnxruntime_util
onnxruntime_graph
onnx
onnx_proto
onnxruntime_common
onnxruntime_mlas
protobuf::libprotobuf
gtest gmock
)
if(WIN32)
if (onnxruntime_USE_TVM)
list(APPEND disabled_warnings ${DISABLED_WARNINGS_FOR_TVM})
endif()
endif()
file(GLOB onnxruntime_test_framework_src ${onnxruntime_test_framework_src_patterns})
add_library(onnxruntime_test_utils_for_framework ${onnxruntime_test_utils_src})
onnxruntime_add_include_to_target(onnxruntime_test_utils_for_framework gtest onnx protobuf::libprotobuf)
add_dependencies(onnxruntime_test_utils_for_framework ${onnxruntime_EXTERNAL_DEPENDENCIES} eigen)
target_include_directories(onnxruntime_test_utils_for_framework PUBLIC "${TEST_SRC_DIR}/util/include" PRIVATE ${eigen_INCLUDE_DIRS} ${ONNXRUNTIME_ROOT})
# Add the define for conditionally using the framework Environment class in TestEnvironment
target_compile_definitions(onnxruntime_test_utils_for_framework PUBLIC -DHAVE_FRAMEWORK_LIB)
if (SingleUnitTestProject)
add_library(onnxruntime_test_utils ALIAS onnxruntime_test_utils_for_framework)
else()
add_library(onnxruntime_test_utils ${onnxruntime_test_utils_src})
onnxruntime_add_include_to_target(onnxruntime_test_utils gtest onnx protobuf::libprotobuf)
add_dependencies(onnxruntime_test_utils ${onnxruntime_EXTERNAL_DEPENDENCIES} eigen)
target_include_directories(onnxruntime_test_utils PUBLIC "${TEST_SRC_DIR}/util/include" PRIVATE ${eigen_INCLUDE_DIRS})
endif()
if (SingleUnitTestProject)
set(all_tests ${onnxruntime_test_common_src} ${onnxruntime_test_ir_src} ${onnxruntime_test_framework_src} ${onnxruntime_test_providers_src})
set(all_libs onnxruntime_test_utils ${onnxruntime_test_providers_libs})
set(all_dependencies ${onnxruntime_test_providers_dependencies} )
if (onnxruntime_USE_TVM)
list(APPEND all_tests ${onnxruntime_test_tvm_src})
list(APPEND all_libs ${onnxruntime_tvm_libs})
list(APPEND all_dependencies ${onnxruntime_tvm_dependencies})
endif()
# we can only have one 'main', so remove them all and add back the providers test_main as it sets
# up everything we need for all tests
file(GLOB_RECURSE test_mains "${TEST_SRC_DIR}/*/test_main.cc")
list(REMOVE_ITEM all_tests ${test_mains})
list(APPEND all_tests "${TEST_SRC_DIR}/providers/test_main.cc")
# this is only added to onnxruntime_test_framework_libs above, but we use onnxruntime_test_providers_libs for the onnxruntime_test_all target.
# for now, add it here. better is probably to have onnxruntime_test_providers_libs use the full onnxruntime_test_framework_libs
# list given it's built on top of that library and needs all the same dependencies.
if(WIN32)
list(APPEND onnxruntime_test_providers_libs Advapi32)
endif()
AddTest(
TARGET onnxruntime_test_all
SOURCES ${all_tests}
LIBS ${all_libs} ${onnxruntime_test_common_libs}
DEPENDS ${all_dependencies}
)
# the default logger tests conflict with the need to have an overall default logger
# so skip them in this combined test target
target_compile_definitions(onnxruntime_test_all PUBLIC -DSKIP_DEFAULT_LOGGER_TESTS)
set(test_data_target onnxruntime_test_all)
else()
AddTest(
TARGET onnxruntime_test_common
SOURCES ${onnxruntime_test_common_src}
LIBS ${onnxruntime_test_common_libs}
DEPENDS ${onnxruntime_EXTERNAL_DEPENDENCIES}
)
AddTest(
TARGET onnxruntime_test_ir
SOURCES ${onnxruntime_test_ir_src}
LIBS ${onnxruntime_test_ir_libs}
DEPENDS ${onnxruntime_EXTERNAL_DEPENDENCIES}
)
AddTest(
TARGET onnxruntime_test_framework
SOURCES ${onnxruntime_test_framework_src}
LIBS ${onnxruntime_test_framework_libs}
# code smell! see if CPUExecutionProvider should move to framework so onnxruntime_providers isn't needed.
DEPENDS ${onnxruntime_test_providers_dependencies}
)
AddTest(
TARGET onnxruntime_test_providers
SOURCES ${onnxruntime_test_providers_src}
LIBS ${onnxruntime_test_providers_libs}
DEPENDS ${onnxruntime_test_providers_dependencies}
)
set(test_data_target onnxruntime_test_ir)
endif() # SingleUnitTestProject
#
# onnxruntime_ir_graph test data
#
set(TEST_DATA_SRC ${TEST_SRC_DIR}/testdata)
set(TEST_DATA_DES $<TARGET_FILE_DIR:${test_data_target}>/testdata)
# Copy test data from source to destination.
add_custom_command(
TARGET ${test_data_target} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_directory
${TEST_DATA_SRC}
${TEST_DATA_DES})
add_library(onnx_test_data_proto ${TEST_SRC_DIR}/proto/tml.proto)
if(HAS_NULL_DEREFERENCE)
target_compile_options(onnx_test_data_proto PRIVATE "-Wno-null-dereference")
endif()
if(WIN32)
target_compile_options(onnx_test_data_proto PRIVATE "/wd4125" "/wd4456")
endif()
add_dependencies(onnx_test_data_proto onnx_proto ${onnxruntime_EXTERNAL_DEPENDENCIES})
if(NOT WIN32)
if(HAS_UNUSED_PARAMETER)
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/tml.pb.cc PROPERTIES COMPILE_FLAGS -Wno-unused-parameter)
endif()
endif()
onnxruntime_add_include_to_target(onnx_test_data_proto onnx_proto protobuf::libprotobuf)
target_include_directories(onnx_test_data_proto PRIVATE ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR}/onnx)
set_target_properties(onnx_test_data_proto PROPERTIES FOLDER "ONNXRuntimeTest")
onnxruntime_protobuf_generate(APPEND_PATH IMPORT_DIRS ${ONNXRUNTIME_ROOT}/core/protobuf TARGET onnx_test_data_proto)
set(onnx_test_runner_src_dir ${TEST_SRC_DIR}/onnx)
set(onnx_test_runner_common_srcs
${onnx_test_runner_src_dir}/TestResultStat.cc
${onnx_test_runner_src_dir}/TestResultStat.h
${onnx_test_runner_src_dir}/testenv.h
${onnx_test_runner_src_dir}/FixedCountFinishCallback.h
${onnx_test_runner_src_dir}/TestCaseResult.cc
${onnx_test_runner_src_dir}/TestCaseResult.h
${onnx_test_runner_src_dir}/testenv.cc
${onnx_test_runner_src_dir}/runner.h
${onnx_test_runner_src_dir}/runner.cc
${onnx_test_runner_src_dir}/TestCase.cc
${onnx_test_runner_src_dir}/TestCase.h
${onnx_test_runner_src_dir}/path_lib.h
${onnx_test_runner_src_dir}/sync_api.h)
if(WIN32)
set(wide_get_opt_src_dir ${TEST_SRC_DIR}/win_getopt/wide)
add_library(win_getopt_wide ${wide_get_opt_src_dir}/getopt.cc ${wide_get_opt_src_dir}/include/getopt.h)
target_include_directories(win_getopt_wide INTERFACE ${wide_get_opt_src_dir}/include)
set_target_properties(win_getopt_wide PROPERTIES FOLDER "ONNXRuntimeTest")
set(mb_get_opt_src_dir ${TEST_SRC_DIR}/win_getopt/mb)
add_library(win_getopt_mb ${mb_get_opt_src_dir}/getopt.cc ${mb_get_opt_src_dir}/include/getopt.h)
target_include_directories(win_getopt_mb INTERFACE ${mb_get_opt_src_dir}/include)
set_target_properties(win_getopt_mb PROPERTIES FOLDER "ONNXRuntimeTest")
set(onnx_test_runner_common_srcs ${onnx_test_runner_common_srcs} ${onnx_test_runner_src_dir}/sync_api_win.cc)
set(GETOPT_LIB_WIDE win_getopt_wide)
set(GETOPT_LIB win_getopt_mb)
else()
set(onnx_test_runner_common_srcs ${onnx_test_runner_common_srcs} ${onnx_test_runner_src_dir}/onnxruntime_event.h ${onnx_test_runner_src_dir}/simple_thread_pool.h ${onnx_test_runner_src_dir}/sync_api_linux.cc)
if(HAS_FILESYSTEM_H OR HAS_EXPERIMENTAL_FILESYSTEM_H)
set(FS_STDLIB stdc++fs)
endif()
endif()
add_library(onnx_test_runner_common ${onnx_test_runner_common_srcs})
onnxruntime_add_include_to_target(onnx_test_runner_common onnxruntime_test_utils onnx protobuf::libprotobuf)
add_dependencies(onnx_test_runner_common eigen onnx_test_data_proto ${onnxruntime_EXTERNAL_DEPENDENCIES})
target_include_directories(onnx_test_runner_common PRIVATE ${eigen_INCLUDE_DIRS} ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR}/onnx ${ONNXRUNTIME_ROOT})
set_target_properties(onnx_test_runner_common PROPERTIES FOLDER "ONNXRuntimeTest")
if(onnxruntime_USE_CUDA)
set(onnx_cuda_test_libs onnxruntime_providers_cuda)
endif()
if(onnxruntime_USE_MKLDNN)
set(onnx_mkldnn_test_libs onnxruntime_providers_mkldnn)
endif()
list(APPEND onnx_test_libs
${onnx_cuda_test_libs}
${onnxruntime_tvm_libs}
${onnx_mkldnn_test_libs}
onnxruntime_providers
onnxruntime_framework
onnxruntime_util
onnxruntime_graph
onnx
onnx_proto
onnxruntime_common
onnxruntime_mlas
onnx_test_data_proto
${FS_STDLIB}
${onnxruntime_EXTERNAL_LIBRARIES}
${ONNXRUNTIME_CUDA_LIBRARIES}
${CMAKE_THREAD_LIBS_INIT}
)
if(WIN32)
list(APPEND onnx_test_libs Pathcch)
endif()
if (onnxruntime_USE_OPENBLAS)
if (WIN32)
list(APPEND onnx_test_libs ${onnxruntime_OPENBLAS_HOME}/lib/libopenblas.lib)
else()
list(APPEND onnx_test_libs openblas)
endif()
endif()
if (onnxruntime_USE_MKLDNN)
list(APPEND onnx_test_libs mkldnn)
add_custom_command(
TARGET ${test_data_target} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${MKLDNN_LIB_DIR}/${MKLDNN_SHARED_LIB} $<TARGET_FILE_DIR:${test_data_target}>
)
endif()
if (onnxruntime_USE_MKLML)
add_custom_command(
TARGET ${test_data_target} POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy
${MKLDNN_LIB_DIR}/${MKLML_SHARED_LIB} ${MKLDNN_LIB_DIR}/${IOMP5MD_SHARED_LIB}
$<TARGET_FILE_DIR:${test_data_target}>
)
endif()
add_executable(onnx_test_runner ${onnx_test_runner_src_dir}/main.cc)
target_link_libraries(onnx_test_runner PRIVATE onnx_test_runner_common ${onnx_test_libs} ${GETOPT_LIB_WIDE})
target_include_directories(onnx_test_runner PRIVATE ${ONNXRUNTIME_ROOT})
set_target_properties(onnx_test_runner PROPERTIES FOLDER "ONNXRuntimeTest")
install(TARGETS onnx_test_runner
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})
if(onnxruntime_BUILD_BENCHMARKS AND (HAS_FILESYSTEM_H OR HAS_EXPERIMENTAL_FILESYSTEM_H))
add_executable(onnxruntime_benchmark ${TEST_SRC_DIR}/onnx/microbenchmark/main.cc ${TEST_SRC_DIR}/onnx/microbenchmark/modeltest.cc)
target_include_directories(onnxruntime_benchmark PRIVATE ${ONNXRUNTIME_ROOT} ${onnxruntime_graph_header} benchmark)
target_compile_options(onnxruntime_benchmark PRIVATE "/wd4141")
target_link_libraries(onnxruntime_benchmark PRIVATE ${onnx_test_libs} onnx_test_runner_common benchmark)
add_dependencies(onnxruntime_benchmark ${onnxruntime_EXTERNAL_DEPENDENCIES})
set_target_properties(onnxruntime_benchmark PROPERTIES FOLDER "ONNXRuntimeTest")
endif()
if(WIN32)
set(DISABLED_WARNINGS_FOR_PROTOBUF "/wd4125" "/wd4456" "/wd4505")
target_compile_options(onnx_test_runner_common PRIVATE ${DISABLED_WARNINGS_FOR_PROTOBUF} -D_CRT_SECURE_NO_WARNINGS)
target_compile_options(onnx_test_runner PRIVATE ${DISABLED_WARNINGS_FOR_PROTOBUF})
endif()
set(onnxruntime_exec_src_dir ${TEST_SRC_DIR}/onnxruntime_exec)
file(GLOB onnxruntime_exec_src
"${onnxruntime_exec_src_dir}/*.cc"
"${onnxruntime_exec_src_dir}/*.h"
)
add_executable(onnxruntime_exec ${onnxruntime_exec_src})
target_include_directories(onnxruntime_exec PRIVATE ${ONNXRUNTIME_ROOT})
# we need to force these dependencies to build first. just using target_link_libraries isn't sufficient
add_dependencies(onnxruntime_exec ${onnxruntime_EXTERNAL_DEPENDENCIES})
target_link_libraries(onnxruntime_exec PRIVATE ${onnx_test_libs})
set_target_properties(onnxruntime_exec PROPERTIES FOLDER "ONNXRuntimeTest")
add_test(NAME onnx_test_pytorch_converted
COMMAND onnx_test_runner ${PROJECT_SOURCE_DIR}/external/onnx/onnx/backend/test/data/pytorch-converted)
add_test(NAME onnx_test_pytorch_operator
COMMAND onnx_test_runner ${PROJECT_SOURCE_DIR}/external/onnx/onnx/backend/test/data/pytorch-operator)
if(HAS_FILESYSTEM_H OR HAS_EXPERIMENTAL_FILESYSTEM_H)
set(onnxruntime_perf_test_src_dir ${TEST_SRC_DIR}/perftest)
set(onnxruntime_perf_test_src_patterns
"${onnxruntime_perf_test_src_dir}/*.cc"
"${onnxruntime_perf_test_src_dir}/*.h")
if(WIN32)
list(APPEND onnxruntime_perf_test_src_patterns
"${onnxruntime_perf_test_src_dir}/windows/*.cc"
"${onnxruntime_perf_test_src_dir}/windows/*.h" )
else ()
list(APPEND onnxruntime_perf_test_src_patterns
"${onnxruntime_perf_test_src_dir}/posix/*.cc"
"${onnxruntime_perf_test_src_dir}/posix/*.h" )
endif()
file(GLOB onnxruntime_perf_test_src ${onnxruntime_perf_test_src_patterns})
add_executable(onnxruntime_perf_test ${onnxruntime_perf_test_src})
target_include_directories(onnxruntime_perf_test PRIVATE ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS} ${extra_includes} ${onnxruntime_graph_header} ${onnxruntime_exec_src_dir} ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_CURRENT_BINARY_DIR}/onnx)
if (WIN32)
target_compile_options(onnxruntime_perf_test PRIVATE ${disabled_warnings})
endif()
target_link_libraries(onnxruntime_perf_test PRIVATE ${onnx_test_libs} ${GETOPT_LIB})
set_target_properties(onnxruntime_perf_test PROPERTIES FOLDER "ONNXRuntimeTest")
endif()
# shared lib
if (onnxruntime_BUILD_SHARED_LIB)
if (UNIX)
# test custom op shared lib
file(GLOB onnxruntime_custom_op_shared_lib_test_srcs "${ONNXRUNTIME_ROOT}/test/custom_op_shared_lib/test_custom_op.cc")
add_library(onnxruntime_custom_op_shared_lib_test SHARED ${onnxruntime_custom_op_shared_lib_test_srcs})
add_dependencies(onnxruntime_custom_op_shared_lib_test onnx_proto ${onnxruntime_EXTERNAL_DEPENDENCIES})
target_include_directories(onnxruntime_custom_op_shared_lib_test PUBLIC "${PROJECT_SOURCE_DIR}/include")
target_link_libraries(onnxruntime_custom_op_shared_lib_test PRIVATE onnxruntime onnx onnx_proto protobuf::libprotobuf)
set_target_properties(onnxruntime_custom_op_shared_lib_test PROPERTIES FOLDER "ONNXRuntimeSharedLibTest")
set(ONNX_DLL onnxruntime)
else()
set(ONNX_DLL onnxruntime)
endif()
#################################################################
# test inference using shared lib + custom op
# this program shouldn't have a direct dependency on CUDA
# CUDA is part of ${ONNX_DLL}
set (ONNXRUNTIME_SHARED_LIB_TEST_SRC_DIR "${ONNXRUNTIME_ROOT}/test/shared_lib")
add_executable(onnxruntime_shared_lib_test
${ONNXRUNTIME_ROOT}/test/util/test_allocator.cc
${ONNXRUNTIME_SHARED_LIB_TEST_SRC_DIR}/test_fixture.h
${ONNXRUNTIME_SHARED_LIB_TEST_SRC_DIR}/test_inference.cc
${ONNXRUNTIME_SHARED_LIB_TEST_SRC_DIR}/test_session_options.cc
${ONNXRUNTIME_SHARED_LIB_TEST_SRC_DIR}/test_run_options.cc
${ONNXRUNTIME_SHARED_LIB_TEST_SRC_DIR}/test_allocator.cc)
onnxruntime_add_include_to_target(onnxruntime_shared_lib_test onnxruntime_test_utils)
target_include_directories(onnxruntime_shared_lib_test PRIVATE "${TEST_SRC_DIR}/util/include" "${PROJECT_SOURCE_DIR}/include")
if(WIN32)
target_compile_definitions(onnxruntime_shared_lib_test PRIVATE ONNX_RUNTIME_DLL_IMPORT)
endif()
target_link_libraries(onnxruntime_shared_lib_test PRIVATE ${ONNX_DLL} onnx onnx_proto gtest)
set_target_properties(onnxruntime_shared_lib_test PROPERTIES FOLDER "ONNXRuntimeSharedLibTest")
add_test(NAME onnxruntime_shared_lib_test COMMAND onnxruntime_shared_lib_test WORKING_DIRECTORY $<TARGET_FILE_DIR:onnxruntime_shared_lib_test>)
#demo
if(PNG_FOUND)
add_executable(fns_candy_style_transfer "${ONNXRUNTIME_ROOT}/test/shared_lib/fns_candy_style_transfer.c")
target_include_directories(fns_candy_style_transfer PRIVATE "${TEST_SRC_DIR}/util/include" ${PNG_INCLUDE_DIRS})
target_link_libraries(fns_candy_style_transfer PRIVATE ${ONNX_DLL} ${PNG_LIBRARIES})
set_target_properties(fns_candy_style_transfer PROPERTIES FOLDER "ONNXRuntimeTest")
endif()
endif()
add_executable(onnxruntime_mlas_test ${TEST_SRC_DIR}/mlas/unittest.cpp)
target_include_directories(onnxruntime_mlas_test PRIVATE ${ONNXRUNTIME_ROOT}/core/mlas/inc)
target_link_libraries(onnxruntime_mlas_test PRIVATE onnxruntime_mlas)
set_target_properties(onnxruntime_mlas_test PROPERTIES FOLDER "ONNXRuntimeTest")
if (onnxruntime_ENABLE_MICROSOFT_INTERNAL)
include(onnxruntime_standalone_tests_internal.cmake)
endif()

View file

@@ -0,0 +1,20 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
file(GLOB_RECURSE onnxruntime_util_srcs
"${ONNXRUNTIME_ROOT}/core/util/*.h"
"${ONNXRUNTIME_ROOT}/core/util/*.cc"
)
source_group(TREE ${ONNXRUNTIME_ROOT}/core FILES ${onnxruntime_util_srcs})
add_library(onnxruntime_util ${onnxruntime_util_srcs})
target_include_directories(onnxruntime_util PRIVATE ${ONNXRUNTIME_ROOT} ${eigen_INCLUDE_DIRS})
onnxruntime_add_include_to_target(onnxruntime_util onnx protobuf::libprotobuf)
set_target_properties(onnxruntime_util PROPERTIES LINKER_LANGUAGE CXX)
set_target_properties(onnxruntime_util PROPERTIES FOLDER "ONNXRuntime")
add_dependencies(onnxruntime_util ${onnxruntime_EXTERNAL_DEPENDENCIES} eigen)
if (WIN32)
target_compile_definitions(onnxruntime_util PRIVATE _SCL_SECURE_NO_WARNINGS)
target_compile_definitions(onnxruntime_framework PRIVATE _SCL_SECURE_NO_WARNINGS)
endif()

1
cmake/patches/.gitattributes vendored Normal file
View file

@@ -0,0 +1 @@
*.patch text eol=lf

View file

@@ -0,0 +1,6 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
cmake_minimum_required(VERSION 2.8.3)
project(cub)

View file

@@ -0,0 +1,136 @@
diff --git a/src/cpu/gemm/jit_avx2_gemm_f32.cpp b/src/cpu/gemm/jit_avx2_gemm_f32.cpp
index bf03c57..46793e7 100644
--- a/src/cpu/gemm/jit_avx2_gemm_f32.cpp
+++ b/src/cpu/gemm/jit_avx2_gemm_f32.cpp
@@ -2349,13 +2349,18 @@ void jit_avx2_gemm_f32::sgemm(const char *transa, const char *transb,
nthr_mn = nthr_m * nthr_n;
- unsigned int volatile *ompstatus = (unsigned int volatile *)ompstatus_;
- if (!ompstatus) return;
+ unsigned int *ompstatus_ = nullptr;
+ unsigned int volatile *ompstatus = nullptr;
float *c_buffers = NULL;
float *ws_buffers = NULL;
if (nthr_k > 1) {
+ ompstatus_ = (unsigned int *)malloc(
+ sizeof(unsigned int *) * nthrs_ * CACHE_LINE_SIZE, 64);
+ ompstatus = (unsigned int volatile *)ompstatus_;
+ assert(ompstatus);
+
for (int i = 0; i < nthr; i++)
ompstatus[i * CACHE_LINE_SIZE] = 0;
@@ -2486,8 +2491,10 @@ void jit_avx2_gemm_f32::sgemm(const char *transa, const char *transb,
}
}
- if (nthr_k > 1)
+ if (nthr_k > 1) {
free(c_buffers);
+ free(ompstatus_);
+ }
free(ws_buffers);
}
@@ -2513,9 +2520,6 @@ jit_avx2_gemm_f32::jit_avx2_gemm_f32(
ker_b0_ = ker_bn_;
}
nthrs_ = omp_get_max_threads();
- ompstatus_ = (unsigned int *)malloc(
- sizeof(unsigned int *) * nthrs_ * CACHE_LINE_SIZE, 64);
- assert(ompstatus_);
}
jit_avx2_gemm_f32::~jit_avx2_gemm_f32()
@@ -2525,7 +2529,6 @@ jit_avx2_gemm_f32::~jit_avx2_gemm_f32()
delete ker_b1_;
if (beta_ != 0.0 || (beta_ == 0.0 && hasBias_))
delete ker_b0_;
- free(ompstatus_);
}
}
diff --git a/src/cpu/gemm/jit_avx2_gemm_f32.hpp b/src/cpu/gemm/jit_avx2_gemm_f32.hpp
index 7adb2a2..ebbbde0 100644
--- a/src/cpu/gemm/jit_avx2_gemm_f32.hpp
+++ b/src/cpu/gemm/jit_avx2_gemm_f32.hpp
@@ -49,7 +49,6 @@ private:
bool hasBias_;
struct xbyak_gemm;
xbyak_gemm *ker_bn_, *ker_b1_, *ker_b0_;
- unsigned int *ompstatus_;
int nthrs_;
};
}
diff --git a/src/cpu/gemm/jit_avx512_common_gemm_f32.cpp b/src/cpu/gemm/jit_avx512_common_gemm_f32.cpp
index 7959195..fca14f4 100644
--- a/src/cpu/gemm/jit_avx512_common_gemm_f32.cpp
+++ b/src/cpu/gemm/jit_avx512_common_gemm_f32.cpp
@@ -1866,14 +1866,18 @@ void jit_avx512_common_gemm_f32::sgemm(const char *transa, const char *transb,
nthr = nthr_m * nthr_n * nthr_k;
nthr_mn = nthr_m * nthr_n;
-
- unsigned int volatile *ompstatus = (unsigned int volatile *)ompstatus_;
- if (!ompstatus) return;
+
+ unsigned int *ompstatus_ = nullptr;
+ unsigned int volatile *ompstatus = nullptr;
float *c_buffers = NULL;
float *ws_buffers = NULL;
if (nthr_k > 1) {
+ ompstatus_ = (unsigned int *)malloc(
+ sizeof(unsigned int *) * nthrs_ * CACHE_LINE_SIZE, 64);
+ ompstatus = (unsigned int volatile *)ompstatus_;
+ assert(ompstatus);
for (int i = 0; i < nthr; i++)
ompstatus[i * CACHE_LINE_SIZE] = 0;
@@ -2004,8 +2008,10 @@ void jit_avx512_common_gemm_f32::sgemm(const char *transa, const char *transb,
}
}
- if (nthr_k > 1)
+ if (nthr_k > 1) {
free(c_buffers);
+ free(ompstatus_);
+ }
free(ws_buffers);
}
@@ -2032,10 +2038,6 @@ jit_avx512_common_gemm_f32::jit_avx512_common_gemm_f32(
}
nthrs_ = omp_get_max_threads();
- ompstatus_ = (unsigned int *)malloc(
- sizeof(unsigned int *) * nthrs_ * CACHE_LINE_SIZE, 64);
- assert(ompstatus_);
-
}
jit_avx512_common_gemm_f32::~jit_avx512_common_gemm_f32()
@@ -2045,7 +2047,6 @@ jit_avx512_common_gemm_f32::~jit_avx512_common_gemm_f32()
delete ker_b1_;
if (beta_ != 0.0 || (beta_ == 0.0 && hasBias_))
delete ker_b0_;
- free(ompstatus_);
}
}
}
diff --git a/src/cpu/gemm/jit_avx512_common_gemm_f32.hpp b/src/cpu/gemm/jit_avx512_common_gemm_f32.hpp
index ede1cf9..c057335 100644
--- a/src/cpu/gemm/jit_avx512_common_gemm_f32.hpp
+++ b/src/cpu/gemm/jit_avx512_common_gemm_f32.hpp
@@ -49,7 +49,6 @@ private:
bool hasBias_;
struct xbyak_gemm;
xbyak_gemm *ker_bn_, *ker_b1_, *ker_b0_;
- unsigned int *ompstatus_;
int nthrs_;
};
}

View file

@@ -0,0 +1,14 @@
diff --git a/cmake/platform.cmake b/cmake/platform.cmake
index fa51aa7..3d24fdc 100644
--- a/cmake/platform.cmake
+++ b/cmake/platform.cmake
@@ -64,9 +64,6 @@ elseif(UNIX OR APPLE OR MINGW)
# unconditionnaly.
set(CMAKE_CCXX_FLAGS "${CMAKE_CCXX_FLAGS} -Wno-pass-failed")
elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
- if(NOT CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.0)
- set(DEF_ARCH_OPT_FLAGS "-march=native -mtune=native")
- endif()
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 6.0)
# suppress warning on assumptions made regarding overflow (#146)
set(CMAKE_CCXX_FLAGS "${CMAKE_CCXX_FLAGS} -Wno-strict-overflow")

View file

@@ -0,0 +1,27 @@
diff --git a/src/google/protobuf/compiler/cpp/cpp_file.cc b/src/google/protobuf/compiler/cpp/cpp_file.cc
index a066a6a7..636a864f 100644
--- a/src/google/protobuf/compiler/cpp/cpp_file.cc
+++ b/src/google/protobuf/compiler/cpp/cpp_file.cc
@@ -972,6 +972,11 @@ void FileGenerator::GenerateTopHeaderGuard(io::Printer* printer,
"#ifndef PROTOBUF_$filename_identifier$__INCLUDED\n"
"#define PROTOBUF_$filename_identifier$__INCLUDED\n"
"\n"
+ "#ifdef _MSC_VER\n"
+ "#pragma warning(push)\n"
+ "#pragma warning(disable: 4800)\n"
+ "#endif // _MSC_VER\n"
+ "\n"
"#include <string>\n",
"filename", file_->name(), "filename_identifier", filename_identifier);
printer->Print("\n");
@@ -980,6 +985,10 @@ void FileGenerator::GenerateTopHeaderGuard(io::Printer* printer,
void FileGenerator::GenerateBottomHeaderGuard(
io::Printer* printer, const string& filename_identifier) {
printer->Print(
+ "#ifdef _MSC_VER\n"
+ "#pragma warning(pop)\n"
+ "#endif // _MSC_VER\n"
+ "\n"
"#endif // PROTOBUF_$filename_identifier$__INCLUDED\n",
"filename_identifier", filename_identifier);
}

View file

@@ -0,0 +1,148 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#Changelog:
#copied from https://github.com/protocolbuffers/protobuf/blob/master/cmake/protobuf-config.cmake.in
#sed -i 's/protobuf_generate/onnxruntime_protobuf_generate/g' protobuf-config.cmake.orig
#replace 'protobuf::protoc' with ${PROTOC_EXECUTABLE} and ${PROTOC_DEPS}
#remove OUTDIR
function(onnxruntime_protobuf_generate)
include(CMakeParseArguments)
if(EXISTS "${ONNX_CUSTOM_PROTOC_EXECUTABLE}")
set(PROTOC_EXECUTABLE ${ONNX_CUSTOM_PROTOC_EXECUTABLE})
else()
set(PROTOC_EXECUTABLE $<TARGET_FILE:protobuf::protoc>)
set(PROTOC_DEPS protobuf::protoc)
endif()
set(_options APPEND_PATH)
set(_singleargs LANGUAGE OUT_VAR EXPORT_MACRO)
if(COMMAND target_sources)
list(APPEND _singleargs TARGET)
endif()
set(_multiargs PROTOS IMPORT_DIRS GENERATE_EXTENSIONS)
cmake_parse_arguments(onnxruntime_protobuf_generate "${_options}" "${_singleargs}" "${_multiargs}" "${ARGN}")
if(NOT onnxruntime_protobuf_generate_PROTOS AND NOT onnxruntime_protobuf_generate_TARGET)
message(SEND_ERROR "Error: onnxruntime_protobuf_generate called without any targets or source files")
return()
endif()
if(NOT onnxruntime_protobuf_generate_OUT_VAR AND NOT onnxruntime_protobuf_generate_TARGET)
message(SEND_ERROR "Error: onnxruntime_protobuf_generate called without a target or output variable")
return()
endif()
if(NOT onnxruntime_protobuf_generate_LANGUAGE)
set(onnxruntime_protobuf_generate_LANGUAGE cpp)
endif()
string(TOLOWER ${onnxruntime_protobuf_generate_LANGUAGE} onnxruntime_protobuf_generate_LANGUAGE)
if(onnxruntime_protobuf_generate_EXPORT_MACRO AND onnxruntime_protobuf_generate_LANGUAGE STREQUAL cpp)
set(_dll_export_decl "dllexport_decl=${onnxruntime_protobuf_generate_EXPORT_MACRO}:")
endif()
if(NOT onnxruntime_protobuf_generate_EXTENSIONS)
if(onnxruntime_protobuf_generate_LANGUAGE STREQUAL cpp)
set(onnxruntime_protobuf_generate_EXTENSIONS .pb.h .pb.cc)
elseif(onnxruntime_protobuf_generate_LANGUAGE STREQUAL python)
set(onnxruntime_protobuf_generate_EXTENSIONS _pb2.py)
else()
message(SEND_ERROR "Error: onnxruntime_protobuf_generate given unknown Language ${LANGUAGE}, please provide a value for GENERATE_EXTENSIONS")
return()
endif()
endif()
if(onnxruntime_protobuf_generate_TARGET)
get_target_property(_source_list ${onnxruntime_protobuf_generate_TARGET} SOURCES)
foreach(_file ${_source_list})
if(_file MATCHES "proto$")
list(APPEND onnxruntime_protobuf_generate_PROTOS ${_file})
endif()
endforeach()
endif()
if(NOT onnxruntime_protobuf_generate_PROTOS)
message(SEND_ERROR "Error: onnxruntime_protobuf_generate could not find any .proto files")
return()
endif()
if(onnxruntime_protobuf_generate_APPEND_PATH)
# Create an include path for each file specified
foreach(_file ${onnxruntime_protobuf_generate_PROTOS})
get_filename_component(_abs_file ${_file} ABSOLUTE)
get_filename_component(_abs_path ${_abs_file} PATH)
list(FIND _protobuf_include_path ${_abs_path} _contains_already)
if(${_contains_already} EQUAL -1)
list(APPEND _protobuf_include_path -I ${_abs_path})
endif()
endforeach()
else()
set(_protobuf_include_path -I ${CMAKE_CURRENT_SOURCE_DIR})
endif()
foreach(DIR ${onnxruntime_protobuf_generate_IMPORT_DIRS})
get_filename_component(ABS_PATH ${DIR} ABSOLUTE)
list(FIND _protobuf_include_path ${ABS_PATH} _contains_already)
if(${_contains_already} EQUAL -1)
list(APPEND _protobuf_include_path -I ${ABS_PATH})
endif()
endforeach()
set(_generated_srcs_all)
foreach(_proto ${onnxruntime_protobuf_generate_PROTOS})
get_filename_component(_abs_file ${_proto} ABSOLUTE)
get_filename_component(_basename ${_proto} NAME_WE)
set(_generated_srcs)
foreach(_ext ${onnxruntime_protobuf_generate_EXTENSIONS})
list(APPEND _generated_srcs "${CMAKE_CURRENT_BINARY_DIR}/${_basename}${_ext}")
endforeach()
list(APPEND _generated_srcs_all ${_generated_srcs})
add_custom_command(
OUTPUT ${_generated_srcs}
COMMAND ${PROTOC_EXECUTABLE}
ARGS --${onnxruntime_protobuf_generate_LANGUAGE}_out ${_dll_export_decl}${CMAKE_CURRENT_BINARY_DIR} ${_protobuf_include_path} ${_abs_file}
DEPENDS ${_abs_file} ${PROTOC_DEPS}
COMMENT "Running ${onnxruntime_protobuf_generate_LANGUAGE} protocol buffer compiler on ${_proto}"
VERBATIM )
endforeach()
set_source_files_properties(${_generated_srcs_all} PROPERTIES GENERATED TRUE)
if(onnxruntime_protobuf_generate_OUT_VAR)
set(${onnxruntime_protobuf_generate_OUT_VAR} ${_generated_srcs_all} PARENT_SCOPE)
endif()
if(onnxruntime_protobuf_generate_TARGET)
target_sources(${onnxruntime_protobuf_generate_TARGET} PRIVATE ${_generated_srcs_all})
endif()
endfunction()
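For reference, a minimal usage sketch of the helper above (target and file names are hypothetical): list the .proto file as a source of an existing target and let the helper run protoc and attach the generated C++ files, mirroring how onnx_test_data_proto is wired up in onnxruntime_unittests.cmake.

add_library(my_proto_lib ${CMAKE_CURRENT_SOURCE_DIR}/my_messages.proto)
onnxruntime_protobuf_generate(APPEND_PATH TARGET my_proto_lib)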

View file

@@ -0,0 +1,30 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.1</TargetFramework>
<NativeBuildOutputDir>..\..\build\Windows\$(Configuration)\$(Configuration)</NativeBuildOutputDir>
</PropertyGroup>
<ItemGroup>
<None Include="$(NativeBuildOutputDir)\onnxruntime.???">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
<None Include="$(NativeBuildOutputDir)\mkldnn.dll">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
<None Include="testdata\*">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj" />
</ItemGroup>
</Project>

View file

@@ -0,0 +1,88 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using Microsoft.ML.OnnxRuntime;
using System.Numerics.Tensors;
namespace CSharpUsage
{
class Program
{
public static void Main(string[] args)
{
Console.WriteLine("Using API");
UseApi();
Console.WriteLine("Done");
}
static void UseApi()
{
string modelPath = Directory.GetCurrentDirectory() + @"\testdata\squeezenet.onnx";
using (var session = new InferenceSession(modelPath))
{
var inputMeta = session.InputMetadata;
// User should be able to detect input name/type/shape from the metadata.
// Currently the InputMetadata implementation is incomplete, so assume a Tensor<float> of a predefined shape.
var shape0 = new int[] { 1, 3, 224, 224 };
float[] inputData0 = LoadInputsFloat();
var tensor = new DenseTensor<float>(inputData0, shape0);
var container = new List<NamedOnnxValue>();
container.Add(new NamedOnnxValue("data_0", tensor));
// Run the inference
var results = session.Run(container); // results is an IReadOnlyList<NamedOnnxValue> container
// dump the results
foreach (var r in results)
{
Console.WriteLine("Output for {0}", r.Name);
Console.WriteLine(r.AsTensor<float>().GetArrayString());
}
// Just try some GC collect
results = null;
container = null;
GC.Collect();
GC.WaitForPendingFinalizers();
}
}
static int[] LoadInputsInt32()
{
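// Placeholder; this sample only exercises float inputs.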
return null;
}
static float[] LoadInputsFloat()
{
// input: data_0 = float32[1,3,224,224] for squeezenet model
// output: softmaxout_1 = float32[1,1000,1,1]
uint size = 1 * 3 * 224 * 224;
float[] tensor = new float[size];
// read data from file
using (var inputFile = new System.IO.StreamReader(@"testdata\bench.in"))
{
inputFile.ReadLine(); //skip the input name
string[] dataStr = inputFile.ReadLine().Split(new char[] { ',', '[', ']' }, StringSplitOptions.RemoveEmptyEntries);
for (int i = 0; i < dataStr.Length; i++)
{
tensor[i] = Single.Parse(dataStr[i]);
}
}
return tensor;
}
}
}

2
csharp/CSharpUsage/testdata/bench.expected_out vendored Normal file

File diff suppressed because one or more lines are too long

2
csharp/CSharpUsage/testdata/bench.in vendored Normal file

File diff suppressed because one or more lines are too long

Binary data
csharp/CSharpUsage/testdata/squeezenet.onnx vendored Normal file

Binary file not shown.

View File

@ -0,0 +1,7 @@
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="NuGetOrg" value="https://api.nuget.org/v3/index.json" />
</packageSources>
</configuration>

View File

@ -0,0 +1,49 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
This is the master msbuild project file for all csharp components.
It exists so that the NuGet dependencies are restored before the projects are built during a CI build.
CMake creates a build target for this project.
-->
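<!--
  Illustrative local invocation (the project file name is assumed here, it is not stated in this file):
    msbuild OnnxRuntime.CSharp.proj /p:Configuration=Debug
  This runs the RestoreProjects, Build and RunTest targets in that order.
-->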
<Project DefaultTargets="Build">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<OutputPath>bin\$(Platform)\$(Configuration)\</OutputPath>
</PropertyGroup>
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<Target Name="RestoreProjects" BeforeTargets="Build">
<Message Importance="High" Text="Restoring NuGet packages for CSharp projects..." />
<MSBuild Projects="src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj"
Targets="Restore"
Properties="RestoreConfigFile=$(MSBuildThisFileDirectory)\NuGet.CSharp.config;MSBuildWarningsAsMessages=NU1503;RestoreIgnoreFailedSource=true"
/>
<MSBuild Projects="sample\Microsoft.ML.OnnxRuntime.InferenceSample\Microsoft.ML.OnnxRuntime.InferenceSample.csproj"
Targets="Restore"
Properties="RestoreConfigFile=$(MSBuildThisFileDirectory)\NuGet.CSharp.config;MSBuildWarningsAsMessages=NU1503;RestoreIgnoreFailedSource=true"
/>
<MSBuild Projects="test\Microsoft.ML.OnnxRuntime.Tests\Microsoft.ML.OnnxRuntime.Tests.csproj"
Targets="Restore"
Properties="RestoreConfigFile=$(MSBuildThisFileDirectory)\NuGet.CSharp.config;MSBuildWarningsAsMessages=NU1503;RestoreIgnoreFailedSource=true"
/>
</Target>
<Target Name="Build">
<Message Importance="High" Text="Building CSharp projects..." />
<MSBuild Projects="src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj"
Targets="Build" />
<MSBuild Projects="sample\Microsoft.ML.OnnxRuntime.InferenceSample\Microsoft.ML.OnnxRuntime.InferenceSample.csproj"
Targets="Build" />
<MSBuild Projects="test\Microsoft.ML.OnnxRuntime.Tests\Microsoft.ML.OnnxRuntime.Tests.csproj"
Targets="Build" />
</Target>
<Target Name="RunTest" AfterTargets="Build">
<Message Importance="High" Text="Running CSharp tests..." />
<Exec Command="dotnet test test\Microsoft.ML.OnnxRuntime.Tests\Microsoft.ML.OnnxRuntime.Tests.csproj -c $(Configuration) --no-build" />
</Target>
</Project>

View File

@ -0,0 +1,37 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.28010.2003
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Microsoft.ML.OnnxRuntime", "src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj", "{584B53B3-359D-4DC2-BCD8-530B5D4685AD}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Microsoft.ML.OnnxRuntime.InferenceSample", "sample\Microsoft.ML.OnnxRuntime.InferenceSample\Microsoft.ML.OnnxRuntime.InferenceSample.csproj", "{1AA14958-9246-4163-9403-F650E65ADCBC}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Microsoft.ML.OnnxRuntime.Tests", "test\Microsoft.ML.OnnxRuntime.Tests\Microsoft.ML.OnnxRuntime.Tests.csproj", "{50173D13-DF29-42E7-A30B-8B12D36C77B1}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Debug|Any CPU.Build.0 = Debug|Any CPU
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Release|Any CPU.ActiveCfg = Release|Any CPU
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Release|Any CPU.Build.0 = Release|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Debug|Any CPU.Build.0 = Debug|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Release|Any CPU.ActiveCfg = Release|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Release|Any CPU.Build.0 = Release|Any CPU
{50173D13-DF29-42E7-A30B-8B12D36C77B1}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{50173D13-DF29-42E7-A30B-8B12D36C77B1}.Debug|Any CPU.Build.0 = Debug|Any CPU
{50173D13-DF29-42E7-A30B-8B12D36C77B1}.Release|Any CPU.ActiveCfg = Release|Any CPU
{50173D13-DF29-42E7-A30B-8B12D36C77B1}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {C3DBDA2B-F169-4EDE-9353-858904124B75}
EndGlobalSection
EndGlobal

32
csharp/OnnxRuntime.proj Normal file
View File

@ -0,0 +1,32 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
This is the master msbuild project file for all csharp components.
It exists so that the NuGet dependencies are restored before the projects are built during a CI build.
CMake creates a build target for this project.
-->
<Project DefaultTargets="BuildProjects">
<Target Name="RestoreProjects" BeforeTargets="BuildProjects">
<Message Importance="High" Text="Restoring NuGet packages for CSharp projects..." />
<MSBuild Projects="OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj"
Targets="Restore"
Properties="MSBuildWarningsAsMessages=NU1503" />
<MSBuild Projects="CSharpUsage\CSharpUsage.csproj"
Targets="Restore"
Properties="MSBuildWarningsAsMessages=NU1503" />
</Target>
<Target Name="BuildProjects">
<Message Importance="High" Text="Building CSharp projects..." />
<MSBuild Projects="OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj"
Targets="Build" />
<MSBuild Projects="CSharpUsage\CSharpUsage.csproj"
Targets="Build" />
</Target>
</Project>

31
csharp/OnnxRuntime.sln Normal file
View File

@ -0,0 +1,31 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.28010.2003
MinimumVisualStudioVersion = 10.0.40219.1
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Microsoft.ML.OnnxRuntime", "OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj", "{584B53B3-359D-4DC2-BCD8-530B5D4685AD}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "CSharpUsage", "CSharpUsage\CSharpUsage.csproj", "{1AA14958-9246-4163-9403-F650E65ADCBC}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Debug|Any CPU.Build.0 = Debug|Any CPU
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Release|Any CPU.ActiveCfg = Release|Any CPU
{584B53B3-359D-4DC2-BCD8-530B5D4685AD}.Release|Any CPU.Build.0 = Release|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Debug|Any CPU.Build.0 = Debug|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Release|Any CPU.ActiveCfg = Release|Any CPU
{1AA14958-9246-4163-9403-F650E65ADCBC}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {C3DBDA2B-F169-4EDE-9353-858904124B75}
EndGlobalSection
EndGlobal

View File

@ -0,0 +1,66 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
namespace Microsoft.ML.OnnxRuntime
{
/// <summary>
/// Enum corresponding to native onnxruntime error codes. Must be kept in sync with the native API.
/// </summary>
internal enum ErrorCode
{
Ok = 0,
Fail = 1,
InvalidArgument = 2,
NoSuchFile = 3,
NoModel = 4,
EngineError = 5,
RuntimeException = 6,
InvalidProtobuf = 7,
ModelLoaded = 8,
NotImplemented = 9,
InvalidGraph = 10,
ShapeInferenceNotRegistered = 11,
RequirementNotRegistered = 12
}
public class OnnxRuntimeException: Exception
{
public OnnxRuntimeException(string message)
:base(message)
{
}
}
public class CoreRuntimeException : OnnxRuntimeException
{
private static Dictionary<ErrorCode, string> errorCodeToString = new Dictionary<ErrorCode, string>()
{
{ ErrorCode.Ok, "Ok" },
{ ErrorCode.Fail, "Fail" },
{ ErrorCode.InvalidArgument, "InvalidArgument"} ,
{ ErrorCode.NoSuchFile, "NoSuchFile" },
{ ErrorCode.NoModel, "NoModel" },
{ ErrorCode.EngineError, "EngineError" },
{ ErrorCode.RuntimeException, "RuntimeException" },
{ ErrorCode.InvalidProtobuf, "InvalidProtobuf" },
{ ErrorCode.ModelLoaded, "ModelLoaded" },
{ ErrorCode.NotImplemented, "NotImplemented" },
{ ErrorCode.InvalidGraph, "InvalidGraph" },
{ ErrorCode.ShapeInferenceNotRegistered, "ShapeInferenceNotRegistered" },
{ ErrorCode.RequirementNotRegistered, "RequirementNotRegistered" }
};
internal CoreRuntimeException(ErrorCode errorCode, string message)
:base("[ErrorCode:" + errorCodeToString[errorCode] + "] " + message)
{
}
}
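// Illustrative call-site pattern (hypothetical; "model.onnx" is a placeholder path): native error
// codes surface to managed callers as CoreRuntimeException, so application code typically needs only:
//
//   try { using (var session = new InferenceSession("model.onnx")) { /* run inference */ } }
//   catch (OnnxRuntimeException e) { Console.Error.WriteLine(e.Message); }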
}

View File

@ -0,0 +1,217 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.InteropServices;
using System.Collections.Generic;
using System.IO;
using System.Linq;
namespace Microsoft.ML.OnnxRuntime
{
public struct RunOptions
{
// placeholder for RunOptions
}
/// <summary>
/// Represents an Inference Session against an ONNX Model
/// </summary>
public class InferenceSession: IDisposable
{
protected IntPtr _nativeHandle;
internal InferenceSession(IntPtr nativeHandle)
{
_nativeHandle = nativeHandle;
}
#region Public API
public InferenceSession(string modelPath)
: this(modelPath, SessionOptions.Default)
{
}
public InferenceSession(string modelPath, SessionOptions options)
{
var envHandle = OnnxRuntime.Instance.NativeHandle;
IntPtr outputHandle;
IntPtr status = NativeMethods.ONNXRuntimeCreateInferenceSession(envHandle, modelPath, options.NativeHandle, out outputHandle);
_nativeHandle = IntPtr.Zero;
NativeApiStatus.VerifySuccess(status);
_nativeHandle = outputHandle;
}
public IReadOnlyDictionary<string, NodeMetadata> InputMetadata
{
get
{
return null; // TODO: implement
}
}
public IReadOnlyDictionary<string, NodeMetadata> OutputMetadata
{
get
{
return null; // TODO: implement
}
}
public ModelMetadata ModelMetadata
{
get
{
return new ModelMetadata(); //TODO: implement
}
}
public IReadOnlyList<NamedOnnxValue> Run(IReadOnlyList<NamedOnnxValue> inputs, RunOptions options = new RunOptions())
{
var inputNames = new string[inputs.Count];
var inputTensors = new IntPtr[inputs.Count];
var pinnedBufferHandles = new System.Buffers.MemoryHandle[inputs.Count];
for (int i = 0; i < inputs.Count; i++)
{
inputNames[i] = inputs[i].Name;
// create a Tensor from inputs[i] if feasible, else throw a NotSupported exception for now
inputs[i].ToNativeOnnxValue(out inputTensors[i], out pinnedBufferHandles[i]);
}
IntPtr outputValueList = IntPtr.Zero;
ulong outputLength = 0;
IntPtr status = NativeMethods.ONNXRuntimeRunInferenceAndFetchAll(
this._nativeHandle,
inputNames,
inputTensors,
(uint)(inputTensors.Length),
out outputValueList,
out outputLength
); //Note: the inputTensors and pinnedBufferHandles must be alive for the duration of the call
try
{
NativeApiStatus.VerifySuccess(status);
var result = new List<NamedOnnxValue>();
for (uint i = 0; i < outputLength; i++)
{
IntPtr tensorValue = NativeMethods.ONNXRuntimeONNXValueListGetNthValue(outputValueList, i);
result.Add(NamedOnnxValue.CreateFromOnnxValue(Convert.ToString(i), tensorValue)); // TODO: Convert.ToString(i) is used instead of the output name, in the absence of a C API for it.
// Will be fixed as soon as the C API for retrieving output names is available
}
return result;
}
catch (OnnxRuntimeException e)
{
//clean up the individual output tensors if it is not null;
if (outputValueList != IntPtr.Zero)
{
for (uint i = 0; i < outputLength; i++)
{
IntPtr tensorValue = NativeMethods.ONNXRuntimeONNXValueListGetNthValue(outputValueList, i);
NativeMethods.ReleaseONNXValue(tensorValue);
}
}
throw e;
}
finally
{
// always unpin the input buffers, and delete the native Onnx value objects
for (int i = 0; i < inputs.Count; i++)
{
NativeMethods.ReleaseONNXValue(inputTensors[i]); // this should not release the buffer, but should delete the native tensor object
pinnedBufferHandles[i].Dispose();
}
// always release the output value list, because the individual tensor pointers are already obtained.
if (outputValueList != IntPtr.Zero)
{
NativeMethods.ReleaseONNXValueList(outputValueList);
}
}
}
/// <summary>
/// Runs the loaded model for the given inputs, and fetches the specified outputs in <paramref name="outputNames"/>.
/// </summary>
/// <param name="inputs"></param>
/// <param name="outputNames"></param>
/// <param name="options"></param>
/// <returns>Output Tensors in a Dictionary</returns>
public IReadOnlyList<NamedOnnxValue> Run(IReadOnlyList<NamedOnnxValue> inputs, ICollection<string> outputNames, RunOptions options = new RunOptions())
{
//TODO: implement
return null;
}
#endregion
#region private methods
#endregion
#region destructors disposers
~InferenceSession()
{
Dispose(false);
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// cleanup managed resources
}
// cleanup unmanaged resources
if (_nativeHandle != IntPtr.Zero)
{
NativeMethods.ReleaseONNXSession(_nativeHandle);
}
}
#endregion
}
public struct NodeMetadata
{
public uint[] Shape
{
get; internal set;
}
public System.Type Type
{
get; internal set;
}
}
public struct ModelMetadata
{
//placeholder for Model metadata. Python API has this
}
}

View File

@ -0,0 +1,15 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard1.1</TargetFramework>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<SignAssembly>true</SignAssembly>
<DelaySign>false</DelaySign>
<AssemblyOriginatorKeyFile>OnnxRuntime.snk</AssemblyOriginatorKeyFile>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="System.Numerics.Tensors" Version="0.1.0-preview2-181101-1" />
</ItemGroup>
</Project>

View File

@ -0,0 +1,322 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.Numerics.Tensors;
using System.Buffers;
using System.Collections;
using System.Diagnostics;
namespace Microsoft.ML.OnnxRuntime
{
public class NamedOnnxValue
{
protected Object _value;
protected string _name;
public NamedOnnxValue(string name, Object value)
{
_name = name;
_value = value;
}
public string Name { get { return _name; } }
public Tensor<T> AsTensor<T>()
{
return _value as Tensor<T>; // will return null if not castable
}
/// <summary>
/// Attempts to pin the buffer and create a native OnnxValue from it; the pinned MemoryHandle is passed to the output parameter.
/// In that case the pinned handle must be kept alive until the native OnnxValue has been used, and then disposed.
/// If the buffer cannot be pinned, the OnnxValue is created from a copy of the data and the output pinnedMemoryHandle
/// contains a default value.
/// The element type of the value is inferred while creating the OnnxValue.
/// </summary>
/// <param name="onnxValue"></param>
/// <param name="pinnedMemoryHandle"></param>
internal void ToNativeOnnxValue(out IntPtr onnxValue, out MemoryHandle pinnedMemoryHandle)
{
//try to cast _value to Tensor<T>
TensorElementType nativeElementType = TensorElementType.DataTypeMax; //invalid
IntPtr dataBufferPointer = IntPtr.Zero;
int dataBufferLength = 0;
ReadOnlySpan<int> shape = null;
int rank = 0;
onnxValue = IntPtr.Zero;
if (TryPinAsTensor<float>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<double>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<int>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<uint>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<long>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<ulong>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<short>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<ushort>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<byte>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<bool>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
//TODO: add other types
else
{
// nothing to cleanup here, since no memory has been pinned
throw new NotSupportedException("The inference value " + nameof(_value) + " is not of a supported type");
}
Debug.Assert(dataBufferPointer != IntPtr.Zero, "dataBufferPointer must be non-null after obtaining the pinned buffer");
// copy the shape to a ulong[] to match the native size_t[]
ulong[] longShape = new ulong[rank];
for (int i = 0; i < rank; i++)
{
longShape[i] = (ulong)shape[i];
}
IntPtr status = NativeMethods.ONNXRuntimeCreateTensorWithDataAsONNXValue(
NativeCpuAllocatorInfo.Handle,
dataBufferPointer,
(ulong)(dataBufferLength),
longShape,
(ulong)rank,
nativeElementType,
out onnxValue
);
try
{
NativeApiStatus.VerifySuccess(status);
}
catch (OnnxRuntimeException e)
{
pinnedMemoryHandle.Dispose();
throw e;
}
}
internal static NamedOnnxValue CreateFromOnnxValue(string name, IntPtr nativeOnnxValue)
{
NamedOnnxValue result = null;
if (true /* TODO: check native data type when API available. assuming Tensor<float> for now */)
{
NativeOnnxTensorMemory<float> nativeTensorWrapper = new NativeOnnxTensorMemory<float>(nativeOnnxValue);
DenseTensor<float> dt = new DenseTensor<float>(nativeTensorWrapper.Memory, nativeTensorWrapper.Dimensions);
result = new NamedOnnxValue(name, dt);
}
return result;
}
private bool TryPinAsTensor<T>(
out MemoryHandle pinnedMemoryHandle,
out IntPtr dataBufferPointer,
out int dataBufferLength,
out ReadOnlySpan<int> shape,
out int rank,
out TensorElementType nativeElementType
)
{
nativeElementType = TensorElementType.DataTypeMax; //invalid
dataBufferPointer = IntPtr.Zero;
dataBufferLength = 0;
shape = null;
rank = 0;
pinnedMemoryHandle = default(MemoryHandle);
if (_value is Tensor<T>)
{
Tensor<T> t = _value as Tensor<T>;
if (t.IsReversedStride)
{
//TODO: not sure how to support reverse stride. may be able to calculate the shape differently
throw new NotSupportedException(nameof(Tensor<T>) + " of reverseStride is not supported");
}
DenseTensor<T> dt = null;
if (_value is DenseTensor<T>)
{
dt = _value as DenseTensor<T>;
}
else
{
dt = t.ToDenseTensor();
}
shape = dt.Dimensions; // does not work for reverse stride
rank = dt.Rank;
pinnedMemoryHandle = dt.Buffer.Pin();
unsafe
{
dataBufferPointer = (IntPtr)pinnedMemoryHandle.Pointer;
}
// find the native type
if (typeof(T) == typeof(float))
{
nativeElementType = TensorElementType.Float;
dataBufferLength = dt.Buffer.Length * sizeof(float);
}
else if (typeof(T) == typeof(double))
{
nativeElementType = TensorElementType.Double;
dataBufferLength = dt.Buffer.Length * sizeof(double);
}
else if (typeof(T) == typeof(int))
{
nativeElementType = TensorElementType.Int32;
dataBufferLength = dt.Buffer.Length * sizeof(int);
}
else if (typeof(T) == typeof(uint))
{
nativeElementType = TensorElementType.UInt32;
dataBufferLength = dt.Buffer.Length * sizeof(uint);
}
else if (typeof(T) == typeof(long))
{
nativeElementType = TensorElementType.Int64;
dataBufferLength = dt.Buffer.Length * sizeof(long);
}
else if (typeof(T) == typeof(ulong))
{
nativeElementType = TensorElementType.UInt64;
dataBufferLength = dt.Buffer.Length * sizeof(ulong);
}
else if (typeof(T) == typeof(short))
{
nativeElementType = TensorElementType.Int16;
dataBufferLength = dt.Buffer.Length * sizeof(short);
}
else if (typeof(T) == typeof(ushort))
{
nativeElementType = TensorElementType.UInt16;
dataBufferLength = dt.Buffer.Length * sizeof(ushort);
}
else if (typeof(T) == typeof(byte))
{
nativeElementType = TensorElementType.UInt8;
dataBufferLength = dt.Buffer.Length * sizeof(byte);
}
//TODO: Not supporting boolean for now. bool is non-blittable, the interop needs some care, and possibly need to copy
//else if (typeof(T) == typeof(bool))
//{
//}
else
{
//TODO: may extend the supported types
// do not throw exception, rather assign the sentinel value
nativeElementType = TensorElementType.DataTypeMax;
}
return true;
}
return false;
}
// may expose different types of getters in future
}
internal enum TensorElementType
{
Float = 1,
UInt8 = 2,
Int8 = 3,
UInt16 = 4,
Int16 = 5,
Int32 = 6,
Int64 = 7,
String = 8,
Bool = 9,
Float16 = 10,
Double = 11,
UInt32 = 12,
UInt64 = 13,
Complex64 = 14,
Complex128 = 15,
BFloat16 = 16,
DataTypeMax = 17
}
}

View File

@ -0,0 +1,35 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.InteropServices;
namespace Microsoft.ML.OnnxRuntime
{
class NativeApiStatus
{
private static string GetErrorMessage(IntPtr /*(ONNXStatus*)*/status)
{
IntPtr nativeString = NativeMethods.ONNXRuntimeGetErrorMessage(status);
string str = Marshal.PtrToStringAnsi(nativeString); //assumes charset = ANSI
return str;
}
/// <summary>
/// Checks whether the native status's error code is OK/Success; otherwise constructs an appropriate exception and throws it.
/// Releases the native status object, as needed.
/// </summary>
/// <param name="nativeStatus"></param>
/// <throws></throws>
public static void VerifySuccess(IntPtr nativeStatus)
{
if (nativeStatus != IntPtr.Zero)
{
ErrorCode statusCode = NativeMethods.ONNXRuntimeGetErrorCode(nativeStatus);
string errorMessage = GetErrorMessage(nativeStatus);
NativeMethods.ReleaseONNXStatus(nativeStatus);
throw new CoreRuntimeException(statusCode, errorMessage);
}
}
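// Illustrative call-site pattern (mirrors the InferenceSession constructor; variable names are placeholders):
//   IntPtr status = NativeMethods.ONNXRuntimeCreateInferenceSession(envHandle, modelPath, optionsHandle, out sessionHandle);
//   NativeApiStatus.VerifySuccess(status); // throws CoreRuntimeException and releases the native status on failure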
}
}

View File

@ -0,0 +1,75 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
namespace Microsoft.ML.OnnxRuntime
{
internal class NativeCpuAllocatorInfo : IDisposable
{
// static singleton
private static readonly Lazy<NativeCpuAllocatorInfo> _instance = new Lazy<NativeCpuAllocatorInfo>(() => new NativeCpuAllocatorInfo());
// member variables
private IntPtr _nativeHandle;
internal static IntPtr Handle // May throw an exception on every access, if the constructor has thrown an exception
{
get
{
return _instance.Value._nativeHandle;
}
}
private NativeCpuAllocatorInfo()
{
_nativeHandle = CreateCPUAllocatorInfo();
}
private static IntPtr CreateCPUAllocatorInfo()
{
IntPtr allocInfo = IntPtr.Zero;
try
{
IntPtr status = NativeMethods.ONNXRuntimeCreateCpuAllocatorInfo(NativeMethods.AllocatorType.DeviceAllocator, NativeMethods.MemoryType.Cpu, out allocInfo);
NativeApiStatus.VerifySuccess(status);
return allocInfo;
}
catch (Exception e)
{
if (allocInfo != IntPtr.Zero)
{
NativeMethods.ReleaseONNXRuntimeAllocatorInfo(allocInfo);
}
throw e;
}
}
~NativeCpuAllocatorInfo()
{
Dispose(false);
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
private void Dispose(bool disposing)
{
if (disposing)
{
//release managed resource
}
//release unmanaged resource
if (_nativeHandle != IntPtr.Zero)
{
NativeMethods.ReleaseONNXRuntimeAllocatorInfo(_nativeHandle);
}
}
}
}

View File

@ -0,0 +1,236 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.InteropServices;
namespace Microsoft.ML.OnnxRuntime
{
/// <summary>
/// P/Invoke declarations for the native onnxruntime C API. Signatures must be kept in sync with the native headers.
/// </summary>
internal static class NativeMethods
{
private const string nativeLib = "onnxruntime.dll";
internal const CharSet charSet = CharSet.Ansi;
#region Runtime/Environment API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* ONNXStatus* */ONNXRuntimeInitialize(
LogLevel default_warning_level,
string logId,
out IntPtr /*(ONNXEnv*)*/ env);
// ReleaseONNXEnv should not be used
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXEnv(IntPtr /*(ONNXEnv*)*/ env);
#endregion Runtime/Environment API
#region Status API
[DllImport(nativeLib, CharSet = charSet)]
public static extern ErrorCode ONNXRuntimeGetErrorCode(IntPtr /*(ONNXStatus*)*/status);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* char* */ONNXRuntimeGetErrorMessage(IntPtr /* (ONNXStatus*) */status);
// returns char*, need to convert to string by the caller.
// does not free the underlying ONNXStatus*
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXStatus(IntPtr /*(ONNXStatus*)*/ statusPtr);
#endregion Status API
#region InferenceSession API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* ONNXStatus* */ONNXRuntimeCreateInferenceSession(
IntPtr /* (ONNXEnv*) */ environment,
[MarshalAs(UnmanagedType.LPWStr)]string modelPath, //the model path is consumed as a wchar* in the C-api
IntPtr /* (ONNXRuntimeSessionOptions*) */sessionOptions,
out IntPtr /**/ session);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNStatus*)*/ ONNXRuntimeRunInferenceAndFetchAll(
IntPtr /*(ONNXSessionPtr)*/ session,
string[] inputNames,
IntPtr[] /*(ONNXValuePtr[])*/ inputValues,
ulong inputLength, // size_t, TODO: make it portable for x86, arm
out IntPtr /* (ONNXValueListPtr*)*/ outputValues,
out ulong /* (size_t*) */ outputLength); //TODO: make it portable for x86, arm
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNStatus*)*/ ONNXRuntimeRunInference(
IntPtr /*(ONNXSession*)*/ session,
string[] inputNames,
IntPtr[] /* (ONNXValue*[])*/ inputValues,
ulong inputCount, /* TODO: size_t, make it portable for x86 arm */
string[] outputNames,
ulong outputCount, /* TODO: size_t, make it portable for x86 and arm */
[MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 5 /*index of outputCount*/)][In, Out]
IntPtr[] outputValues /* An array of output value pointers. Array must be allocated by the caller */
);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeInferenceSessionGetInputCount(IntPtr /*(ONNXSession*)*/ session);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeInferenceSessionGetOutputCount(IntPtr /*(ONNXSession*)*/ session);
//TODO: need the input/output names API
//TODO: need the input/output shape/type API
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXSession(IntPtr /*(ONNXSession*)*/ session);
#endregion InferenceSession API
#region SessionOptions API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*ONNXRuntimeSessionOptions* */ ONNXRuntimeCreateSessionOptions();
//DEFINE_RUNTIME_CLASS(ONNXRuntimeSessionOptions)
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXRuntimeSessionOptions(IntPtr /*(ONNXRuntimeSessionOptions*)*/ sessionOptions);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableSequentialExecution(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableSequentialExecution(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableProfiling(IntPtr /* ONNXRuntimeSessionOptions* */ options, string profilePathPrefix);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableProfiling(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableMemPattern(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableMemPattern(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableCpuMemArena(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableCpuMemArena(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeSetSessionLogId(IntPtr /* ONNXRuntimeSessionOptions* */ options, string logId);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeSetSessionLogVerbosityLevel(IntPtr /* ONNXRuntimeSessionOptions* */ options, LogLevel sessionLogVerbosityLevel);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeSetSessionThreadPoolSize(IntPtr /* ONNXRuntimeSessionOptions* */ options, int sessionThreadPoolSize);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeEnableCudaProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options, int deviceId);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableCudaProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeEnableMklProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableMklProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options);
#endregion
#region Allocator/AllocatorInfo API
//TODO: consider exposing them publicly, when allocator API is exposed
public enum AllocatorType
{
DeviceAllocator = 0,
ArenaAllocator = 1
}
//TODO: consider exposing them publicly when allocator API is exposed
public enum MemoryType
{
CpuInput = -2, // Any CPU memory used by non-CPU execution provider
CpuOutput = -1, // CPU accessible memory outputted by non-CPU execution provider, i.e. CUDA_PINNED
Cpu = CpuOutput, // temporary CPU accessible memory allocated by non-CPU execution provider, i.e. CUDA_PINNED
Default = 0, // the default allocator for execution provider
}
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* (ONNXStatus*)*/ ONNXRuntimeCreateAllocatorInfo(
IntPtr /*(const char*) */name,
AllocatorType allocatorType,
int identifier,
MemoryType memType,
out IntPtr /*(ONNXRuntimeAllocatorInfo*)*/ allocatorInfo // memory ownership transfered to caller
);
//ONNXRUNTIME_API_STATUS(ONNXRuntimeCreateCpuAllocatorInfo, enum ONNXRuntimeAllocatorType type, enum ONNXRuntimeMemType mem_type1, _Out_ ONNXRuntimeAllocatorInfo** out)
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* (ONNXStatus*)*/ ONNXRuntimeCreateCpuAllocatorInfo(
AllocatorType allocatorType,
MemoryType memoryType,
out IntPtr /*(ONNXRuntimeAllocatorInfo*)*/ allocatorInfo
);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXRuntimeAllocatorInfo(IntPtr /*(ONNXRuntimeAllocatorInfo*)*/ allocatorInfo);
#endregion Allocator/AllocatorInfo API
#region Tensor/OnnxValue API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* ONNXStatus */ ONNXRuntimeCreateTensorWithDataAsONNXValue(
IntPtr /* (const ONNXRuntimeAllocatorInfo*) */ allocatorInfo,
IntPtr /* (void*) */dataBufferHandle,
ulong dataLength, //size_t, TODO: make it portable for x86, arm
ulong[] shape, //size_t* or size_t[], TODO: make it portable for x86, arm
ulong shapeLength, //size_t, TODO: make it portable for x86, arm
TensorElementType type,
out IntPtr /* ONNXValuePtr* */ outputValue);
/// This function doesn't work with string tensor
/// this is a no-copy method whose pointer is only valid until the backing ONNXValuePtr is free'd.
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorMutableData(IntPtr /*(ONNXValue*)*/ value, out IntPtr /* (void**)*/ dataBufferHandle);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorShapeDimCount(IntPtr /*(ONNXValue*)*/ value, out ulong dimension); //size_t TODO: make it portable for x86, arm
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorShapeElementCount(IntPtr /*(ONNXValue*)*/value, out ulong count);
///**
// * Generally, user should call ONNXRuntimeGetTensorShapeDimCount before calling this.
// * Unless they already have a good estimation on the dimension count
// * \param shape_array An array allocated by caller, with size of shape_array
// * \param shape_array_len the length of passed in shape_array.
// */
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ONNXRuntimeGetTensorShape(
IntPtr /*(ONNXValue*)*/ value,
ulong[] shapeArray, //size_t[] TODO: make it portable for x86, arm
ulong shapeArrayLength); //size_t, TODO: make it portable for x86, arm
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXValuePtr)*/ ONNXRuntimeONNXValueListGetNthValue(IntPtr /*(ONNXValueListPtr)*/ list, ulong index); // 0-based index TODO: size_t, make it portable for x86, arm
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXValue(IntPtr /*(ONNXValue*)*/ value);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXValueList(IntPtr /*(ONNXValueList*)*/ valueList);
#endregion
} //class NativeMethods
} //namespace

View File

@ -0,0 +1,170 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.Buffers;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Threading;
namespace Microsoft.ML.OnnxRuntime
{
internal class NativeOnnxTensorMemory<T> : MemoryManager<T>
{
private bool _disposed;
private int _referenceCount;
private IntPtr _onnxValueHandle;
private IntPtr _dataBufferHandle;
private int _elementCount;
private int _elementWidth;
private int[] _dimensions;
public NativeOnnxTensorMemory(IntPtr onnxValueHandle)
{
//TODO: check type param and the native tensor type
if (typeof(T) != typeof(float))
throw new NotSupportedException(nameof(NativeOnnxTensorMemory<T>)+" does not support T other than float");
_elementWidth = 4;
_onnxValueHandle = onnxValueHandle;
// derive the databuffer pointer, element_count, element_width, and shape
try
{
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeGetTensorMutableData(_onnxValueHandle, out _dataBufferHandle));
// throws OnnxRuntimeException if native call failed
ulong dimension;
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeGetTensorShapeDimCount(_onnxValueHandle, out dimension));
ulong count;
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeGetTensorShapeElementCount(_onnxValueHandle, out count));
ulong[] shape = new ulong[dimension];
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeGetTensorShape(_onnxValueHandle, shape, dimension)); //Note: shape must be alive during the call
_elementCount = (int)count;
_dimensions = new int[dimension];
for (ulong i = 0; i < dimension; i++)
{
_dimensions[i] = (int)shape[i];
}
}
catch (OnnxRuntimeException e)
{
//TODO: cleanup any partially created state
//Do not call ReleaseTensor here. If the constructor has thrown an exception, this NativeOnnxTensorMemory was never created, so the caller should take appropriate action to dispose of the native value
throw e;
}
}
~NativeOnnxTensorMemory()
{
Dispose(false);
}
public bool IsDisposed => _disposed;
protected bool IsRetained => _referenceCount > 0;
public int[] Dimensions
{
get
{
return _dimensions;
}
}
public int Rank
{
get
{
return _dimensions.Length;
}
}
public override Span<T> GetSpan()
{
if (IsDisposed)
throw new ObjectDisposedException(nameof(NativeOnnxTensorMemory<T>));
Span<T> span = null;
unsafe
{
span = new Span<T>((void*)_dataBufferHandle, _elementCount);
}
return span;
}
public override MemoryHandle Pin(int elementIndex = 0)
{
//Note: always pin the full buffer and return
unsafe
{
if (elementIndex >= _elementCount)
{
throw new ArgumentOutOfRangeException(nameof(elementIndex));
}
Retain();
return new MemoryHandle((void*)((byte*)_dataBufferHandle.ToPointer() + elementIndex * _elementWidth)); // offset in bytes; byte* arithmetic avoids truncating the pointer to 32 bits (could not use Unsafe.Add)
}
}
public override void Unpin()
{
Release();
}
private bool Release()
{
int newRefCount = Interlocked.Decrement(ref _referenceCount);
if (newRefCount < 0)
{
throw new InvalidOperationException("Unmatched Release/Retain");
}
return newRefCount != 0;
}
private void Retain()
{
if (IsDisposed)
{
throw new ObjectDisposedException(nameof(NativeOnnxTensorMemory<T>));
}
Interlocked.Increment(ref _referenceCount);
}
protected override void Dispose(bool disposing)
{
if (_disposed)
{
return;
}
if (disposing)
{
// do managed objects cleanup
}
// TODO Call nativeMethods.ReleaseTensor, once the corresponding native API is fixed
// Currently there will be memory leak
_disposed = true;
}
protected override bool TryGetArray(out ArraySegment<T> arraySegment)
{
// cannot expose managed array
arraySegment = default(ArraySegment<T>);
return false;
}
}
}

View File

@ -0,0 +1,93 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.CompilerServices;
using System.Collections.Generic;
namespace Microsoft.ML.OnnxRuntime
{
internal struct GlobalOptions //Options are currently not accessible to user
{
public string LogId { get; set; }
public LogLevel LogLevel { get; set; }
}
internal enum LogLevel
{
Verbose = 0,
Info = 1,
Warning = 2,
Error = 3,
Fatal = 4
}
/// <summary>
/// This class initializes the process-global ONNX Runtime environment.
/// C# API users do not need to access this, so it is kept internal.
/// </summary>
internal sealed class OnnxRuntime : IDisposable
{
// static singleton
private static readonly Lazy<OnnxRuntime> _instance = new Lazy<OnnxRuntime>(() => new OnnxRuntime());
// member variables
private IntPtr _nativeHandle;
internal static OnnxRuntime Instance // May throw an exception on every access, if the constructor has thrown an exception
{
get
{
return _instance.Value;
}
}
private OnnxRuntime() //Problem: it is not possible to pass any options to a singleton
{
_nativeHandle = IntPtr.Zero;
IntPtr outPtr;
IntPtr status = NativeMethods.ONNXRuntimeInitialize(LogLevel.Warning, @"CSharpOnnxRuntime", out outPtr);
NativeApiStatus.VerifySuccess(status);
_nativeHandle = outPtr;
}
internal IntPtr NativeHandle
{
get
{
return _nativeHandle;
}
}
~OnnxRuntime()
{
Dispose(false);
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
private void Dispose(bool disposing)
{
if (disposing)
{
//release managed resource
}
//release unmanaged resource
if (_nativeHandle != IntPtr.Zero)
{
NativeMethods.ReleaseONNXEnv(_nativeHandle);
}
}
}
}

Binary data
csharp/OnnxRuntime/OnnxRuntime.snk Normal file

Binary file not shown.

View File

@ -0,0 +1,59 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
namespace Microsoft.ML.OnnxRuntime
{
public class SessionOptions : IDisposable
{
private static SessionOptions _defaultOptions = new SessionOptions();
private IntPtr _nativeHandle;
public SessionOptions()
{
_nativeHandle = NativeMethods.ONNXRuntimeCreateSessionOptions();
}
internal IntPtr NativeHandle
{
get
{
return _nativeHandle;
}
}
public static SessionOptions Default
{
get
{
return _defaultOptions;
}
}
#region destructors disposers
~SessionOptions()
{
Dispose(false);
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// cleanup managed resources
}
// cleanup unmanaged resources
}
#endregion
}
}

View File

@ -0,0 +1,36 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp2.0</TargetFramework>
<OnnxRuntimeCsharpRoot>..\..</OnnxRuntimeCsharpRoot>
<buildDirectory Condition="'$(buildDirectory)'==''">$(OnnxRuntimeCsharpRoot)\..\build\Windows</buildDirectory>
<NativeBuildOutputDir>$(buildDirectory)\$(Configuration)\$(Configuration)</NativeBuildOutputDir>
</PropertyGroup>
<ItemGroup>
<None Include="$(NativeBuildOutputDir)\onnxruntime.dll">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
<None Include="$(NativeBuildOutputDir)\onnxruntime.pdb" Condition="Exists('$(NativeBuildOutputDir)\onnxruntime.pdb')">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
<None Include="$(NativeBuildOutputDir)\mkldnn.dll">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
<None Include="$(OnnxRuntimeCSharpRoot)\testdata\*">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
</ItemGroup>
<ItemGroup>
<ProjectReference Include="$(OnnxRuntimeCSharpRoot)\src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj" />
</ItemGroup>
</Project>

View File

@ -0,0 +1,88 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using Microsoft.ML.OnnxRuntime;
using System.Numerics.Tensors;
namespace CSharpUsage
{
class Program
{
public static void Main(string[] args)
{
Console.WriteLine("Using API");
UseApi();
Console.WriteLine("Done");
}
static void UseApi()
{
string modelPath = Directory.GetCurrentDirectory() + @"\squeezenet.onnx";
using (var session = new InferenceSession(modelPath))
{
var inputMeta = session.InputMetadata;
// User should be able to detect input name/type/shape from the metadata.
// Currently the InputMetadata implementation is incomplete, so assume a Tensor<float> with a predefined shape.
var shape0 = new int[] { 1, 3, 224, 224 };
float[] inputData0 = LoadInputsFloat();
var tensor = new DenseTensor<float>(inputData0, shape0);
var container = new List<NamedOnnxValue>();
container.Add(new NamedOnnxValue("data_0", tensor));
// Run the inference
var results = session.Run(container); // results is an IReadOnlyList<NamedOnnxValue> container
// dump the results
foreach (var r in results)
{
Console.WriteLine("Output for {0}", r.Name);
Console.WriteLine(r.AsTensor<float>().GetArrayString());
}
// Just try some GC collect
results = null;
container = null;
GC.Collect();
GC.WaitForPendingFinalizers();
}
}
static int[] LoadInputsInt32()
{
return null;
}
static float[] LoadInputsFloat()
{
// input: data_0 = float32[1,3,224,224] for squeezenet model
// output: softmaxout_1 = float32[1,1000,1,1]
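// i.e. 1 * 3 * 224 * 224 = 150,528 float elements in total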
uint size = 1 * 3 * 224 * 224;
float[] tensor = new float[size];
// read data from file
using (var inputFile = new System.IO.StreamReader(@"bench.in"))
{
inputFile.ReadLine(); //skip the input name
string[] dataStr = inputFile.ReadLine().Split(new char[] { ',', '[', ']' }, StringSplitOptions.RemoveEmptyEntries);
for (int i = 0; i < dataStr.Length; i++)
{
tensor[i] = Single.Parse(dataStr[i]);
}
}
return tensor;
}
}
}

View File

@ -0,0 +1,66 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
namespace Microsoft.ML.OnnxRuntime
{
/// <summary>
/// Enum corresponding to native onnxruntime error codes. Must be kept in sync with the native API.
/// </summary>
internal enum ErrorCode
{
Ok = 0,
Fail = 1,
InvalidArgument = 2,
NoSuchFile = 3,
NoModel = 4,
EngineError = 5,
RuntimeException = 6,
InvalidProtobuf = 7,
ModelLoaded = 8,
NotImplemented = 9,
InvalidGraph = 10,
ShapeInferenceNotRegistered = 11,
RequirementNotRegistered = 12
}
public class OnnxRuntimeException: Exception
{
public OnnxRuntimeException(string message)
:base(message)
{
}
}
public class CoreRuntimeException : OnnxRuntimeException
{
private static Dictionary<ErrorCode, string> errorCodeToString = new Dictionary<ErrorCode, string>()
{
{ ErrorCode.Ok, "Ok" },
{ ErrorCode.Fail, "Fail" },
{ ErrorCode.InvalidArgument, "InvalidArgument"} ,
{ ErrorCode.NoSuchFile, "NoSuchFile" },
{ ErrorCode.NoModel, "NoModel" },
{ ErrorCode.EngineError, "EngineError" },
{ ErrorCode.RuntimeException, "RuntimeException" },
{ ErrorCode.InvalidProtobuf, "InvalidProtobuf" },
{ ErrorCode.ModelLoaded, "ModelLoaded" },
{ ErrorCode.NotImplemented, "NotImplemented" },
{ ErrorCode.InvalidGraph, "InvalidGraph" },
{ ErrorCode.ShapeInferenceNotRegistered, "ShapeInferenceNotRegistered" },
{ ErrorCode.RequirementNotRegistered, "RequirementNotRegistered" }
};
internal CoreRuntimeException(ErrorCode errorCode, string message)
:base("[ErrorCode:" + errorCodeToString[errorCode] + "] " + message)
{
}
}
}

View File

@ -0,0 +1,297 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.InteropServices;
using System.Collections.Generic;
using System.IO;
using System.Linq;
namespace Microsoft.ML.OnnxRuntime
{
public struct RunOptions
{
// placeholder for RunOptions
}
/// <summary>
/// Represents an Inference Session against an ONNX Model
/// </summary>
public class InferenceSession: IDisposable
{
protected IntPtr _nativeHandle;
protected Dictionary<string, NodeMetadata> _inputMetadata, _outputMetadata;
internal InferenceSession(IntPtr nativeHandle)
{
_nativeHandle = nativeHandle;
}
#region Public API
public InferenceSession(string modelPath)
: this(modelPath, SessionOptions.Default)
{
}
public InferenceSession(string modelPath, SessionOptions options)
{
var envHandle = OnnxRuntime.Handle;
_nativeHandle = IntPtr.Zero;
try
{
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeCreateInferenceSession(envHandle, modelPath, options.NativeHandle, out _nativeHandle));
// Initialize input/output metadata
_inputMetadata = new Dictionary<string, NodeMetadata>();
_outputMetadata = new Dictionary<string, NodeMetadata>();
// get input count
ulong inputCount = 0;
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeInferenceSessionGetInputCount(_nativeHandle, out inputCount));
// get all the input names
for (ulong i = 0; i < inputCount; i++)
{
_inputMetadata[GetInputName(i)] = new NodeMetadata(); //TODO: fill the shape/type when C-api available
}
// get output count
ulong outputCount = 0;
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeInferenceSessionGetOutputCount(_nativeHandle, out outputCount));
// get all the output names
for (ulong i = 0; i < outputCount; i++)
{
_outputMetadata[GetOutputName(i)] = new NodeMetadata(); //TODO: fill the shape/type when C-api available
}
}
catch (OnnxRuntimeException e)
{
if (_nativeHandle != IntPtr.Zero)
{
NativeMethods.ReleaseONNXSession(_nativeHandle);
_nativeHandle = IntPtr.Zero;
}
throw e;
}
}
public IReadOnlyDictionary<string, NodeMetadata> InputMetadata
{
get
{
return _inputMetadata;
}
}
public IReadOnlyDictionary<string, NodeMetadata> OutputMetadata
{
get
{
return _outputMetadata;
}
}
public ModelMetadata ModelMetadata
{
get
{
return new ModelMetadata(); //TODO: implement
}
}
public IReadOnlyCollection<NamedOnnxValue> Run(IReadOnlyCollection<NamedOnnxValue> inputs, RunOptions options = new RunOptions())
{
string[] outputNames = new string[_outputMetadata.Count];
_outputMetadata.Keys.CopyTo(outputNames, 0);
return Run(inputs, outputNames, options);
}
/// <summary>
/// Runs the loaded model for the given inputs, and fetches the specified outputs in <paramref name="outputNames"/>.
/// </summary>
/// <param name="inputs"></param>
/// <param name="outputNames"></param>
/// <param name="options"></param>
/// <returns>Output Tensors in a Dictionary</returns>
public IReadOnlyCollection<NamedOnnxValue> Run(IReadOnlyCollection<NamedOnnxValue> inputs, IReadOnlyCollection<string> outputNames, RunOptions options = new RunOptions())
{
var inputNames = new string[inputs.Count];
var inputTensors = new IntPtr[inputs.Count];
var pinnedBufferHandles = new System.Buffers.MemoryHandle[inputs.Count];
int offset = 0;
foreach (var input in inputs)
{
inputNames[offset] = input.Name;
// create a Tensor from the input if feasible, else throw a NotSupported exception for now
input.ToNativeOnnxValue(out inputTensors[offset], out pinnedBufferHandles[offset]);
offset++;
}
string[] outputNamesArray = outputNames.ToArray();
IntPtr[] outputValueArray = new IntPtr[outputNames.Count];
IntPtr status = NativeMethods.ONNXRuntimeRunInference(
this._nativeHandle,
inputNames,
inputTensors,
(ulong)(inputTensors.Length), /* TODO: size_t, make it portable for x86 arm */
outputNamesArray,
(ulong)outputNames.Count, /* TODO: size_t, make it portable for x86 and arm */
outputValueArray /* An array of output value pointers. Array must be allocated by the caller */
);
try
{
NativeApiStatus.VerifySuccess(status);
var result = new List<NamedOnnxValue>();
for (uint i = 0; i < outputValueArray.Length; i++)
{
result.Add(NamedOnnxValue.CreateFromOnnxValue(outputNamesArray[i], outputValueArray[i]));
}
return result;
}
catch (OnnxRuntimeException e)
{
//clean up the individual output tensors if it is not null;
for (uint i = 0; i < outputValueArray.Length; i++)
{
if (outputValueArray[i] != IntPtr.Zero)
{
NativeMethods.ReleaseONNXValue(outputValueArray[i]);
}
}
throw e;
}
finally
{
// always unpin the input buffers, and delete the native Onnx value objects
for (int i = 0; i < inputs.Count; i++)
{
NativeMethods.ReleaseONNXValue(inputTensors[i]); // this should not release the buffer, but should delete the native tensor object
pinnedBufferHandles[i].Dispose();
}
}
}
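// Illustrative call (the output name is a placeholder for whatever the loaded model defines):
//   var outputs = session.Run(inputs, new[] { "softmaxout_1" });
//   foreach (var o in outputs) Console.WriteLine(o.Name);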
#endregion
#region private methods
private string GetOutputName(ulong index)
{
IntPtr nameHandle = IntPtr.Zero;
string str = null;
IntPtr status = NativeMethods.ONNXRuntimeInferenceSessionGetOutputName(
_nativeHandle,
index,
NativeMemoryAllocator.DefaultInstance.Handle,
out nameHandle);
try
{
NativeApiStatus.VerifySuccess(status);
str = Marshal.PtrToStringAnsi(nameHandle); //assumes charset = ANSI
}
finally
{
if (nameHandle != IntPtr.Zero)
{
NativeMemoryAllocator.DefaultInstance.FreeMemory(nameHandle);
}
}
return str;
}
private string GetInputName(ulong index)
{
IntPtr nameHandle = IntPtr.Zero;
string str = null;
IntPtr status = NativeMethods.ONNXRuntimeInferenceSessionGetInputName(
_nativeHandle,
index,
NativeMemoryAllocator.DefaultInstance.Handle,
out nameHandle);
try
{
NativeApiStatus.VerifySuccess(status);
str = Marshal.PtrToStringAnsi(nameHandle); //assumes charset = ANSI
}
finally
{
if (nameHandle != IntPtr.Zero)
{
NativeMemoryAllocator.DefaultInstance.FreeMemory(nameHandle);
}
}
return str;
}
#endregion
#region destructors disposers
~InferenceSession()
{
Dispose(false);
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// cleanup managed resources
}
// cleanup unmanaged resources
if (_nativeHandle != IntPtr.Zero)
{
NativeMethods.ReleaseONNXSession(_nativeHandle);
}
}
#endregion
}
public struct NodeMetadata
{
//TODO: shape and type are not yet available from the C API, so this struct may change as the implementation evolves
public uint[] Shape
{
get; internal set;
}
public System.Type Type
{
get; internal set;
}
}
public struct ModelMetadata
{
//TODO: placeholder for Model metadata. Currently C-API does not expose this
}
}

View File

@ -0,0 +1,15 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netstandard1.1</TargetFramework>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<SignAssembly>true</SignAssembly>
<DelaySign>false</DelaySign>
<AssemblyOriginatorKeyFile>OnnxRuntime.snk</AssemblyOriginatorKeyFile>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="System.Numerics.Tensors" Version="0.1.0" />
</ItemGroup>
</Project>

View File

@ -0,0 +1,323 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.Numerics.Tensors;
using System.Buffers;
using System.Collections;
using System.Diagnostics;
namespace Microsoft.ML.OnnxRuntime
{
public class NamedOnnxValue
{
protected Object _value;
protected string _name;
public NamedOnnxValue(string name, Object value)
{
_name = name;
_value = value;
}
public string Name { get { return _name; } }
public Tensor<T> AsTensor<T>()
{
return _value as Tensor<T>; // will return null if not castable
}
/// <summary>
/// Attempts to pin the buffer and create a native OnnxValue from it; the pinned MemoryHandle is passed to the output parameter.
/// In that case the pinned handle must be kept alive until the native OnnxValue has been used, and then disposed.
/// If the buffer cannot be pinned, the OnnxValue is created from a copy of the data and the output pinnedMemoryHandle
/// contains a default value.
/// The element type of the value is inferred while creating the OnnxValue.
/// </summary>
/// <param name="onnxValue"></param>
/// <param name="pinnedMemoryHandle"></param>
internal void ToNativeOnnxValue(out IntPtr onnxValue, out MemoryHandle pinnedMemoryHandle)
{
//try to cast _value to Tensor<T>
TensorElementType nativeElementType = TensorElementType.DataTypeMax; //invalid
IntPtr dataBufferPointer = IntPtr.Zero;
int dataBufferLength = 0;
ReadOnlySpan<int> shape = null;
int rank = 0;
onnxValue = IntPtr.Zero;
if (TryPinAsTensor<float>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<double>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<int>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<uint>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<long>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<ulong>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<short>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<ushort>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<byte>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
else if (TryPinAsTensor<bool>(out pinnedMemoryHandle,
out dataBufferPointer,
out dataBufferLength,
out shape,
out rank,
out nativeElementType
))
{
}
//TODO: add other types
else
{
// nothing to cleanup here, since no memory has been pinned
throw new NotSupportedException("The inference value " + nameof(_value) + " is not of a supported type");
}
Debug.Assert(dataBufferPointer != IntPtr.Zero, "dataBufferPointer must be non-null after obtaining the pinned buffer");
// copy the shape to a ulong[] to match the native size_t[]
ulong[] longShape = new ulong[rank];
for (int i = 0; i < rank; i++)
{
longShape[i] = (ulong)shape[i];
}
IntPtr status = NativeMethods.ONNXRuntimeCreateTensorWithDataAsONNXValue(
NativeMemoryAllocatorInfo.DefaultInstance.Handle,
dataBufferPointer,
(ulong)(dataBufferLength),
longShape,
(ulong)rank,
nativeElementType,
out onnxValue
);
try
{
NativeApiStatus.VerifySuccess(status);
}
catch (OnnxRuntimeException e)
{
pinnedMemoryHandle.Dispose();
throw e;
}
}
internal static NamedOnnxValue CreateFromOnnxValue(string name, IntPtr nativeOnnxValue)
{
NamedOnnxValue result = null;
if (true /* TODO: check native data type when API available. assuming Tensor<float> for now */)
{
NativeOnnxTensorMemory<float> nativeTensorWrapper = new NativeOnnxTensorMemory<float>(nativeOnnxValue);
DenseTensor<float> dt = new DenseTensor<float>(nativeTensorWrapper.Memory, nativeTensorWrapper.Dimensions);
result = new NamedOnnxValue(name, dt);
}
return result;
}
private bool TryPinAsTensor<T>(
out MemoryHandle pinnedMemoryHandle,
out IntPtr dataBufferPointer,
out int dataBufferLength,
out ReadOnlySpan<int> shape,
out int rank,
out TensorElementType nativeElementType
)
{
nativeElementType = TensorElementType.DataTypeMax; //invalid
dataBufferPointer = IntPtr.Zero;
dataBufferLength = 0;
shape = null;
rank = 0;
pinnedMemoryHandle = default(MemoryHandle);
if (_value is Tensor<T>)
{
Tensor<T> t = _value as Tensor<T>;
if (t.IsReversedStride)
{
//TODO: not sure how to support reverse stride. may be able to calculate the shape differently
throw new NotSupportedException(nameof(Tensor<T>) + " of reverseStride is not supported");
}
DenseTensor<T> dt = null;
if (_value is DenseTensor<T>)
{
dt = _value as DenseTensor<T>;
}
else
{
dt = t.ToDenseTensor();
}
shape = dt.Dimensions; // does not work for reverse stride
rank = dt.Rank;
pinnedMemoryHandle = dt.Buffer.Pin();
unsafe
{
dataBufferPointer = (IntPtr)pinnedMemoryHandle.Pointer;
}
// find the native type
if (typeof(T) == typeof(float))
{
nativeElementType = TensorElementType.Float;
dataBufferLength = dt.Buffer.Length * sizeof(float);
}
else if (typeof(T) == typeof(double))
{
nativeElementType = TensorElementType.Double;
dataBufferLength = dt.Buffer.Length * sizeof(double);
}
else if (typeof(T) == typeof(int))
{
nativeElementType = TensorElementType.Int32;
dataBufferLength = dt.Buffer.Length * sizeof(int);
}
else if (typeof(T) == typeof(uint))
{
nativeElementType = TensorElementType.UInt32;
dataBufferLength = dt.Buffer.Length * sizeof(uint);
}
else if (typeof(T) == typeof(long))
{
nativeElementType = TensorElementType.Int64;
dataBufferLength = dt.Buffer.Length * sizeof(long);
}
else if (typeof(T) == typeof(ulong))
{
nativeElementType = TensorElementType.UInt64;
dataBufferLength = dt.Buffer.Length * sizeof(ulong);
}
else if (typeof(T) == typeof(short))
{
nativeElementType = TensorElementType.Int16;
dataBufferLength = dt.Buffer.Length * sizeof(short);
}
else if (typeof(T) == typeof(ushort))
{
nativeElementType = TensorElementType.UInt16;
dataBufferLength = dt.Buffer.Length * sizeof(ushort);
}
else if (typeof(T) == typeof(byte))
{
nativeElementType = TensorElementType.UInt8;
dataBufferLength = dt.Buffer.Length * sizeof(byte);
}
//TODO: Not supporting boolean for now. bool is non-blittable, the interop needs some care, and possibly need to copy
//else if (typeof(T) == typeof(bool))
//{
//}
else
{
//TODO: may extend the supported types
// unsupported element type: unpin the buffer and report failure rather than returning an invalid sentinel
pinnedMemoryHandle.Dispose();
nativeElementType = TensorElementType.DataTypeMax;
return false;
}
return true;
}
return false;
}
// may expose different types of getters in future
}
internal enum TensorElementType
{
Float = 1,
UInt8 = 2,
Int8 = 3,
UInt16 = 4,
Int16 = 5,
Int32 = 6,
Int64 = 7,
String = 8,
Bool = 9,
Float16 = 10,
Double = 11,
UInt32 = 12,
UInt64 = 13,
Complex64 = 14,
Complex128 = 15,
BFloat16 = 16,
DataTypeMax = 17
}
}


@ -0,0 +1,35 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.InteropServices;
namespace Microsoft.ML.OnnxRuntime
{
class NativeApiStatus
{
private static string GetErrorMessage(IntPtr /*(ONNXStatus*)*/status)
{
IntPtr nativeString = NativeMethods.ONNXRuntimeGetErrorMessage(status);
string str = Marshal.PtrToStringAnsi(nativeString); //assumes charset = ANSI
return str;
}
/// <summary>
/// Checks whether the native status indicates success. Otherwise constructs an appropriate exception and throws it.
/// Releases the native status object, as needed.
/// </summary>
/// <param name="nativeStatus"></param>
/// <throws></throws>
public static void VerifySuccess(IntPtr nativeStatus)
{
if (nativeStatus != IntPtr.Zero)
{
ErrorCode statusCode = NativeMethods.ONNXRuntimeGetErrorCode(nativeStatus);
string errorMessage = GetErrorMessage(nativeStatus);
NativeMethods.ReleaseONNXStatus(nativeStatus);
throw new OnnxRuntimeException(statusCode, errorMessage);
}
}
}
}


@ -0,0 +1,154 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.Runtime.InteropServices;
namespace Microsoft.ML.OnnxRuntime
{
internal class NativeMemoryAllocatorInfo : SafeHandle
{
protected static readonly Lazy<NativeMemoryAllocatorInfo> _defaultCpuAllocInfo = new Lazy<NativeMemoryAllocatorInfo>(CreateCpuAllocatorInfo);
private static NativeMemoryAllocatorInfo CreateCpuAllocatorInfo()
{
IntPtr allocInfo = IntPtr.Zero;
try
{
IntPtr status = NativeMethods.ONNXRuntimeCreateCpuAllocatorInfo(NativeMethods.AllocatorType.DeviceAllocator, NativeMethods.MemoryType.Cpu, out allocInfo);
NativeApiStatus.VerifySuccess(status);
}
catch (Exception)
{
if (allocInfo != IntPtr.Zero)
{
Delete(allocInfo);
}
throw; // rethrow without resetting the stack trace
}
return new NativeMemoryAllocatorInfo(allocInfo);
}
internal static NativeMemoryAllocatorInfo DefaultInstance
{
get
{
return _defaultCpuAllocInfo.Value;
}
}
internal IntPtr Handle // May throw an exception on every access if the constructor threw
{
get
{
return handle;
}
}
public override bool IsInvalid
{
get
{
return (handle == IntPtr.Zero);
}
}
private NativeMemoryAllocatorInfo(IntPtr allocInfo)
: base(IntPtr.Zero, true) //set 0 as invalid pointer
{
handle = allocInfo;
}
private static void Delete(IntPtr nativePtr)
{
NativeMethods.ReleaseONNXRuntimeAllocatorInfo(nativePtr);
}
protected override bool ReleaseHandle()
{
Delete(handle);
return true;
}
}
internal class NativeMemoryAllocator : SafeHandle
{
protected static readonly Lazy<NativeMemoryAllocator> _defaultInstance = new Lazy<NativeMemoryAllocator>(CreateDefaultCpuAllocator);
private static NativeMemoryAllocator CreateDefaultCpuAllocator()
{
IntPtr allocator = IntPtr.Zero;
try
{
IntPtr status = NativeMethods.ONNXRuntimeCreateDefaultAllocator(out allocator);
NativeApiStatus.VerifySuccess(status);
}
catch (Exception)
{
if (allocator != IntPtr.Zero)
{
Delete(allocator);
}
throw; // rethrow without resetting the stack trace
}
return new NativeMemoryAllocator(allocator);
}
internal static NativeMemoryAllocator DefaultInstance // May throw an exception on every access if the constructor threw
{
get
{
return _defaultInstance.Value;
}
}
/// <summary>
/// Releases native memory previously allocated by the allocator
/// </summary>
/// <param name="memory"></param>
internal void FreeMemory(IntPtr memory)
{
NativeMethods.ONNXRuntimeAllocatorFree(handle, memory);
}
public override bool IsInvalid
{
get
{
return (this.handle == IntPtr.Zero);
}
}
internal IntPtr Handle
{
get
{
return handle;
}
}
protected NativeMemoryAllocator(IntPtr allocator)
: base(IntPtr.Zero, true)
{
this.handle = allocator;
}
protected static void Delete(IntPtr allocator)
{
NativeMethods.ONNXRuntimeReleaseObject(allocator);
}
protected override bool ReleaseHandle()
{
Delete(this.handle);
return true;
}
}
}


@ -0,0 +1,288 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.InteropServices;
namespace Microsoft.ML.OnnxRuntime
{
/// <summary>
/// P/Invoke declarations for the native ONNX Runtime C API
/// </summary>
internal static class NativeMethods
{
private const string nativeLib = "onnxruntime.dll";
internal const CharSet charSet = CharSet.Ansi;
#region Runtime/Environment API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* ONNXStatus* */ONNXRuntimeInitialize(
LogLevel default_warning_level,
string logId,
out IntPtr /*(ONNXEnv*)*/ env);
// ReleaseONNXEnv should not be used
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXEnv(IntPtr /*(ONNXEnv*)*/ env);
#endregion Runtime/Environment API
#region Status API
[DllImport(nativeLib, CharSet = charSet)]
public static extern ErrorCode ONNXRuntimeGetErrorCode(IntPtr /*(ONNXStatus*)*/status);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* char* */ONNXRuntimeGetErrorMessage(IntPtr /* (ONNXStatus*) */status);
// returns char*, need to convert to string by the caller.
// does not free the underlying ONNXStatus*
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXStatus(IntPtr /*(ONNXStatus*)*/ statusPtr);
#endregion Status API
#region InferenceSession API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* ONNXStatus* */ONNXRuntimeCreateInferenceSession(
IntPtr /* (ONNXEnv*) */ environment,
[MarshalAs(UnmanagedType.LPWStr)]string modelPath, //the model path is consumed as a wchar* in the C-api
IntPtr /* (ONNXRuntimeSessionOptions*) */sessionOptions,
out IntPtr /**/ session);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeRunInferenceAndFetchAll(
IntPtr /*(ONNXSessionPtr)*/ session,
string[] inputNames,
IntPtr[] /*(ONNXValuePtr[])*/ inputValues,
ulong inputLength, // size_t, TODO: make it portable for x86, arm
out IntPtr /* (ONNXValueListPtr*)*/ outputValues,
out ulong /* (size_t*) */ outputLength); //TODO: make it portable for x86, arm
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeRunInference(
IntPtr /*(ONNXSession*)*/ session,
string[] inputNames,
IntPtr[] /* (ONNXValue*[])*/ inputValues,
ulong inputCount, /* TODO: size_t, make it portable for x86 arm */
string[] outputNames,
ulong outputCount, /* TODO: size_t, make it portable for x86 and arm */
[MarshalAs(UnmanagedType.LPArray, SizeParamIndex = 5 /*index of outputCount*/)][In, Out]
IntPtr[] outputValues /* An array of output value pointers. Array must be allocated by the caller */
);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeInferenceSessionGetInputCount(
IntPtr /*(ONNXSession*)*/ session,
out ulong /* TODO: size_t */ count);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeInferenceSessionGetOutputCount(
IntPtr /*(ONNXSession*)*/ session,
out ulong /*TODO: size_t port*/ count);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ONNXRuntimeInferenceSessionGetInputName(
IntPtr /*(ONNXSession*)*/ session,
ulong index, //TODO: port size_t
IntPtr /*(ONNXRuntimeAllocator*)*/ allocator,
out IntPtr /*(char**)*/name);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ONNXRuntimeInferenceSessionGetOutputName(
IntPtr /*(ONNXSession*)*/ session,
ulong index, //TODO: port size_t
IntPtr /*(ONNXRuntimeAllocator*)*/ allocator,
out IntPtr /*(char**)*/name);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXSession(IntPtr /*(ONNXSession*)*/session);
#endregion InferenceSession API
#region SessionOptions API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*ONNXRuntimeSessionOptions* */ ONNXRuntimeCreateSessionOptions();
//DEFINE_RUNTIME_CLASS(ONNXRuntimeSessionOptions)
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXRuntimeSessionOptions(IntPtr /*(ONNXRuntimeSessionOptions*)*/ sessionOptions);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableSequentialExecution(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableSequentialExecution(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableProfiling(IntPtr /* ONNXRuntimeSessionOptions* */ options, string profilePathPrefix);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableProfiling(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableMemPattern(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableMemPattern(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeEnableCpuMemArena(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableCpuMemArena(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeSetSessionLogId(IntPtr /* ONNXRuntimeSessionOptions* */ options, string logId);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeSetSessionLogVerbosityLevel(IntPtr /* ONNXRuntimeSessionOptions* */ options, LogLevel sessionLogVerbosityLevel);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeSetSessionThreadPoolSize(IntPtr /* ONNXRuntimeSessionOptions* */ options, int sessionThreadPoolSize);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeEnableCudaProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options, int deviceId);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableCudaProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern int ONNXRuntimeEnableMklProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeDisableMklProvider(IntPtr /* ONNXRuntimeSessionOptions* */ options);
#endregion
#region Allocator/AllocatorInfo API
//TODO: consider exposing them publicly, when allocator API is exposed
public enum AllocatorType
{
DeviceAllocator = 0,
ArenaAllocator = 1
}
//TODO: consider exposing them publicly when allocator API is exposed
public enum MemoryType
{
CpuInput = -2, // Any CPU memory used by non-CPU execution provider
CpuOutput = -1, // CPU accessible memory outputted by non-CPU execution provider, i.e. CUDA_PINNED
Cpu = CpuOutput, // temporary CPU accessible memory allocated by non-CPU execution provider, i.e. CUDA_PINNED
Default = 0, // the default allocator for execution provider
}
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* (ONNXStatus*)*/ ONNXRuntimeCreateAllocatorInfo(
IntPtr /*(const char*) */name,
AllocatorType allocatorType,
int identifier,
MemoryType memType,
out IntPtr /*(ONNXRuntimeAllocatorInfo*)*/ allocatorInfo // memory ownership transferred to caller
);
//ONNXRUNTIME_API_STATUS(ONNXRuntimeCreateCpuAllocatorInfo, enum ONNXRuntimeAllocatorType type, enum ONNXRuntimeMemType mem_type1, _Out_ ONNXRuntimeAllocatorInfo** out)
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* (ONNXStatus*)*/ ONNXRuntimeCreateCpuAllocatorInfo(
AllocatorType allocatorType,
MemoryType memoryType,
out IntPtr /*(ONNXRuntimeAllocatorInfo*)*/ allocatorInfo
);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXRuntimeAllocatorInfo(IntPtr /*(ONNXRuntimeAllocatorInfo*)*/ allocatorInfo);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ONNXRuntimeCreateDefaultAllocator(out IntPtr /*(ONNXRuntimeAllocator**)*/ allocator);
/// <summary>
/// Releases/Unrefs any object, including the Allocator
/// </summary>
/// <param name="ptr"></param>
/// <returns>remaining ref count</returns>
[DllImport(nativeLib, CharSet = charSet)]
public static extern uint /*remaining ref count*/ ONNXRuntimeReleaseObject(IntPtr /*(void*)*/ ptr);
/// <summary>
/// Release any object allocated by an allocator
/// </summary>
/// <param name="allocator"></param>
/// <param name="memory"></param>
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeAllocatorFree(IntPtr allocator, IntPtr memory);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(const struct ONNXRuntimeAllocatorInfo*)*/ ONNXRuntimeAllocatorGetInfo(IntPtr /*(const ONNXRuntimeAllocator*)*/ ptr);
#endregion Allocator/AllocatorInfo API
#region Tensor/OnnxValue API
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /* ONNXStatus */ ONNXRuntimeCreateTensorWithDataAsONNXValue(
IntPtr /* (const ONNXRuntimeAllocatorInfo*) */ allocatorInfo,
IntPtr /* (void*) */dataBufferHandle,
ulong dataLength, //size_t, TODO: make it portable for x86, arm
ulong[] shape, //size_t* or size_t[], TODO: make it portable for x86, arm
ulong shapeLength, //size_t, TODO: make it portable for x86, arm
TensorElementType type,
out IntPtr /* ONNXValuePtr* */ outputValue);
/// This function doesn't work with string tensors.
/// This is a no-copy method; the returned pointer is only valid until the backing ONNXValuePtr is freed.
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorMutableData(IntPtr /*(ONNXValue*)*/ value, out IntPtr /* (void**)*/ dataBufferHandle);
//[DllImport(nativeLib, CharSet = charSet)]
//public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorShapeDimCount(IntPtr /*(ONNXValue*)*/ value, out ulong dimension); //size_t TODO: make it portable for x86, arm
//[DllImport(nativeLib, CharSet = charSet)]
//public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorShapeElementCount(IntPtr /*(ONNXValue*)*/value, out ulong count);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeGetTensorShapeAndType(IntPtr /*(ONNXValue*)*/ value, out IntPtr /*(struct ONNXRuntimeTensorTypeAndShapeInfo*)*/ typeAndShapeInfo);
[DllImport(nativeLib, CharSet = charSet)]
public static extern TensorElementType ONNXRuntimeGetTensorElementType(IntPtr /*(const struct ONNXRuntimeTensorTypeAndShapeInfo*)*/ typeAndShapeInfo);
[DllImport(nativeLib, CharSet = charSet)]
public static extern ulong /*TODO: port for size_t */ONNXRuntimeGetNumOfDimensions(IntPtr /*(const struct ONNXRuntimeTensorTypeAndShapeInfo*)*/ typeAndShapeInfo);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ONNXRuntimeGetDimensions(
IntPtr /*(const struct ONNXRuntimeTensorTypeAndShapeInfo*)*/ typeAndShapeInfo,
long[] dim_values,
ulong dim_values_length);
/**
* How many elements does this tensor have.
* May return a negative value
* e.g.
* [] -> 1
* [1,3,4] -> 12
* [2,0,4] -> 0
* [-1,3,4] -> -1
*/
[DllImport(nativeLib, CharSet = charSet)]
public static extern long ONNXRuntimeGetTensorShapeElementCount(IntPtr /*(const struct ONNXRuntimeTensorTypeAndShapeInfo*)*/ typeAndShapeInfo);
[DllImport(nativeLib, CharSet = charSet)]
public static extern IntPtr /*(ONNXValuePtr)*/ ONNXRuntimeONNXValueListGetNthValue(IntPtr /*(ONNXValueListPtr)*/ list, ulong index); // 0-based index TODO: size_t, make it portable for x86, arm
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXValue(IntPtr /*(ONNXValue*)*/ value);
[DllImport(nativeLib, CharSet = charSet)]
public static extern void ReleaseONNXValueList(IntPtr /*(ONNXValueList*)*/ valueList);
#endregion
} //class NativeMethods
} //namespace


@ -0,0 +1,233 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
using System.Buffers;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Threading;
namespace Microsoft.ML.OnnxRuntime
{
internal class NativeOnnxTensorMemory<T> : MemoryManager<T>
{
private bool _disposed;
private int _referenceCount;
private IntPtr _onnxValueHandle;
private IntPtr _dataBufferHandle;
private int _elementCount;
private int _elementWidth;
private int[] _dimensions;
public NativeOnnxTensorMemory(IntPtr onnxValueHandle)
{
IntPtr typeAndShape = IntPtr.Zero;
try
{
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeGetTensorShapeAndType(onnxValueHandle, out typeAndShape));
TensorElementType elemType = NativeMethods.ONNXRuntimeGetTensorElementType(typeAndShape);
Type type = null;
int width = 0;
GetTypeAndWidth(elemType, out type, out width);
if (typeof(T) != type)
throw new NotSupportedException(nameof(NativeOnnxTensorMemory<T>) + " does not support T = " + typeof(T));
_elementWidth = width;
_onnxValueHandle = onnxValueHandle;
// derive the databuffer pointer, element_count, element_width, and shape
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeGetTensorMutableData(_onnxValueHandle, out _dataBufferHandle));
// throws OnnxRuntimeException if native call failed
ulong dimension = NativeMethods.ONNXRuntimeGetNumOfDimensions(typeAndShape);
long count = NativeMethods.ONNXRuntimeGetTensorShapeElementCount(typeAndShape); // count can be negative.
if (count < 0)
{
throw new NotSupportedException("Symbolic dimensions in the tensor is not supported");
}
long[] shape = new long[dimension];
NativeMethods.ONNXRuntimeGetDimensions(typeAndShape, shape, dimension); //Note: shape must be alive during the call
_elementCount = (int)count;
_dimensions = new int[dimension];
for (ulong i = 0; i < dimension; i++)
{
_dimensions[i] = (int)shape[i];
}
}
catch (Exception)
{
//TODO: cleanup any partially created state
//Do not call ReleaseTensor here. If the constructor has thrown an exception, this NativeOnnxTensorWrapper was never created, so the caller should take appropriate action to dispose
throw; // rethrow without resetting the stack trace
}
finally
{
if (typeAndShape != IntPtr.Zero)
{
NativeMethods.ONNXRuntimeReleaseObject(typeAndShape);
}
}
}
~NativeOnnxTensorMemory()
{
Dispose(false);
}
public bool IsDisposed => _disposed;
protected bool IsRetained => _referenceCount > 0;
public int[] Dimensions
{
get
{
return _dimensions;
}
}
public int Rank
{
get
{
return _dimensions.Length;
}
}
public override Span<T> GetSpan()
{
if (IsDisposed)
throw new ObjectDisposedException(nameof(NativeOnnxTensorMemory<T>));
Span<T> span = null;
unsafe
{
span = new Span<T>((void*)_dataBufferHandle, _elementCount);
}
return span;
}
public override MemoryHandle Pin(int elementIndex = 0)
{
//Note: always pin the full buffer and return
unsafe
{
if (elementIndex >= _elementCount)
{
throw new ArgumentOutOfRangeException(nameof(elementIndex));
}
Retain();
return new MemoryHandle(IntPtr.Add(_dataBufferHandle, elementIndex * _elementWidth).ToPointer()); // avoid truncating the pointer on 64-bit; could not use Unsafe.Add
}
}
public override void Unpin()
{
Release();
}
private bool Release()
{
int newRefCount = Interlocked.Decrement(ref _referenceCount);
if (newRefCount < 0)
{
throw new InvalidOperationException("Unmatched Release/Retain");
}
return newRefCount != 0;
}
private void Retain()
{
if (IsDisposed)
{
throw new ObjectDisposedException(nameof(NativeOnnxTensorMemory<T>));
}
Interlocked.Increment(ref _referenceCount);
}
protected override void Dispose(bool disposing)
{
if (_disposed)
{
return;
}
if (disposing)
{
// do managed objects cleanup
}
NativeMethods.ReleaseONNXValue(_onnxValueHandle);
_disposed = true;
}
protected override bool TryGetArray(out ArraySegment<T> arraySegment)
{
// cannot expose managed array
arraySegment = default(ArraySegment<T>);
return false;
}
internal static void GetTypeAndWidth(TensorElementType elemType, out Type type, out int width)
{
switch (elemType)
{
case TensorElementType.Float:
type = typeof(float);
width = sizeof(float);
break;
case TensorElementType.Double:
type = typeof(double);
width = sizeof(double);
break;
case TensorElementType.Int16:
type = typeof(short);
width = sizeof(short);
break;
case TensorElementType.UInt16:
type = typeof(ushort);
width = sizeof(ushort);
break;
case TensorElementType.Int32:
type = typeof(int);
width = sizeof(int);
break;
case TensorElementType.UInt32:
type = typeof(uint);
width = sizeof(uint);
break;
case TensorElementType.Int64:
type = typeof(long);
width = sizeof(long);
break;
case TensorElementType.UInt64:
type = typeof(ulong);
width = sizeof(ulong);
break;
case TensorElementType.UInt8:
type = typeof(byte);
width = sizeof(byte);
break;
default:
type = null;
width = 0;
break;
}
}
}
}


@ -0,0 +1,82 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Collections.Generic;
namespace Microsoft.ML.OnnxRuntime
{
internal struct GlobalOptions //Options are currently not accessible to user
{
public string LogId { get; set; }
public LogLevel LogLevel { get; set; }
}
internal enum LogLevel
{
Verbose = 0,
Info = 1,
Warning = 2,
Error = 3,
Fatal = 4
}
/// <summary>
/// This class initializes the process-global ONNX Runtime.
/// C# API users do not need to access this, so it is kept internal.
/// </summary>
internal sealed class OnnxRuntime : SafeHandle
{
private static readonly Lazy<OnnxRuntime> _instance = new Lazy<OnnxRuntime>(()=> new OnnxRuntime());
internal static IntPtr Handle // May throw an exception on every access if the constructor threw
{
get
{
return _instance.Value.handle;
}
}
public override bool IsInvalid
{
get
{
return (handle == IntPtr.Zero);
}
}
private OnnxRuntime() //Problem: it is not possible to pass any option for a Singleton
:base(IntPtr.Zero, true)
{
handle = IntPtr.Zero;
try
{
NativeApiStatus.VerifySuccess(NativeMethods.ONNXRuntimeInitialize(LogLevel.Warning, @"CSharpOnnxRuntime", out handle));
}
catch (OnnxRuntimeException)
{
if (handle != IntPtr.Zero)
{
Delete(handle);
handle = IntPtr.Zero;
}
throw; // rethrow without resetting the stack trace
}
}
private static void Delete(IntPtr nativePtr)
{
NativeMethods.ReleaseONNXEnv(nativePtr);
}
protected override bool ReleaseHandle()
{
Delete(handle);
return true;
}
}
}

Binary data
csharp/src/Microsoft.ML.OnnxRuntime/OnnxRuntime.snk Normal file

Binary file not shown.


@ -0,0 +1,59 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.Collections.Generic;
using System.Text;
namespace Microsoft.ML.OnnxRuntime
{
public class SessionOptions : IDisposable
{
private static SessionOptions _defaultOptions = new SessionOptions();
private IntPtr _nativeHandle;
public SessionOptions()
{
_nativeHandle = NativeMethods.ONNXRuntimeCreateSessionOptions();
}
internal IntPtr NativeHandle
{
get
{
return _nativeHandle;
}
}
public static SessionOptions Default
{
get
{
return _defaultOptions;
}
}
#region destructors disposers
~SessionOptions()
{
Dispose(false);
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// cleanup managed resources
}
// cleanup unmanaged resources: release the native session options handle
if (_nativeHandle != IntPtr.Zero)
{
NativeMethods.ReleaseONNXRuntimeSessionOptions(_nativeHandle);
_nativeHandle = IntPtr.Zero;
}
}
#endregion
}
}


@ -0,0 +1,111 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Numerics.Tensors;
using Xunit;
using Microsoft.ML.OnnxRuntime;
namespace Microsoft.ML.OnnxRuntime.Tests
{
public class InferenceTest
{
[Fact]
public void CanCreateAndDisposeSessionWithModelPath()
{
string modelPath = Directory.GetCurrentDirectory() + @"\squeezenet.onnx";
using (var session = new InferenceSession(modelPath))
{
Assert.NotNull(session);
Assert.NotNull(session.InputMetadata);
Assert.Equal(1, session.InputMetadata.Count); // 1 input node
Assert.True(session.InputMetadata.ContainsKey("data_0")); // input node name
Assert.NotNull(session.OutputMetadata);
Assert.Equal(1, session.OutputMetadata.Count); // 1 output node
Assert.True(session.OutputMetadata.ContainsKey("softmaxout_1")); // output node name
//TODO: verify shape/type of the input/output nodes when API available
}
}
[Fact]
private void CanRunInferenceOnAModel()
{
string modelPath = Directory.GetCurrentDirectory() + @"\squeezenet.onnx";
using (var session = new InferenceSession(modelPath))
{
var inputMeta = session.InputMetadata;
// User should be able to detect input name/type/shape from the metadata.
// Currently the InputMetadata implementation is incomplete, so assume a Tensor<float> of predefined dimensions.
var shape0 = new int[] { 1, 3, 224, 224 };
float[] inputData0 = LoadTensorFromFile(@"bench.in");
var tensor = new DenseTensor<float>(inputData0, shape0);
var container = new List<NamedOnnxValue>();
container.Add(new NamedOnnxValue("data_0", tensor));
// Run the inference
var results = session.Run(container); // results is an IReadOnlyList<NamedOnnxValue> container
Assert.Equal(1, results.Count);
float[] expectedOutput = LoadTensorFromFile(@"bench.expected_out");
float errorMargin = 1e-6F;
// validate the results
foreach (var r in results)
{
Assert.Equal("softmaxout_1", r.Name);
var resultTensor = r.AsTensor<float>();
int[] expectedDimensions = { 1, 1000, 1, 1 }; // hardcoded for now for the test data
Assert.Equal(expectedDimensions.Length, resultTensor.Rank);
var resultDimensions = resultTensor.Dimensions;
for (int i = 0; i < expectedDimensions.Length; i++)
{
Assert.Equal(expectedDimensions[i], resultDimensions[i]);
}
var resultArray = r.AsTensor<float>().ToArray();
Assert.Equal(expectedOutput.Length, resultArray.Length);
for (int i = 0; i < expectedOutput.Length; i++)
{
Assert.InRange<float>(resultArray[i], expectedOutput[i] - errorMargin, expectedOutput[i] + errorMargin);
}
}
}
}
static float[] LoadTensorFromFile(string filename)
{
var tensorData = new List<float>();
// read data from file
using (var inputFile = new System.IO.StreamReader(filename))
{
inputFile.ReadLine(); //skip the input name
string[] dataStr = inputFile.ReadLine().Split(new char[] { ',', '[', ']' }, StringSplitOptions.RemoveEmptyEntries);
for (int i = 0; i < dataStr.Length; i++)
{
tensorData.Add(Single.Parse(dataStr[i]));
}
}
return tensorData.ToArray();
}
}
}


@ -0,0 +1,38 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp2.0</TargetFramework>
<IsPackable>false</IsPackable>
<OnnxRuntimeCsharpRoot>..\..</OnnxRuntimeCsharpRoot>
<buildDirectory Condition="'$(buildDirectory)'==''">$(OnnxRuntimeCsharpRoot)\..\build\Windows</buildDirectory>
<NativeBuildOutputDir>$(buildDirectory)\$(Configuration)\$(Configuration)</NativeBuildOutputDir>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.8.0" />
<PackageReference Include="xunit" Version="2.4.0" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.0" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="$(OnnxRuntimeCsharpRoot)\src\Microsoft.ML.OnnxRuntime\Microsoft.ML.OnnxRuntime.csproj" />
</ItemGroup>
<ItemGroup>
<None Include="$(NativeBuildOutputDir)\*.dll;$(NativeBuildOutputDir)\*.pdb">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
<None Include="$(OnnxRuntimeCSharpRoot)\testdata\*">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
<Visible>false</Visible>
</None>
</ItemGroup>
<ItemGroup>
<Service Include="{508349b6-6b84-4df5-91f0-309beebad82d}" />
</ItemGroup>
</Project>

2
csharp/testdata/bench.expected_out vendored Normal file

File diff suppressed because one or more lines are too long

2
csharp/testdata/bench.in vendored Normal file

File diff suppressed because one or more lines are too long

Binary data
csharp/testdata/squeezenet.onnx vendored Normal file

Binary file not shown.

46
docs/ABI.md Normal file

@ -0,0 +1,46 @@
# ONNXRuntime ABI
We release ONNXRuntime as both a static library and a shared library on Windows, Linux and Mac OS X. The [ABI (Application Binary Interface)](https://en.wikipedia.org/wiki/Application_binary_interface) applies only to the shared library. It allows you to upgrade ONNXRuntime to a newer version without recompiling.
The ABI contains:
1. A set of C functions for creating an inference session and loading/running a model. No global variables are exported directly. All of these functions can be used directly from .NET and .NET Core through P/Invoke (see the sketch after this list). All public symbols are C symbols, not C++.
2. (TODO) A C++ API for authoring custom ops and putting them into a separate DLL.
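As an illustration of the P/Invoke point in item 1, here is a minimal C# sketch. It is not part of the product; the extern declaration mirrors the NativeMethods declarations elsewhere in this commit, `onnxruntime.dll` is the Windows binary name, and the numeric log level (2 = warning) is an assumption taken from the managed LogLevel enum.
```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: calling an exported C function directly from .NET via P/Invoke.
static class AbiSketch
{
    [DllImport("onnxruntime.dll", CharSet = CharSet.Ansi)]
    static extern IntPtr /*(ONNXStatus*)*/ ONNXRuntimeInitialize(
        int defaultWarningLevel, string logId, out IntPtr /*(ONNXEnv*)*/ env);

    static void Main()
    {
        IntPtr status = ONNXRuntimeInitialize(2 /* warning */, "abi-sample", out IntPtr env);
        Console.WriteLine(status == IntPtr.Zero ? "initialized" : "initialization failed");
    }
}
```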
# Functionality
[C API](C_API.md)
# Integration
Q: Should I link to ONNXRuntime statically or dynamically?
A: On Windows, any custom op DLL must link to ONNXRuntime dynamically.
Dynamic linking also helps solve the diamond dependency problem. For example, if part of your program depends on ONNX 1.2 but ONNXRuntime depends on ONNX 1.3, then linking to them dynamically works better.
Q: Is there any requirement on the CUDA version? My program depends on CUDA 9.0, but the ONNXRuntime binary was built with CUDA 9.1. Is it ok to put them together?
A: Yes, because ONNXRuntime is statically linked to CUDA.
# Dev Notes
## Global Variables
Global variables may get constructed or destructed inside "DllMain". There are significant limits on what you can safely do in a DLL entry point. See ['DLL General Best Practices'](https://docs.microsoft.com/en-us/windows/desktop/dlls/dynamic-link-library-best-practices). For example, you can't put an ONNXRuntime InferenceSession into a global variable.
## Component Object Model (COM)
ONNXRuntime doesn't expose a COM interface on either Windows or Linux, because .NET Core doesn't support COM on Linux and we need to make ONNXRuntime available to .NET Core.
## Undefined symbols in a shared library
On Windows, you can't build a DLL with undefined symbols; every symbol must be resolved at link time. On Linux, you can.
In this project we set up a rule: when building a shared library, every symbol must be resolved at link time, unless it's a custom op.
For custom ops on Linux, don't pass any libraries (except libc and pthreads) to the linker, so that even an application statically linked to ONNXRuntime can still use the same custom op binary.
## Default visibility
On POSIX systems, please always specify "-fvisibility=hidden" and "-fPIC" when compiling any code in ONNXRuntime shared library.
See [pybind11 FAQ](https://github.com/pybind/pybind11/blob/master/docs/faq.rst#someclass-declared-with-greater-visibility-than-the-type-of-its-field-someclassmember--wattributes)
## RTLD_LOCAL vs RTLD_GLOBAL
RTLD_LOCAL and RTLD_GLOBAL are two flags of the [dlopen(3)](http://pubs.opengroup.org/onlinepubs/9699919799/functions/dlopen.html) function on Linux. The default is RTLD_LOCAL, and there is essentially no equivalent of RTLD_GLOBAL on Windows.
If your application is a shared library that is statically linked to ONNXRuntime and needs to dynamically load a custom op, then it must be loaded with RTLD_GLOBAL. In all other cases you should use RTLD_LOCAL. The ONNXRuntime Python binding is a good example of why RTLD_GLOBAL is sometimes needed.

27
docs/AddingCustomOp.md Normal file

@ -0,0 +1,27 @@
Adding a new op
===============
## A new op can be written and registered with ONNXRuntime in the following 3 ways
### 1. Using a dynamic shared library
* First write the implementation of the op and schema (if required) and assemble them in a shared library.
See [this](../onnxruntime/test/custom_op_shared_lib) for an example. Currently
this is supported for Linux only.
Example of creating a shared lib using g++ on Linux:
```g++ -std=c++14 -shared test_custom_op.cc -o test_custom_op.so -fPIC -I. -Iinclude/onnxruntime -L. -lonnxruntime -DONNX_ML -DONNX_NAMESPACE=onnx```
* Register the shared lib with ONNXRuntime.
See [this](../onnxruntime/test/shared_lib/test_inference.cc) for an example.
### 2. Using RegisterCustomRegistry API
* Implement your kernel and schema (if required) using the OpKernel and OpSchema APIs (headers are in the include folder).
* Create a CustomRegistry object and register your kernel and schema with this registry.
* Register the custom registry with ONNXRuntime using RegisterCustomRegistry API.
See
[this](../onnxruntime/test/framework/local_kernel_registry_test.cc) for an example.
### 3. Contributing the op to ONNXRuntime
This is mostly meant for ops that are in the process of being proposed to ONNX. This way you don't have to wait for an approval from the ONNX team
if the op is required in production today.
See [this](../onnxruntime/contrib_ops) for an example.


@ -0,0 +1,12 @@
# Adding a new execution provider
* All execution providers inherit from
[IExecutionProvider](../include/onnxruntime/core/framework/execution_provider.h)
* The best way to start adding a provider is to start with examples already
added in ONNXRuntime
* [CPU Execution
Provider](../onnxruntime/core/providers/cpu/cpu_execution_provider.h)
* [CUDA Execution
Provider](../onnxruntime/core/providers/cuda/cuda_execution_provider.h)
* [MKL-DNN Execution
Provider](../onnxruntime/core/providers/mkldnn/mkldnn_execution_provider.h)

1
docs/CSharp_API.md Normal file

@ -0,0 +1 @@
# C# API
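This page is a stub in this commit. As a starting point, here is a minimal sketch of the usage pattern exercised by the unit test (`CanRunInferenceOnAModel`); the model path and the `data_0`/`softmaxout_1` node names are specific to the bundled SqueezeNet test model and are placeholders for any other model.
```csharp
using System;
using System.Collections.Generic;
using System.Numerics.Tensors;
using Microsoft.ML.OnnxRuntime;

class CSharpApiSketch
{
    static void Main()
    {
        // Create a session from an on-disk ONNX model.
        using (var session = new InferenceSession("squeezenet.onnx"))
        {
            // Build the input tensor with the shape the model expects.
            var input = new DenseTensor<float>(new float[1 * 3 * 224 * 224], new[] { 1, 3, 224, 224 });
            var inputs = new List<NamedOnnxValue> { new NamedOnnxValue("data_0", input) };

            // Run inference; the result is a read-only collection of NamedOnnxValue outputs.
            var results = session.Run(inputs);
            foreach (var r in results)
            {
                Tensor<float> output = r.AsTensor<float>();
                Console.WriteLine($"{r.Name}: rank {output.Rank}"); // e.g. softmaxout_1: rank 4
            }
        }
    }
}
```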

14
docs/C_API.md Normal file

@ -0,0 +1,14 @@
# C API
## Headers
[onnxruntime_c_api.h](include/onnxruntime/core/session/onnxruntime_c_api.h)
## Functionality
* Creating an InferenceSession from an on-disk model file and a set of SessionOptions.
* Registering customized loggers.
* Registering customized allocators.
* Registering predefined providers and setting the priority order. ONNXRuntime has a set of predefined execution providers, such as CUDA and MKL-DNN. Users can register providers with their InferenceSession; the order of registration indicates the preference order.
* Running a model with inputs. These inputs must be in CPU memory, not GPU. If the model has multiple outputs, users can specify which outputs they want. A usage sketch follows this list.
* Converting an in-memory ONNX Tensor encoded in protobuf format, to a pointer that can be used as model input.
* Setting the thread pool size for each session.
* Dynamically loading custom ops.
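As a rough illustration of the session-creation flow in the first bullet, the C functions can be driven from C# through P/Invoke. The extern signatures below are copied from csharp/src/Microsoft.ML.OnnxRuntime/NativeMethods.cs in this commit; the model path, log level value and the simplified error handling are assumptions made for the sketch.
```csharp
using System;
using System.Runtime.InteropServices;

// Sketch: create an environment, session options and an inference session,
// then ask the session how many inputs the model has.
static class CApiSketch
{
    const string Lib = "onnxruntime.dll";

    [DllImport(Lib, CharSet = CharSet.Ansi)]
    static extern IntPtr ONNXRuntimeInitialize(int defaultWarningLevel, string logId, out IntPtr env);

    [DllImport(Lib, CharSet = CharSet.Ansi)]
    static extern IntPtr ONNXRuntimeCreateSessionOptions();

    [DllImport(Lib, CharSet = CharSet.Ansi)]
    static extern IntPtr ONNXRuntimeCreateInferenceSession(
        IntPtr env, [MarshalAs(UnmanagedType.LPWStr)] string modelPath, IntPtr sessionOptions, out IntPtr session);

    [DllImport(Lib, CharSet = CharSet.Ansi)]
    static extern IntPtr ONNXRuntimeInferenceSessionGetInputCount(IntPtr session, out ulong count);

    static void Main()
    {
        Check(ONNXRuntimeInitialize(2 /* warning */, "c-api-sample", out IntPtr env));
        IntPtr options = ONNXRuntimeCreateSessionOptions();
        Check(ONNXRuntimeCreateInferenceSession(env, "model.onnx", options, out IntPtr session)); // hypothetical model
        Check(ONNXRuntimeInferenceSessionGetInputCount(session, out ulong inputCount));
        Console.WriteLine($"The model has {inputCount} input(s).");
    }

    // A non-zero ONNXStatus* indicates failure.
    static void Check(IntPtr status)
    {
        if (status != IntPtr.Zero) throw new InvalidOperationException("ONNXRuntime call failed");
    }
}
```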


@ -0,0 +1,44 @@
# ONNX Runtime coding conventions and standards
## Code Style
Google style from https://google.github.io/styleguide/cppguide.html with a few minor alterations:
* Max line length 120
* Aim for 80, but up to 120 is fine.
* Exceptions
* Allowed to throw fatal errors that are expected to result in a top level handler catching them, logging them and terminating the program.
* Non-const references
* Allowed
* However const correctness and usage of smart pointers (shared_ptr and unique_ptr) is expected, so a non-const reference equates to “this is a non-null object that you can change but are not being given ownership of”.
* 'using namespace' permitted with limited scope
* Not allowing 'using namespace' at all is overly restrictive. Follow the C++ Core Guidelines:
* [SF.6: Use using namespace directives for transition, for foundation libraries (such as std), or within a local scope (only)](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rs-using)
* [SF.7: Don't write using namespace at global scope in a header file](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Rs-using-directive)
#### Clang-format
Clang-format will handle automatically formatting code to these rules. There's a Visual Studio plugin that can format on save at https://marketplace.visualstudio.com/items?itemName=LLVMExtensions.ClangFormat, or alternatively the latest versions of Visual Studio 2017 include [clang-format support](https://blogs.msdn.microsoft.com/vcblog/2018/03/13/clangformat-support-in-visual-studio-2017-15-7-preview-1/).
There is a .clang-format file in the root directory that has the max line length override and otherwise defaults to the Google rules. This should be automatically discovered by the clang-format tools.
## Code analysis
Visual Studio Code Analysis with [C++ Core guidelines](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md) rules enabled is configured to run on build for the onnxruntime_common, onnxruntime_graph and onnxruntime_util libraries. Updating the onnxruntime_framework and onnxruntime_provider libraries to enable Code Analysis and build warning free is pending.
Code changes should build with no Code Analysis warnings; however, this is somewhat difficult to achieve consistently as the Code Analysis implementation is in fairly constant flux. Different minor releases may have fewer false positives (a build with the latest version may be warning free, and a build with an earlier version may not), or detect additional problems (an earlier version builds warning free and a later version doesn't).
## Unit Testing and Code Coverage
There should be unit tests that cover the core functionality of the product, expected edge cases, and expected errors.
Code coverage from these tests should aim at maintaining over 80% coverage.
All changes should be covered by new or existing unit tests.
In order to check that all the code you expect to be covered by testing is covered, run code coverage in Visual Studio using 'Analyze Code Coverage' under the Test menu.
There is a configuration file in onnxruntime\VSCodeCoverage.runsettings that can be used to configure code coverage so that it reports numbers for just the onnxruntime code. Select that file in Visual Studio via the Test menu: 'Test' -> 'Test Settings' -> 'Select Test Settings File'.
Using 'Show Code Coverage Coloring' will allow you to visually inspect which lines were hit by the tests. See <https://docs.microsoft.com/en-us/visualstudio/test/using-code-coverage-to-determine-how-much-code-is-being-tested?view=vs-2017>.

87
docs/HighLevelDesign.md Normal file

@ -0,0 +1,87 @@
# ONNX Runtime High Level Design
This document outlines the high level design of
ONNXRuntime, a high-performance, cross-platform engine.
## Key objectives
* Maximally and automatically leverage the custom accelerators and runtimes
available on disparate platforms.
* Provide the right abstraction and runtime support for custom accelerators and
runtimes. We call this abstraction an [execution
provider](../include/onnxruntime/core/framework/execution_provider.h). It defines and exposes a set of
its capabilities to ONNXRuntime: a set of single or fused nodes it can
execute, its memory allocator and more. Custom accelerators and runtimes are
instances of execution provider.
* We don't expect that an execution provider can always run an ONNX model fully
on its device. This means that ONNXRuntime must be able to execute a single
model in a heterogeneous environment involving multiple execution providers.
* Provide support for high-level optimizations that can be expressed as
model-to-model transformations via a [graph-transformation
API](../include/onnxruntime/core/graph/graph_transformer.h). Such
transformations fall into two categories: global transformations, those that
require analysis and transformation of the entire graph, and local
transformations, which can be captured as simple (algebraic) [rewriting
rules](../include/onnxruntime/core/graph/rewrite_rule.h).
## High-level system architecture
The flow is quite simple. Starting from an ONNX model, ONNXRuntime first
converts the model graph into its in-memory graph representation. It then
applies a number of graph transformations that a) perform a set of provider-independent
optimizations, such as cast transformations between float16 and float32, and b) partition the
graph into a set of subgraphs based on the available execution providers. Each
subgraph is assigned to an execution provider. We ensure that a subgraph can be
executed by an execution provider by querying the capability of the execution
provider using the GetCapability() API.
![ONNXRuntime high level system architecture](https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/228d22d3-6e3e-48b1-811c-1d48353f031c.png)
*Note: TensorRT and nGraph support is in the works.*
### More about partitioning
ONNXRuntime partitions a model graph based on the available execution providers
into subgraphs, each assigned to a distinct provider. ONNXRuntime provides
a default execution provider that is used for fallback execution for the
operators that cannot be pushed onto the more specialized but more efficient
execution providers. Intuitively we probably want to push computation to the
specialized execution providers as much as possible.
We use a simple graph partitioning technique. The available execution providers
will be considered in a specific order, and each will be assigned the maximal
subgraphs (possibly more than one) that it is able to handle. The
ONNXRuntime-provided default execution provider will be the last one to be
considered, and it ensures completeness. More sophisticated optimizations can be
considered in the future (or can even be implemented as a composite execution
provider).
Conceptually, each partition is reduced to a single fused operator. It is
created by invoking the execution provider's Compile() method and wrapping the result as a
custom operator. We support both sync and async modes of execution for custom
operators. We also support both strict and non-strict invocations. An execution
provider exposes its memory allocator, which is used to allocate the input
tensors for the execution provider. The rewriting and partitioning transform the
initial model graph into a new graph composed with operators assigned to either
the default execution provider or other registered execution
providers. The ONNXRuntime execution engine is responsible for running this graph.
## Key design decisions
* Multiple threads should be able to invoke the Run() method on the same
inference session object. See the [API doc](C_API.md) for more details, and the sketch after this list.
* To facilitate this, the Compute() function of all kernels is const,
implying the kernels are stateless.
* We call an execution provider's implementation of an operator a
kernel. Each execution provider supports a subset of the (ONNX)
operators/kernels.
* ONNXRuntime guarantees that all operators are supported by the default
execution provider.
* Tensor representation: ONNXRuntime will utilize a standard representation for
the tensor runtime values. The execution providers can internally use a
different representation, if they choose to, but it is their responsibility to
convert the values from/to the standard representation at the boundaries of
their subgraph.
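A small C# sketch of the first design decision above, showing concurrent Run() calls on a single shared session; the usage pattern follows the unit test in this commit, while the model file, its input name and the input shape are placeholders:
```csharp
using System.Collections.Generic;
using System.Numerics.Tensors;
using System.Threading.Tasks;
using Microsoft.ML.OnnxRuntime;

class ConcurrentRunSketch
{
    static void Main()
    {
        // One session, shared by several threads.
        using (var session = new InferenceSession("model.onnx")) // placeholder model
        {
            Parallel.For(0, 4, _ =>
            {
                // Each thread prepares its own inputs; the kernels hold no per-call state.
                var input = new DenseTensor<float>(new float[1 * 3 * 224 * 224], new[] { 1, 3, 224, 224 });
                var feeds = new List<NamedOnnxValue> { new NamedOnnxValue("data_0", input) }; // placeholder input name
                var results = session.Run(feeds);
            });
        }
    }
}
```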
## Extensibility points
* [Add a custom operator/kernel](AddingCustomOp.md)
* [Add an execution provider](AddingExecutionProvider.md)
* [Add a new graph
transform](../include/onnxruntime/core/graph/graph_transformer.h)
* [Add a new rewrite rule](../include/onnxruntime/core/graph/rewrite_rule.h)

47
docs/Model_Test.md Normal file

@ -0,0 +1,47 @@
ONNX has a collection of standard tests. This document describes how to run these tests through a C++ program named 'onnx_test_runner' in this repo. You can also run these tests through the onnxruntime Python binding, which is much easier to set up but a bit harder to debug.
# Get the test data
You should have:
1. onnx single node test data
2. onnx model zoo models
## Install onnx python package
You can get the onnx python package from [pypi](https://pypi.org/). However, if you are an onnxruntime developer, you may need to work with a cutting-edge ONNX version. In that case, you need to build and install ONNX from source code.
### Install ONNX from source code
1. (windows) set ONNX_ML=1
(linux) export ONNX_ML=1
2. Install protobuf and put protoc on your PATH. When you compile protobuf, it's better to enable only the static libraries.
3. run "python setup.py bdist_wheel" and "pip install dist/*.whl"
## Generate node test data
$ python3 -m onnx.backend.test.cmd_tools generate-data -o <dest_folder>
e.g.
python3 -m onnx.backend.test.cmd_tools generate-data -o C:\testdata
## Get the onnx model zoo models
Download the files from: https://github.com/onnx/models. Unzip them.
(TODO: put a full copy on Azure blob, instead of downloading these files from different sources individually)
# Compile onnx_test_runner and run the tests
onnx_test_runner is a C++ program. Its source code is in the onnxruntime/test/onnx directory.
Usage: onnx_test_runner [options...] <data_root>
Options:
-j [models]: Specifies the number of models to run simultaneously.
-A : Disable memory arena
-c [runs]: Specifies the number of Session::Run() to invoke simultaneously for each model.
-r [repeat]: Specifies the number of times to repeat
-v: verbose
-n [test_case_name]: Specifies a single test case to run.
-e [EXECUTION_PROVIDER]: EXECUTION_PROVIDER could be 'cpu', 'cuda' or 'mkldnn'. Default: 'cpu'.
-x: Use parallel executor, default (without -x): sequential executor.
-h: help
e.g.
//run the tests under C:\testdata dir and enable CUDA provider
$ onnx_test_runner -e cuda C:\testdata
//run the tests sequentially. It would be easier to debug
$ onnx_test_runner -c 1 -j 1 C:\testdata

16
docs/Python_Dev_Notes.md Normal file

@ -0,0 +1,16 @@
# Python Dev Notes
Each Python version uses a specific compiler version. In most cases, you should use the same compiler version for building python extensions.
## Which Microsoft Visual C++ compiler to use with a specific Python version?
| Visual C++ | CPython |
|-------------|:-----------------------:|
|2015, 2017 | 3.7 |
|2015 | 3.5,3.6 |
|2010 | 3.3,3.4 |
|2008 | 2.6, 2.7, 3.0, 3.1, 3.2 |
Currently, ONNXRuntime only supports Visual C++ 2017. Therefore, Python 3.7 seems to be the best choice.
CPython 3.7 is distributed with a VC++ 2017 runtime. Unlike earlier VC++ versions, the VC++ 2017 runtime is binary backward compatible with VC++ 2015, which means you can build your application with VC++ 2015 and run it with the VC++ 2017 runtime.

11
docs/ReleaseManagement.md Normal file

@ -0,0 +1,11 @@
# Release Management
This describes the process by which versions of ONNX Runtime are officially
released to the public.
## Releases
Releases are versioned according to
[docs/Versioning.md](Versioning.md). We plan to release ONNX Runtime packages
every 6 months.
(TBD: Add more here later)

Some files were not shown because too many files have changed in this diff.