The README.md doc update for the release (#87)
* Update the README.md for the release.
* Polish the code.
* Add the mobile section.
* Update the example code in the README.md.
* Correction on the PyTorch tutorial.
This commit is contained in:
Parent: 3f377e0911
Commit: a563194d41
@@ -18,6 +18,7 @@ gen
.DS_Store
*~
.vs
.ipynb_checkpoints/
tutorials/*.onnx
Testing/
TestResults/
README.md (78 changed lines)
@@ -1,12 +1,79 @@
# ONNXRuntime CustomOps
# ONNXRuntime Extensions

[![Build Status](https://dev.azure.com/aiinfra/ONNX%20Converters/_apis/build/status/microsoft.ort-customops?repoName=microsoft%2Fonnxruntime-extensions&branchName=main)](https://dev.azure.com/aiinfra/ONNX%20Converters/_build/latest?definitionId=907&repoName=microsoft%2Fonnxruntime-extensions&branchName=main)

ONNXRuntime CustomOps Library is a comprehensive package to extent the ONNXRuntime with some capabilities via its custom ops API.
1. This repository provides a library of add-on custom operators for [ONNX Runtime](http://onnxruntime.ai). The package can be installed to run with ONNX Runtime for operators not natively supported by ORT. Learn more about [custom ops in ORT](https://www.onnxruntime.ai/docs/how-to/add-custom-op.html). And the custom operator support list is in [docs/custom_text_ops.md](./docs/custom_text_ops.md)
ONNXRuntime Extensions is a comprehensive package to extend the capability of ONNX conversion and inference.
1. The CustomOp C++ library for [ONNX Runtime](http://onnxruntime.ai), built on the ONNXRuntime CustomOp API.
2. Supports the PyOp feature to implement custom ops with a Python function.
3. Builds an all-in-one ONNX model from the pre/post-processing code; go to [docs/pre_post_processing.md](docs/pre_post_processing.md) for details.
4. Supports per-operator debugging in Python; check ```hook_model_op``` in the onnxruntime_customops Python package.

# Quick Start
The following code shows how to run an ONNX model and ONNXRuntime custom ops in a straightforward way.
```python
import numpy
from onnxruntime_customops import PyOrtFunction, VectorToString

# <ProjectDir>/tutorials/data/gpt-2/gpt2_tok.onnx
encode = PyOrtFunction.from_model('gpt2_tok.onnx')
# https://github.com/onnx/models/blob/master/text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx
gpt2_core = PyOrtFunction.from_model('gpt2-lm-head-10.onnx')
decode = PyOrtFunction.from_customop(VectorToString, map={' a': [257]}, unk='<unknown>')

input_text = ['It is very cool to have']
input_ids = encode(input_text)  # tokenize the text with the custom-op tokenizer model
output, *_ = gpt2_core(input_ids)
next_id = numpy.argmax(output[:, :, -1, :], axis=-1)
print(input_text[0] + decode(next_id).item())
```
This is a simplified version of GPT-2 inference for demonstration only. A comprehensive solution for the GPT-2 model and its derivatives is under development; here is the [link](tutorials/gpt2_e2e.py) to the experimental code.
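
As a rough illustration of how the pieces above compose, the single-step prediction can be extended into a greedy generation loop. This is only a sketch under assumptions about tensor shapes (token ids of shape `(batch, sequence)` and logits indexed as in the snippet above); it is not the tutorial's code:

```python
import numpy

def greedy_generate(text, steps=8):
    # encode/gpt2_core/decode are the PyOrtFunction objects created above.
    ids = encode([text])
    pieces = []
    for _ in range(steps):
        logits, *_ = gpt2_core(ids)
        next_id = numpy.argmax(logits[:, :, -1, :], axis=-1)
        pieces.append(decode(next_id).item())
        # Append the predicted token id and feed the longer sequence back in.
        ids = numpy.concatenate([ids, next_id.astype(ids.dtype)], axis=-1)
    return text + ''.join(pieces)
```
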
## Android/iOS
The pre/post-processing Python code above can be converted into an all-in-one ONNX model that runs on Android/iOS mobile platforms, with no Python runtime and no third-party dependencies required. Here is the [tutorial](tutorials/ort_mobile.py).

## CustomOp Conversion
The mainstream ONNX converters support custom op generation when an operation in the original framework cannot be expressed with standard ONNX operators. Check the following two examples on how to do this; a minimal export sketch follows the list.
1. [CustomOp conversion by pytorch.onnx.exporter](tutorials/pytorch_custom_ops_tutorial.ipynb)
2. [CustomOp conversion by tf2onnx](tutorials/tf2onnx_custom_ops_tutorial.ipynb)
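
The PyTorch path looks roughly like the following. This is a hedged sketch, not the tutorial's exact code: the `Inverse` op and the `ai.onnx.contrib` domain are assumptions based on the PyOp example below.

```python
import torch
import torch.onnx

class InverseModel(torch.nn.Module):
    def forward(self, x):
        return torch.inverse(x)

# Symbolic function: emit a node in a custom ONNX domain instead of a standard operator.
def inverse_symbolic(g, x):
    return g.op("ai.onnx.contrib::Inverse", x)

# Register the symbolic for aten::inverse, then export as usual.
torch.onnx.register_custom_op_symbolic('::inverse', inverse_symbolic, 12)
torch.onnx.export(InverseModel(), torch.randn(3, 3), 'inverse.onnx', opset_version=12)
```
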
## Inference with CustomOp library
The CustomOp library is written in C++, so the model can be run with native binaries. The following is an example of the C++ version.
```C++
// This line loads the custom op library into the ONNXRuntime engine, so the ONNX model with the custom ops can be loaded.
Ort::ThrowOnError(Ort::GetApi().RegisterCustomOpsLibrary((OrtSessionOptions*)session_options, custom_op_library_filename, &handle));

// The regular ONNXRuntime invocation to run the model.
Ort::Session session(env, model_uri, session_options);
RunSession(session, inputs, outputs);
```
Of course, things become much easier in Python, since PyOrtFunction directly translates the ONNX model into a Python function. But if you want to use the regular ONNXRuntime Python API, the inference process is:
```python
import onnxruntime as _ort
from onnxruntime_customops import get_library_path as _lib_path

# Register the custom op library with the session options.
so = _ort.SessionOptions()
so.register_custom_ops_library(_lib_path())

# Run the ONNXRuntime session as usual.
# sess = _ort.InferenceSession(model, so)
# sess.run(...)
```
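
Once the library is registered on the `SessionOptions`, the custom operator domain it provides (see `default_opset_domain` in the Python package) resolves when the session loads the model, and inference proceeds exactly as with built-in operators.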

## More CustomOp
You are welcome to contribute CustomOp C++ implementations directly to this repository, which will benefit other users widely. Besides C++, if you want to quickly verify an ONNX model with some custom operators in Python, PyOp will help with that:
```python
import numpy
from onnxruntime_customops import PyOp, onnx_op

# Implement the CustomOp by decorating a function with onnx_op
@onnx_op(op_type="Inverse", inputs=[PyOp.dt_float])
def inverse(x):
    # the user custom op implementation here:
    return numpy.linalg.inv(x)

# Run the model with this custom op
# model_func = PyOrtFunction(model_path)
# outputs = model_func(inputs)
# ...
```
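
Note that a PyOp implementation runs the decorated Python function at inference time, so a Python runtime is required; to be independent of Python, port the op to C++ as in [inverse.hpp](operators/math/inverse.hpp).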

# Build and Development
This project supports Python and can be built from source easily, or built with a simple cmake build without any Python dependency.

## Python package

@@ -18,10 +85,13 @@ This project supports Python and can be built from source easily, or a simple cm
Test:
- run `pytest test` in the project root directory.

## The share library or DLL only
## The shared library for non-Python
If only the DLL/shared library is needed, without any Python dependencies, please run `build.bat` or `bash ./build.sh` to build the library.
By default, the DLL or the library will be generated in the directory `out/<OS>/<FLAVOR>`. There is a unit test to help verify the build.

## The static library and link with ONNXRuntime
For the sake of binary size, the project can be built as a static library and linked into ONNXRuntime. Here is [the script](ci_build/onnxruntime_integration/build_with_onnxruntime.sh) to do this, which is especially useful for building a mobile release.

# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us

onnxruntime_customops/__init__.py

@@ -16,6 +16,7 @@ from ._ocos import Opdef, PyCustomOpDef, hash_64, enable_custom_op # noqa
from ._ocos import expand_onnx_inputs # noqa
from ._ocos import hook_model_op # noqa
from ._ocos import default_opset_domain # noqa
from ._cuops import * # noqa
from .eager_op import EagerOp as PyOrtFunction

onnxruntime_customops/eager_op.py

@@ -5,13 +5,12 @@

import numpy as np
import onnxruntime as _ort
from onnx import onnx_pb as onnx_proto
from ._ocos import default_opset_domain, get_library_path # noqa
from ._cuops import * # noqa


def _get_opset_version_from_ort():
    ORT_OPSET_SUPPORT_TABLE = {
    _ORT_OPSET_SUPPORT_TABLE = {
        "1.5": 11,
        "1.6": 12,
        "1.7": 13,
@@ -19,14 +18,14 @@ def _get_opset_version_from_ort():
    }

    ort_ver_string = '.'.join(_ort.__version__.split('.')[0:2])
    return ORT_OPSET_SUPPORT_TABLE.get(ort_ver_string, 11)
    return _ORT_OPSET_SUPPORT_TABLE.get(ort_ver_string, 11)


class EagerOp:

    @classmethod
    def get_ort_session_options(cls):
        # ONNXRuntime has a bug support reusing the SessionOptions object.
        # ONNXRuntime has an issue with reusing the SessionOptions object.
        # Create a new one every time here
        so = _ort.SessionOptions()
        so.register_custom_ops_library(get_library_path())

File diffs hidden because one or more lines are too long

tutorials/pytorch_custom_ops_tutorial.ipynb

@@ -103,7 +103,7 @@
"outputs": [],
"source": [
"import numpy\n",
"from onnxruntime_customops import onnx_op, PyOp, PyOrtFunction\n",
"from onnxruntime_customops import onnx_op, PyOp\n",
"@onnx_op(op_type=\"Inverse\")\n",
"def inverse(x):\n",
" # the user custom op implementation here:\n",
@@ -114,7 +114,7 @@
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### ONNX Inference"
|
||||
"* **ONNX Inference**"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@@ -133,6 +133,7 @@
}
],
"source": [
"from onnxruntime_customops import PyOrtFunction\n",
"onnx_fn = PyOrtFunction.from_model(onnx_model)\n",
"y = onnx_fn(x0.numpy())\n",
"print(y)"
@@ -142,7 +143,7 @@
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Compare the result with Pytorch"
|
||||
"* **Compare the result with Pytorch**"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
@@ -160,7 +161,7 @@
"metadata": {},
|
||||
"source": [
|
||||
"## Implement the customop in C++ (optional)\n",
|
||||
"To support the ONNX model running on all other language supported by ONNX Runtime and independdent of Python, a C++ implmentation is needs, check here for the [inverse.hpp](../operators/math/inverse.hpp) for an example on how to do it."
|
||||
"To make the ONNX model with the CustomOp runn on all other language supported by ONNX Runtime and be independdent of Python, a C++ implmentation is needed, check here for the [inverse.hpp](../operators/math/inverse.hpp) for an example on how to do that."
|
||||
]
|
||||
},
|
||||
{
|
||||
|