* Reorganize the code folder structure
* Update the math test case
* Add a matrix inverse op.
* Turn off ctest by default.
* Disable the jpeg lib in dlib to fix a Linux build issue.
* Fix the Linux build
* Fix a typo
* Enable the dlib library in the Win32 build
* Rename ocos to operators
* Add the missing operator folder
* Support the tensor renaming for the embedded graph
* Add an ORT verification step in the conversion.
* Make the gpt2 end-to-end case work
* Support loops in mytorch
* gpt2 end-to-end works
* Polish the code and fix the unit test.
* Initial check-ins
* Restructure the implementation.
* Refine the Python interface
* Finalize the interface.
* Add the customop class for customization.
* Test eager_op with the vector_to_string custom op
* Refine the customop conversion interface.
* Initial ONNX builder
* Runnable, but with incorrect results.
* Reformat the onnx_ops calls
* A few operators working with tracing
* Handcraft all op conversions
* Add unit tests for mytorch
* Unit tests passed.
* Add some documentation.
* Move non-torch API into onnxruntime_customops.utils module.
* Fix the unit test issues.
* Fix some typos.
* Add batch_mode and padding for GPT2Tokenizer (a usage sketch follows this group)
* Fix text
* Fix the tests and add documentation
* Fix tests
* Fix comments
* Delete header
Co-authored-by: Ze Tao <zetao@microsoft.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
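A minimal sketch of the GPT2Tokenizer change noted above: a one-node ONNX graph that requests padded, batched tokenizer output. The ai.onnx.contrib domain, the attribute names vocab, merges, and padding_length, and the file paths are assumptions for illustration, not taken from these commits.

```python
# Sketch only: build a one-node graph calling the GPT2Tokenizer custom op
# with padded, batched output. Attribute names, the custom-op domain, and
# the vocab/merges paths are assumptions.
from onnx import helper, TensorProto

vocab = open("gpt2/vocab.json", encoding="utf-8").read()    # placeholder path
merges = open("gpt2/merges.txt", encoding="utf-8").read()   # placeholder path

node = helper.make_node(
    "GPT2Tokenizer",
    inputs=["strings"],
    outputs=["input_ids", "attention_mask"],
    domain="ai.onnx.contrib",   # assumed custom-op domain
    vocab=vocab,                # assumed string attribute
    merges=merges,              # assumed string attribute
    padding_length=64,          # assumed attribute for the new padding support
)
graph = helper.make_graph(
    [node],
    "gpt2_tokenizer_demo",
    [helper.make_tensor_value_info("strings", TensorProto.STRING, [None])],
    [helper.make_tensor_value_info("input_ids", TensorProto.INT64, [None, None]),
     helper.make_tensor_value_info("attention_mask", TensorProto.INT64, [None, None])],
)
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 12),
                   helper.make_opsetid("ai.onnx.contrib", 1)],
)
```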
* Add vector_to_string
* Fix merge conflict
* Fix the build failure
* Remove debug code
* Fix tests
* Move unicode back
* Fix typo
* Move base64 back
* Move it to the right place
* Support only int64_t
Co-authored-by: Ze Tao <zetao@microsoft.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
* Add attribute global_replace to StringRegexReplace (a usage sketch follows this group)
Signed-off-by: xavier dupré <xavier.dupre@gmail.com>
* Fix a potentially wrong pointer
Signed-off-by: xavier dupré <xavier.dupre@gmail.com>
* Update sep
Signed-off-by: xavier dupré <xavier.dupre@gmail.com>
Co-authored-by: xavier dupré <xavier.dupre@gmail.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
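The global_replace change above is easiest to see end to end. A minimal sketch, assuming the ai.onnx.contrib custom-op domain, 1-D string inputs, and a placeholder path to the compiled custom-ops library; only the op name and the attribute come from the commit itself.

```python
# Sketch only: StringRegexReplace(text, pattern, rewrite) with global_replace=0,
# i.e. replace just the first match rather than every match. The custom-op
# domain, input shapes, and library path are assumptions.
import numpy as np
import onnxruntime as ort
from onnx import helper, TensorProto

node = helper.make_node(
    "StringRegexReplace",
    inputs=["text", "pattern", "rewrite"],
    outputs=["out"],
    domain="ai.onnx.contrib",   # assumed custom-op domain
    global_replace=0,           # the new attribute; 0 = first match only
)
graph = helper.make_graph(
    [node],
    "regex_replace_demo",
    [helper.make_tensor_value_info("text", TensorProto.STRING, [None]),
     helper.make_tensor_value_info("pattern", TensorProto.STRING, [1]),
     helper.make_tensor_value_info("rewrite", TensorProto.STRING, [1])],
    [helper.make_tensor_value_info("out", TensorProto.STRING, [None])],
)
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 12),
                   helper.make_opsetid("ai.onnx.contrib", 1)],
)

so = ort.SessionOptions()
so.register_custom_ops_library("/path/to/custom_ops_library.so")  # placeholder path
sess = ort.InferenceSession(model.SerializeToString(), so)
result = sess.run(None, {
    "text": np.array(["a-b-c"], dtype=object),
    "pattern": np.array(["-"], dtype=object),
    "rewrite": np.array(["_"], dtype=object),
})
```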
> It seems to be working now.
I enabled a less secure option in the pipeline; let's see how it goes.
* Enable the C++ tests on Windows
* Add the Linux/macOS platforms
* No extractfile task on Unix-like platforms
* Fixes for Unix-like platforms
* Try a fix on macOS
* Basic changes:
fix the build issues
run on the Windows platform
deploy the ORT library via CMake
update .gitignore
* Add C++ shared tests
* Enable ctest
* Fix the Python build issue
* Remove the cc test
* Why does macOS need the OpenMP package?
* Test attribute
* Finish the improvement
Co-authored-by: Ze Tao <zetao@microsoft.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
* Enable direct pip package builds.
* Fix some linker symbols
* Fixes for the Windows platform
* Update the build instructions
* Update the CI pipeline
* Fix the Linux and macOS builds.
* Update mshost.yaml
* Update the CI Python version
* Update the pipeline
* Simplify the instructions.
* Update according to the review comments.
Co-authored-by: Wenbing Li <wenli@MacM1.local>
* Initialize a BBPE tokenizer
* Add the JSON library.
* gpt2 tokenizer C++ implementation.
* Tom/add tutorial (#32)
* Added getting started instructions for Windows
Signed-off-by: Tom Wildenhain <tomwi@microsoft.com>
* Created a tutorial for converting models with custom ops. WIP
* Removed long outputs
* Changed to Keras syntax and added setup instructions
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
* Rename the gpt2 test case file
* Polish the symbol names in the sources
* Polish it again.
* Fix the build issue on macOS
* Another fix
* Another fix (3)
Co-authored-by: TomWildenhain-Microsoft <67606533+TomWildenhain-Microsoft@users.noreply.github.com>