* re-enable Python pipelines
* deprecate the direct setup.py call
* pipeline fixing
* run cmake from visual studio
* remove the self check
* support different ort versions
* remove ort 1.9 and add 1.13
* ci pipeline fixing
* fix the test with the latest PyTorch
* code refinement
* torch version detection
* make the package Python version more consistent
* Support the tensor renaming for the embedded graph
* Add an ORT verification step to the conversion.
* make the gpt-e2e work
* Support the loop in mytorch
* gpt2 end-to-end works
* Polish the code and fix the unit test.
* initial checkins
* restructure the implementation.
* refine the Python interface
* Finalize the interface.
* Add the customop class for customization.
* Test the eager_op with vector_to_string customop
* Refine the customop conversion interface.
* initial onnx builder
* Runnable, but with incorrect results.
* reformat the onnx_ops calls
* a few operators working on tracing
* handcraft all op conversions
* Add unit tests for mytorch
* unit test passed.
* Add some documentation.
* Move non-torch API into onnxruntime_customops.utils module.
* Fix the unit test issues.
* Fix some typos.
* refactoring
* remove useless include
* remove pragma once from cc files
* add custom_op_test.onnx
* remove unnecessary imports, add header in project file, run C++ unit tests