* introducing BatchContainer
* BatchContainer basic functionality done
* pass test_input to _convert
* introduce convert_batch API
* use convert_batch in the benchmark
* store _batch_size attribute
* test working
* run black, add concat output option, fix benchmark
* fix getattr
* fix operator benchmark
* support transform and decision_function
* make sure the input is a tuple, not a list
* fix torch backend prediction
* begin fixing tests
* squeeze and ravel on onnx regression output
* all tests in test_extra_conf.py working
* restore BATCH_SIZE and k neighbor test
* fix onnxml test
* run black on test_extra_conf.py
* fix test_sklearn_normalizer_converter.py
* fix test_lightgbm_converter.py
* fixing more onnxml tests
* fixed remaining onnxml tests
* use format, fix pylint
* fix typo
* add documentation
* add missing doc
* fix typo
* doc update, remove unused code
* add containers for onnx models
* add tvm_installed, initial work on topology
* add containers
add tvm backend to supported backends
add a few tests
* fix type error in TVM
tree_trav and perf_tree_trav now work
* Add TVM_MAX_FUSE_DEPTH option
Add BATCH_SIZE option
Tree trav generates indexes based on batch size (if available)
TVM takes the max fuse depth configuration if set
* add benchmark code for trees
* device can be passed directly to convert
* add code for tvm
* refactoring of the tree benchmark files
* add operators scripts
a few fixes in the tree bench
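
The BatchContainer and convert_batch commits above suggest a usage pattern along these lines. This is a minimal sketch, assuming the `hummingbird.ml` import path and a scikit-learn model; the argument order mirrors the regular `convert` call, and the exact `convert_batch` signature may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

from hummingbird.ml import convert_batch

# Train a small model and keep a representative test_input around;
# convert_batch uses its shape to fix the batch size of the generated model.
X = np.random.rand(1000, 20).astype(np.float32)
y = np.random.randint(2, size=1000)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Convert once with a fixed batch size. The returned BatchContainer runs
# larger inputs batch by batch and can concatenate the per-batch outputs
# (the "concat output option" mentioned above).
batch_model = convert_batch(model, "torch", X[:100])

preds = batch_model.predict(X)        # same API surface as the original model
probs = batch_model.predict_proba(X)
```

Per the "support transform and decision_function" commit, containers wrapping transformers and linear models are expected to mirror `transform` and `decision_function` in the same way.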
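
For the TVM-related commits, the TVM_MAX_FUSE_DEPTH and BATCH_SIZE options would presumably be passed through `extra_config`. The sketch below assumes they are exposed via `hummingbird.ml.constants` like other extra_config settings; the concrete values are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

from hummingbird.ml import convert, constants

X = np.random.rand(1000, 20).astype(np.float32)
y = np.random.randint(2, size=1000)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# BATCH_SIZE lets the tree-traversal strategies generate their gather indexes
# for a fixed batch size; TVM_MAX_FUSE_DEPTH caps operator fusion depth when
# the TVM backend compiles the model.
extra_config = {
    constants.BATCH_SIZE: 100,
    constants.TVM_MAX_FUSE_DEPTH: 50,
}

# The TVM backend specializes on the shape of test_input, so the batch size
# used here should match the one declared in extra_config.
tvm_model = convert(model, "tvm", X[:100], extra_config=extra_config)
preds = tvm_model.predict(X[:100])
```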
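
The "device can be passed directly to convert" commit suggests GPU placement can be requested at conversion time. A small sketch under that assumption, falling back to CPU when CUDA is unavailable:

```python
import numpy as np
import torch
from sklearn.ensemble import RandomForestClassifier

from hummingbird.ml import convert

X = np.random.rand(1000, 20).astype(np.float32)
y = np.random.randint(2, size=1000)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Request the target device at conversion time instead of calling
# .to(device) on the returned container afterwards.
device = "cuda" if torch.cuda.is_available() else "cpu"
gpu_model = convert(model, "torch", X[:100], device=device)
preds = gpu_model.predict(X)
```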