Operators Experiments
This directory contains the script to reproduce the experiments of Section 6.1.2 of the paper A Tensor Compiler for Unified Machine Learning Prediction Serving. The script is configured to run sklearn and compare it against onnx-ml, torchscript, and onnx (the last two through Hummingbird), on the iris dataset, using 1 core and a batch size of 1M records.
python run.py
will run the benchmarks for CPU, while

python run.py -gpu

will run the benchmarks for GPU.
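For orientation, below is a minimal, hedged sketch of the kind of comparison run.py performs: train a sklearn model on iris, convert it with Hummingbird's convert API to the torchscript and onnx backends, and time batch prediction. The model, batch size, thread pinning, and flags used here are illustrative assumptions, not the exact settings in run.py.

```python
# Illustrative sketch only; the real experiment settings live in run.py.
import time

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# Train a sklearn model on iris.
X, y = load_iris(return_X_y=True)
skl_model = RandomForestClassifier(n_estimators=100).fit(X, y)

# Build a large prediction batch (the paper's experiments use 1M records;
# the size here is just for illustration).
batch = np.tile(X, (1000, 1)).astype(np.float32)

# Convert the trained model to the torchscript and onnx backends via Hummingbird.
ts_model = convert(skl_model, "torch.jit", batch)
onnx_model = convert(skl_model, "onnx", batch)

# Time batch prediction for each framework.
for name, model in [("sklearn", skl_model), ("torchscript", ts_model), ("onnx", onnx_model)]:
    start = time.time()
    model.predict(batch)
    print(f"{name}: {time.time() - start:.3f}s")
```

For the GPU runs, the converted models need to be placed on the GPU; with Hummingbird this is typically done by passing a GPU device to convert (e.g. device="cuda") or moving the container afterwards, though how run.py handles this internally is not shown here.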