hummingbird/benchmarks
masahi 71c951d31c
Introduce BatchContainer for batch by batch prediction use case (#377)
* introducing BatchContainer

* BatchContainer basic functionality done

* pass test_input to _convert

* introduce convert_batch API

* use convert_batch in the benchmark

* store _batch_size attribute

* test working

* run black, add concat output option, fix benchmark

* fix getattr

* fix operator benchmark

* support transform and decision function

* make sure input is tuple not list

* fix torch backend prediction

* begin fixing tests

* squeeze and ravel on onnx regression output

* all tests in test_extra_conf.py working

* restore BATCH_SIZE and k neighbor test

* fix onnxml test

* run black on test_extra_conf.py

* fix test_sklearn_normalizer_converter.py

* fix test_lightgbm_converter.py

* fixing more onnxml tests

* fixed remaining onnxml tests

* use format, fix pylint

* fix typo

* add document

* add missing doc

* fix typo

* doc update, remove unused stuff
2020-12-14 14:09:37 -08:00
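
For reference, below is a minimal sketch of the batch-by-batch prediction flow this commit introduces. The exact convert_batch signature (backend string, sample-batch argument) is an assumption based on the commit notes and the hummingbird.ml API, so check the library documentation before relying on it:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert_batch

# Train a small scikit-learn model on random data.
X = np.random.rand(10000, 28).astype(np.float32)
y = np.random.randint(2, size=10000)
model = RandomForestClassifier(n_estimators=10, max_depth=8).fit(X, y)

# Convert using a sample batch; convert_batch returns a BatchContainer
# whose batch size is taken from the sample input.
batch_size = 1000
hb_model = convert_batch(model, "pytorch", X[:batch_size])

# Predict over the full dataset batch by batch.
pred = hb_model.predict(X)
```

Based on the commit notes above ("store _batch_size attribute", "add concat output option"), the sample batch appears to fix the batch size, and at prediction time the full input is split into batches of that size with the per-batch outputs concatenated.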
operators Introduce BatchContainer for batch by batch prediction use case (#377) 2020-12-14 14:09:37 -08:00
pipelines Add TVM backend (#236) 2020-11-03 13:21:02 -08:00
trees Introduce BatchContainer for batch by batch prediction use case (#377) 2020-12-14 14:09:37 -08:00
README.md Add pipeline benchmark (#331) 2020-10-27 09:35:31 -07:00
__init__.py Add benchmark scripts for trees (#328) 2020-10-22 15:06:13 -07:00
datasets.py Fix few issues with the benchmars (#354) 2020-10-30 08:35:40 -07:00
timer.py Add benchmark scripts for trees (#328) 2020-10-22 15:06:13 -07:00

README.md

Hummingbird Benchmarks

This is the main entry point for the evaluation of Hummingbird!

The benchmark is divided into three main folders:

  • trees allows you to run all the tree-related experiments of Section 6.1.1 of the paper A Tensor Compiler for Unified Machine Learning Prediction Serving. Please check the related README file for specifics.
  • operators allows you to run experiments on operators other than trees, covering Section 6.1.2 of the paper. Again, please check the related README file for specifics.
  • pipelines allows you to reproduce the results of Section 6.3.

Keep in mind that running the complete benchmark can take several days.
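
For readers unfamiliar with what these scripts measure, the snippet below is only an illustrative sketch of a single timing comparison between scikit-learn and a converted Hummingbird model; the model, data sizes, and backend are placeholders, not the benchmark's actual configuration:

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# Random data and a trained scikit-learn baseline.
X = np.random.rand(100000, 28).astype(np.float32)
y = np.random.randint(2, size=100000)
skl_model = RandomForestClassifier(n_estimators=100, max_depth=8).fit(X, y)

# Convert the trained model to a PyTorch-backed Hummingbird model.
hb_model = convert(skl_model, "pytorch")

# Time prediction over the full dataset for both models.
for name, m in [("sklearn", skl_model), ("hummingbird-pytorch", hb_model)]:
    start = time.perf_counter()
    m.predict(X)
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```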