# Hummingbird Benchmarks

This is the main entry point for the evaluation of Hummingbird!

The benchmarks are divided into three main folders:

- `trees` runs all the tree-related experiments from Section 6.1.1 of the paper *A Tensor Compiler for Unified Machine Learning Prediction Serving*. Please check the related README file for specifics.
- `operators` runs the experiments on operators other than trees, covering Section 6.1.2 of the paper. Again, please check the related README file for specifics.
- `pipelines` reproduces the results of Section 6.3.

Keep in mind that running the complete benchmark will take several days.
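
Before launching the full suite, it may help to see the core pattern the benchmark scripts build on: convert a trained model with Hummingbird and time its prediction against the original. The sketch below is illustrative only; the model, data sizes, and PyTorch backend choice are assumptions for demonstration and not the paper's actual workloads.

```python
# Minimal sketch of the convert-and-time pattern underlying these benchmarks.
# The model and synthetic data below are illustrative, not the paper's setup.
import time

import numpy as np
from sklearn.ensemble import RandomForestClassifier

from hummingbird.ml import convert

# Train a small scikit-learn model on synthetic data.
X = np.random.rand(10000, 28).astype(np.float32)
y = np.random.randint(2, size=10000)
skl_model = RandomForestClassifier(n_estimators=10, max_depth=8).fit(X, y)

# Convert it to a tensor-based model (PyTorch backend here).
hb_model = convert(skl_model, "pytorch")

# Time batch prediction for both, mirroring what the benchmark scripts measure.
for name, model in [("sklearn", skl_model), ("hummingbird", hb_model)]:
    start = time.perf_counter()
    model.predict(X)
    print(f"{name}: {time.perf_counter() - start:.4f}s")
```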