# hummingbird/setup.py

from setuptools import setup, find_packages
import os
import sys
this = os.path.dirname(__file__)
packages = find_packages()
assert packages

# Read the version from the package file.
with open(os.path.join(this, "hummingbird/__init__.py"), "r") as f:
    line = [_ for _ in [_.strip("\r\n ") for _ in f.readlines()] if _.startswith("__version__")]
if len(line) > 0:
    version_str = line[0].split("=")[1].strip('" ')
README = os.path.join(os.getcwd(), "README.md")
with open(README) as f:
    long_description = f.read()

# Start the long description at the "## Introduction" heading, if present.
start_pos = long_description.find("## Introduction")
if start_pos >= 0:
    long_description = long_description[start_pos:]
install_requires = ["numpy>=1.15", "onnxconverter-common>=1.6.0", "scikit-learn==0.22.1"]

# Use the default torch wheel on macOS and Linux; pin the CPU-only build elsewhere (i.e. Windows).
if sys.platform in ("darwin", "linux"):
    install_requires.append("torch==1.5.1")
else:
    install_requires.append("torch==1.5.1+cpu")
setup(
    name="hummingbird-ml",
    version=version_str,
    description="Convert trained traditional machine learning models into tensor computations",
    license="MIT License",
    author="Microsoft Corporation",
    author_email="hummingbird-dev@microsoft.com",
    url="https://github.com/microsoft/hummingbird",
    packages=packages,
    include_package_data=True,
    install_requires=install_requires,
    extras_require={
        "tests": ["flake8", "pytest", "coverage", "pre-commit"],
        "docs": ["pdoc"],
        "onnx": ["onnxruntime>=1.0.0", "onnxmltools>=1.6.0"],
        "extra": [
            # Which of these you need depends on the libraries you plan to convert from.
            "xgboost==0.90",
            "lightgbm>=2.2",
        ],
    },
    classifiers=[
        "Environment :: Console",
        "Intended Audience :: Developers",
        "Programming Language :: Python",
        "Operating System :: OS Independent",
        "License :: OSI Approved :: MIT License",
    ],
    python_requires=">=3.5",
)
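# The version-extraction logic above can be exercised on its own: it scans a
# package __init__.py for a `__version__ = "..."` line and strips the quoting.
# A minimal sketch (the sample file contents below are hypothetical, standing
# in for hummingbird/__init__.py):

```python
import io

# Simulated __init__.py contents with a version marker among other lines.
sample = io.StringIO('"""Package docstring."""\n__version__ = "0.0.6"\n')

# Same pattern as in setup.py: keep only lines starting with __version__.
line = [_ for _ in [_.strip("\r\n ") for _ in sample.readlines()] if _.startswith("__version__")]
version_str = line[0].split("=")[1].strip('" ') if line else None
print(version_str)  # → 0.0.6
```

# The same `strip('" ')` call handles both the surrounding quotes and any
# whitespace around the equals sign, so single- vs double-quoted markers
# differ only in which characters are passed to strip().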