* Python 3.8 is capped at an outdated scikit-learn (1.3); use a newer Python for the Coverage step
* fix in-place warning
* fix 'UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow.'
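That warning comes from handing `torch.tensor()` a Python list of ndarrays, which copies element by element; the usual fix is to stack the list into one contiguous array first. A minimal sketch (the shapes are hypothetical, for illustration only):

```python
import numpy as np

# torch.tensor(list_of_ndarrays) converts element by element, which is
# what triggers the UserWarning. Stacking into a single contiguous
# ndarray first avoids the per-element copy.
batches = [np.ones((4, 3), dtype=np.float32) for _ in range(8)]

stacked = np.stack(batches)  # one contiguous (8, 4, 3) float32 array
# torch.from_numpy(stacked) would then wrap this buffer directly,
# silencing the warning.
print(stacked.shape)
```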
* skipping docgen entirely on non-main
* fixing Warning: 'Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20:'
* s/base_estimator/estimator/g
* s/sparse/sparse_output/g
* in progress; TODO: implement parser for FunctionTransformer
* first attempt at _parse_sklearn_function_transformer
* s/boston/cali/g (the removed Boston housing dataset is replaced with California housing)
* removing the Normalizer with invalid params, as scikit-learn no longer accepts them
* the `whiten` parameter of PCA must now be a bool
* lint
* pulling from GitHub for skl to unblock for now
* trying again with skl2onnx
* skl2onnx install keeps getting overwritten, must be in two places for now
* we skipped identity in parse, causing it to miss probabilities
* lint
* feat: dynamic axes
* feat: add test for varying batch size
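The dynamic-axes feature amounts to marking the batch dimension symbolic at export time, so one ONNX graph serves any batch size. A sketch of the mapping that would be passed to `torch.onnx.export` (the `input`/`output` names are illustrative, not taken from this repo):

```python
# Marking dim 0 symbolic lets the exported graph accept any batch size
# instead of the one used at trace time.
dynamic_axes = {
    "input": {0: "batch_size"},   # first dim of the input is dynamic
    "output": {0: "batch_size"},  # first dim of the output is dynamic
}
# This dict would be passed as:
#   torch.onnx.export(model, args, path, input_names=["input"],
#                     output_names=["output"], dynamic_axes=dynamic_axes)
print(dynamic_axes)
```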
* chore: cleanup variable naming
* chore: lint
`pre-commit run --all-files` was quite aggressive
* fix: check onnx conv, not torch
* chore: remove `fix_graph`
* chore: lint
* fix: skip onnx test if not installed
* chore: add more tests
Notably, the multi-class classification models appear to still have the previous behavior.
* Revert "chore: lint"
This reverts commit 8b4ea4ed33.
* fix: limit array properly
* chore: clean up remaining print statements
* fix: set first dimension in tests to custom
Necessary to support arbitrary batch sizes
Also refactors tests to properly separate out model conversion and testing
* chore: update LGBM-ONNXML notebook
Must set first dimension to None in the ONNXMLTools conversion.
* fix: more bumpversion locations
* Added support for more decision conditions in trees and ONNX conversion
* Added test for BRANCH_LT in ONNXML
* Use convert_model from onnxmltools instead of manually picking converter for test
* remove `import enable_hist_gradient_boosting` because it's not needed since skl 1.0
* remove the `except` part and use `n_features_in_` for HistGradientBoosting
* use `var_` instead of `sigma_` because it was deprecated in skl 1.0 and will be removed in skl 1.2
* Use `algorithm='lloyd'` for KMeans instead of `algorithm='full'`/`'auto'`
* Use `csr_matrix` from the `scipy.sparse` namespace because the `scipy.sparse.csr` namespace is deprecated.
* Use `loss='log_loss'` instead of `loss='log'` because it was deprecated in v1.1 and will be removed in version 1.3.
* Use `eigenvalues_` instead of `lambdas_` because it was deprecated in version 1.0 and will be removed in 1.2.
* Use `eigenvectors_` instead of `alphas_` because it was deprecated in version 1.0 and will be removed in 1.2.
* check for model NotFitted before conversion
* check for model type
* better model type check and basic tests for conversion without fit()
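One common way to implement the not-fitted check is scikit-learn's own `check_is_fitted` helper; a minimal sketch (the surrounding conversion logic is assumed, not shown):

```python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.utils.validation import check_is_fitted

model = LogisticRegression()  # never fit()

try:
    check_is_fitted(model)    # raises NotFittedError for unfitted estimators
    fitted = True
except NotFittedError:
    fitted = False

print(fitted)  # False: a converter would reject this model up front
```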
Co-authored-by: SangamSwadiK <sangamswadiK@users.noreply.github.com>
* Testing skl 1.1.1 again
* scikit-learn 1.1.1 requires Python >= 3.8; I think it's best to drop 3.7
* checking errors if remove pin
* pulling two tests for now
* putting this back before i forget. TODO: fix and add back 2 onnx tests
* adding raises for the False/False scaler
* simplifying test
* debugging skl hist gb
* skip test for dbg
* for now, bad hack around extra params
* adding back others as well
* adding back others as well
* n_components was 10 but must be <= 3; follows PCA validation updates in skl 1.1.1
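The tightened PCA constraint is that `n_components` may not exceed `min(n_samples, n_features)`, and `whiten` must be a real bool; a minimal sketch with hypothetical 20×3 data:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(0).rand(20, 3)  # 20 samples, 3 features

# n_components must be <= min(n_samples, n_features) == 3 here, so an
# old value of 10 now raises; `whiten` likewise must be a real bool.
pca = PCA(n_components=3, whiten=True).fit(X)
print(pca.components_.shape)  # (3, 3)
```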
* Matteo's changes
* wip
* fixing param order
* fix atol for regression chain
* more fixes
* fixing flake issues
* test data in float32
* test data in float32
* limiting tree depth for perf_tree_trav
* rtol 1e-4
* converting back to float32 after float64 tree operations
* better renaming of variables
* fix order
* wip
* explicitly specify dtype
* removing .predict test. increasing tolerance
* small refactoring
* fix missing self
* just the tensor without dtype is sufficient
* per-init one
Co-authored-by: Matteo Interlandi <mainterl@microsoft.com>
Co-authored-by: snakanda <snakanda@node.testvm.orion-pg0.wisc.cloudlab.us>