* basic gpu_linear_tree_learner implementation
* corresponding config for the gpu linear tree learner
* Update src/io/config.cpp
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* workaround for gpu linear tree learner without gpu enabled
* add #endif
* add #ifdef USE_GPU
* fix lint problems
* fix compilation when USE_GPU is OFF
* add destructor
* add gpu_linear_tree_learner.cpp in make file list
* use template for linear tree learner
---------
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
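The block above replaces a hand-written GPU copy of the linear tree learner with a single template over the base learner (in C++, roughly `LinearTreeLearner<TREE_LEARNER_TYPE>`). A minimal Python sketch of that dispatch pattern, with all class and key names hypothetical, not LightGBM's actual code:

```python
# Sketch of "use template for linear tree learner": write the linear-leaf
# logic once and parameterize it by the base learner. Names are illustrative.

class SerialTreeLearner:
    """Stand-in for the CPU base learner."""
    def train(self, data):
        return {"num_leaves": 2, "device": "cpu"}

class GPUTreeLearner(SerialTreeLearner):
    """Stand-in for the GPU base learner."""
    def train(self, data):
        tree = super().train(data)
        tree["device"] = "gpu"
        return tree

def make_linear_tree_learner(base_cls):
    """Wrap any base learner with linear-leaf fitting, written once."""
    class LinearTreeLearner(base_cls):
        def train(self, data):
            tree = super().train(data)
            # fit a linear model in each leaf instead of a constant
            tree["leaf_models"] = ["linear"] * tree["num_leaves"]
            return tree
    return LinearTreeLearner
```

The same wrapper then serves both devices, which is why the separate `gpu_linear_tree_learner` source only needs the `#ifdef USE_GPU` guards mentioned above rather than a full reimplementation.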
* add bagging by query for lambdarank
* fix pre-commit
* fix bagging by query with cuda
* fix bagging by query test case
* fix bagging by query test case
* fix bagging by query test case
* add #include <vector>
* Update include/LightGBM/objective_function.h
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Update tests/python_package_test/test_engine.py
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Update tests/python_package_test/test_engine.py
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
---------
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
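"Bagging by query" samples whole query groups rather than individual rows, so a ranking loss like lambdarank always sees every document of a sampled query. A hedged sketch of that idea (function and variable names are illustrative, not LightGBM internals):

```python
import numpy as np

def bagging_by_query(query_boundaries, bagging_fraction, rng):
    """Sample whole queries, not rows: query q owns rows
    [query_boundaries[q], query_boundaries[q+1]). Illustrative sketch."""
    num_queries = len(query_boundaries) - 1
    num_sampled = max(1, int(num_queries * bagging_fraction))
    chosen = np.sort(rng.choice(num_queries, size=num_sampled, replace=False))
    # gather every row of every sampled query
    rows = np.concatenate([
        np.arange(query_boundaries[q], query_boundaries[q + 1]) for q in chosen
    ])
    return chosen, rows
```

Row-level bagging would instead drop arbitrary documents out of a query, distorting the pairwise comparisons lambdarank is built on.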
* [python-package] Allow to pass early stopping min delta in params
* Fix test
* Add separate test
* Fix
* Add to cpp config
* Adjust test
* Adjust test
* Debug
* Revert
* Apply suggestions from code review
---------
Co-authored-by: James Lamb <jaylamb20@gmail.com>
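The early-stopping min delta added here means a metric must improve by *more than* the threshold to reset the patience counter. A small sketch of those semantics for a lower-is-better metric (the helper names are hypothetical; only the threshold behaviour is taken from this PR):

```python
def improved(score, best_score, min_delta):
    """A lower-is-better score counts as an improvement only if it beats
    the best seen so far by more than min_delta (sketch, not LightGBM code)."""
    return score < best_score - min_delta

def should_stop(scores, stopping_rounds, min_delta=0.0):
    """Stop once `stopping_rounds` consecutive rounds fail to improve."""
    best = float("inf")
    since_best = 0
    for s in scores:
        if improved(s, best, min_delta):
            best = s
            since_best = 0
        else:
            since_best += 1
            if since_best >= stopping_rounds:
                return True
    return False
```

With `min_delta=0.0` this reduces to ordinary early stopping; a positive delta filters out marginal, noise-level improvements.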
* support quantized training with categorical features on cpu
* remove white spaces
* add tests for quantized training with categorical features
* skip tests for cuda version
* fix cases where there is only 1 data block in row-wise quantized histogram construction with 8 inner bits
* remove useless capture
* fix compilation warnings
* revert useless changes
* revert useless change
* separate functions in feature histogram into cpp file
* add feature_histogram.o in Makevars
* add quantized training (first stage)
* add histogram construction functions for integer gradients
* add stochastic rounding
* update docs
* fix compilation errors by adding template instantiations
* update files for compilation
* fix compilation of gpu version
* initialize gradient discretizer before share states
* add a test case for quantized training
* add quantized training for data distributed training
* Delete origin.pred
* Delete ifelse.pred
* Delete LightGBM_model.txt
* remove useless changes
* fix lint error
* remove debug loggings
* fix mismatch of vector and allocator types
* remove changes in main.cpp
* fix bugs with uninitialized gradient discretizer
* initialize ordered gradients in gradient discretizer
* disable quantized training with gpu and cuda
* fix msvc compilation errors and warnings
* fix bug in data parallel tree learner
* make quantized training test deterministic
* make quantized training in test case more accurate
* refactor test_quantized_training
* fix leaf splits initialization with quantized training
* check distributed quantized training result
* add cuda gradient discretizer
* add quantized training for CUDA version in tree learner
* remove cuda compute capability 6.1 and 6.2
* fix parts of gpu quantized training errors and warnings
* fix build-python.sh to install locally built version
* fix memory access bugs
* fix lint errors
* mark cuda quantized training on cuda with categorical features as unsupported
* rename cuda_utils.h to cuda_utils.hu
* enable quantized training with cuda
* fix cuda quantized training with sparse row data
* allow using global memory buffer in histogram construction with cuda quantized training
* recover build-python.sh
* enlarge allowed package size to 100M
* fix leaf splits update after split in quantized training
* fix preparation of ordered gradients for quantized training
* remove force_row_wise in distributed test for quantized training
* Update src/treelearner/leaf_splits.hpp
---------
Co-authored-by: James Lamb <jaylamb20@gmail.com>
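Two commits above ("add quantized training (first stage)", "add stochastic rounding") describe discretizing float gradients into small integers, rounding stochastically so the quantization error is zero in expectation. A hedged sketch of that step (scaling scheme and names are illustrative, not LightGBM's gradient discretizer):

```python
import numpy as np

def discretize_gradients(grads, num_levels=256, rng=None):
    """Quantize float gradients to small integers with stochastic rounding.
    Illustrative sketch: a value with fractional part f rounds up with
    probability f, so the rounded value is unbiased in expectation."""
    if rng is None:
        rng = np.random.default_rng(0)
    max_abs = np.max(np.abs(grads))
    scale = (num_levels / 2 - 1) / max_abs  # map into roughly [-127, 127]
    scaled = grads * scale
    floor = np.floor(scaled)
    frac = scaled - floor
    # round up with probability equal to the fractional part
    q = (floor + (rng.random(len(grads)) < frac)).astype(np.int32)
    return q, scale
```

Histograms can then be accumulated in integer arithmetic and the totals divided by `scale` at split-finding time, which is what the "histogram construction functions for integer gradients" commits add.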
* Update regression_objective.hpp
* Update regression_objective.hpp
May still need a (1.0 - alpha) factor
* fix position in percentile calculation
* fix regression metric threshold for l1
---------
Co-authored-by: shiyu1994 <shiyu_k1994@qq.com>
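The "fix position in percentile calculation" commit concerns where the fractional rank lands in the sorted array when initialising a quantile/l1 objective. A minimal unweighted sketch of position-based percentile with linear interpolation (LightGBM's actual implementation also handles sample weights and ties):

```python
def percentile(values, alpha):
    """Return the alpha-percentile of values via the fractional position
    alpha * (n - 1) into the sorted array. Illustrative sketch only."""
    sorted_v = sorted(values)
    n = len(sorted_v)
    position = alpha * (n - 1)      # fractional rank; getting this wrong
    lo = int(position)              # shifts the whole percentile
    hi = min(lo + 1, n - 1)
    frac = position - lo
    return sorted_v[lo] * (1.0 - frac) + sorted_v[hi] * frac
```

An off-by-one in `position` (e.g. `alpha * n`) biases the initial score, which is the kind of error the commits above correct.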
* add cuda quantile regression objective
* remove white space
* resolve merge conflicts
* remove useless changes
* remove useless changes
* enable cuda quantile regression objective
* add a test case for quantile regression objective
* remove useless changes
* remove useless changes
* reduce DP_SHARED_HIST_SIZE to 5176 for CUDA 10
---------
Co-authored-by: James Lamb <jaylamb20@gmail.com>
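The CUDA quantile regression objective added above evaluates the pinball loss and its gradient per element. A hedged scalar sketch of that gradient (the loss form is standard; LightGBM's objective additionally supplies a hessian, typically a constant, which is not shown here):

```python
def quantile_loss_grad(pred, label, alpha):
    """Gradient of the pinball (quantile) loss w.r.t. the prediction:
        loss = alpha * max(label - pred, 0) + (1 - alpha) * max(pred - label, 0)
    Illustrative sketch of the per-element math, not LightGBM's CUDA kernel."""
    if pred >= label:
        return 1.0 - alpha   # over-prediction penalised by (1 - alpha)
    return -alpha            # under-prediction penalised by alpha
```

The asymmetry between the two branches is what pushes the fitted value toward the alpha-quantile of the labels rather than the mean.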