
.. List of parameters is auto generated by LightGBM\helpers\parameter_generator.py from LightGBM\include\LightGBM\config.h file.

.. role:: raw-html(raw)
   :format: html

Parameters
==========
This page contains descriptions of all parameters in LightGBM.

**List of other helpful links**

- `Python API <./Python-API.rst>`__

- `Parameters Tuning <./Parameters-Tuning.rst>`__

Parameters Format
-----------------

The parameters format is ``key1=value1 key2=value2 ...``.

Parameters can be set both in a config file and on the command line.

On the command line, parameters should not have spaces before and after ``=``.

In config files, each line may contain only one parameter. You can use ``#`` for comments.

If one parameter appears in both the command line and a config file, LightGBM will use the one from the command line.
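
For example, a config file might look like the following excerpt (a hypothetical ``train.conf``; the parameter names are real, the values are only illustrative):

.. code-block:: none

    # one parameter per line; everything after '#' is a comment
    task = train
    objective = binary
    data = binary.train
    learning_rate = 0.05
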
For the Python and R packages, any parameters that accept a list of values (usually they have ``multi-xxx`` type, e.g. ``multi-int`` or ``multi-double``) can be specified in those languages' default array types.

For example, ``monotone_constraints`` can be specified as follows.

**Python**

.. code-block:: python

    params = {
        "monotone_constraints": [-1, 0, 1]
    }

**R**

.. code-block:: r

    params <- list(
        monotone_constraints = c(-1, 0, 1)
    )
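
Such a ``params`` object is then passed to the training function. A minimal Python sketch (the synthetic dataset and parameter values are only illustrative):

.. code-block:: python

    import lightgbm as lgb
    from sklearn.datasets import make_regression

    # synthetic data with 3 features, matching the 3 constraints above
    X, y = make_regression(n_samples=1000, n_features=3, random_state=42)

    params = {
        "objective": "regression",
        "monotone_constraints": [-1, 0, 1],  # one entry per feature
    }

    train_data = lgb.Dataset(X, label=y)
    booster = lgb.train(params, train_data, num_boost_round=50)
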

.. start params list

Core Parameters
---------------
- ``config`` :raw-html:`<a id="config" title="Permalink to this parameter" href="#config">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``config_file``
- path of config file
- **Note**: can be used only in CLI version
- ``task`` :raw-html:`<a id="task" title="Permalink to this parameter" href="#task">&#x1F517;&#xFE0E;</a>`, default = ``train``, type = enum, options: ``train``, ``predict``, ``convert_model``, ``refit``, ``save_binary``, aliases: ``task_type``
- ``train``, for training, aliases: ``training``
- ``predict``, for prediction, aliases: ``prediction``, ``test``
- ``convert_model``, for converting model file into if-else format, see more information in `Convert Parameters <#convert-parameters>`__
- ``refit``, for refitting existing models with new data, aliases: ``refit_tree``
- ``save_binary``, load train (and validation) data then save dataset to binary file. Typical usage: ``save_binary`` first, then run multiple ``train`` tasks in parallel using the saved binary file
- **Note**: can be used only in CLI version; for language-specific packages you can use the corresponding functions
- ``objective`` :raw-html:`<a id="objective" title="Permalink to this parameter" href="#objective">&#x1F517;&#xFE0E;</a>`, default = ``regression``, type = enum, options: ``regression``, ``regression_l1``, ``huber``, ``fair``, ``poisson``, ``quantile``, ``mape``, ``gamma``, ``tweedie``, ``binary``, ``multiclass``, ``multiclassova``, ``cross_entropy``, ``cross_entropy_lambda``, ``lambdarank``, ``rank_xendcg``, aliases: ``objective_type``, ``app``, ``application``, ``loss``
- regression application
- ``regression``, L2 loss, aliases: ``regression_l2``, ``l2``, ``mean_squared_error``, ``mse``, ``l2_root``, ``root_mean_squared_error``, ``rmse``
- ``regression_l1``, L1 loss, aliases: ``l1``, ``mean_absolute_error``, ``mae``
- ``huber``, `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__
- ``fair``, `Fair loss <https://www.kaggle.com/c/allstate-claims-severity/discussion/24520>`__
- ``poisson``, `Poisson regression <https://en.wikipedia.org/wiki/Poisson_regression>`__
- ``quantile``, `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__
- ``mape``, `MAPE loss <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`__, aliases: ``mean_absolute_percentage_error``
- ``gamma``, Gamma regression with log-link. It might be useful, e.g., for modeling insurance claims severity, or for any target that might be `gamma-distributed <https://en.wikipedia.org/wiki/Gamma_distribution#Occurrence_and_applications>`__
- ``tweedie``, Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any target that might be `tweedie-distributed <https://en.wikipedia.org/wiki/Tweedie_distribution#Occurrence_and_applications>`__
- binary classification application
- ``binary``, binary `log loss <https://en.wikipedia.org/wiki/Cross_entropy>`__ classification (or logistic regression)
- requires labels in {0, 1}; see ``cross-entropy`` application for general probability labels in [0, 1]
- multi-class classification application
- ``multiclass``, `softmax <https://en.wikipedia.org/wiki/Softmax_function>`__ objective function, aliases: ``softmax``
- ``multiclassova``, `One-vs-All <https://en.wikipedia.org/wiki/Multiclass_classification#One-vs.-rest>`__ binary objective function, aliases: ``multiclass_ova``, ``ova``, ``ovr``
- ``num_class`` should be set as well
- cross-entropy application
- ``cross_entropy``, objective function for cross-entropy (with optional linear weights), aliases: ``xentropy``
- ``cross_entropy_lambda``, alternative parameterization of cross-entropy, aliases: ``xentlambda``
- label is anything in interval [0, 1]
- ranking application
- ``lambdarank``, `lambdarank <https://proceedings.neurips.cc/paper_files/paper/2006/file/af44c4c56f385c43f2529f9b1b018f6a-Paper.pdf>`__ objective. `label_gain <#label_gain>`__ can be used to set the gain (weight) of ``int`` label, and all values in ``label`` must be smaller than the number of elements in ``label_gain``
- ``rank_xendcg``, `XE_NDCG_MART <https://arxiv.org/abs/1911.09798>`__ ranking objective function, aliases: ``xendcg``, ``xe_ndcg``, ``xe_ndcg_mart``, ``xendcg_mart``
- ``rank_xendcg`` is faster than ``lambdarank`` and achieves similar performance
- label should be ``int`` type, and a larger number represents higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)
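
As an illustration of the ranking objectives, here is a minimal ``lambdarank`` sketch in Python (synthetic data; the query sizes and ``label_gain`` values are only illustrative):

.. code-block:: python

    import lightgbm as lgb
    import numpy as np

    # toy learning-to-rank setup: 100 documents in 10 queries of 10 documents each
    rng = np.random.default_rng(42)
    X = rng.normal(size=(100, 5))
    y = rng.integers(0, 4, size=100)  # integer relevance labels 0..3
    group = [10] * 10                 # documents per query, in order

    params = {
        "objective": "lambdarank",
        "metric": "ndcg",
        # one gain value per possible label; len(label_gain) must exceed max(y)
        "label_gain": [0, 1, 3, 7],
    }

    train_data = lgb.Dataset(X, label=y, group=group)
    booster = lgb.train(params, train_data, num_boost_round=10)
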
- ``boosting`` :raw-html:`<a id="boosting" title="Permalink to this parameter" href="#boosting">&#x1F517;&#xFE0E;</a>`, default = ``gbdt``, type = enum, options: ``gbdt``, ``rf``, ``dart``, aliases: ``boosting_type``, ``boost``
- ``gbdt``, traditional Gradient Boosting Decision Tree, aliases: ``gbrt``
- ``rf``, Random Forest, aliases: ``random_forest``
- ``dart``, `Dropouts meet Multiple Additive Regression Trees <https://arxiv.org/abs/1505.01866>`__
- **Note**: internally, LightGBM uses ``gbdt`` mode for the first ``1 / learning_rate`` iterations
- ``data_sample_strategy`` :raw-html:`<a id="data_sample_strategy" title="Permalink to this parameter" href="#data_sample_strategy">&#x1F517;&#xFE0E;</a>`, default = ``bagging``, type = enum, options: ``bagging``, ``goss``
- ``bagging``, Randomly Bagging Sampling
- **Note**: ``bagging`` is only effective when ``bagging_freq > 0`` and ``bagging_fraction < 1.0``
- ``goss``, Gradient-based One-Side Sampling
- *New in 4.0.0*
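
For example, GOSS sampling is enabled through ``data_sample_strategy`` rather than through ``boosting``; a minimal sketch (the rate values are only illustrative):

.. code-block:: python

    params = {
        "boosting": "gbdt",
        "data_sample_strategy": "goss",
        "top_rate": 0.2,    # retain ratio of large-gradient data
        "other_rate": 0.1,  # retain ratio of small-gradient data
    }
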
- ``data`` :raw-html:`<a id="data" title="Permalink to this parameter" href="#data">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``train``, ``train_data``, ``train_data_file``, ``data_filename``
- path of training data, LightGBM will train from this data
- **Note**: can be used only in CLI version
- ``valid`` :raw-html:`<a id="valid" title="Permalink to this parameter" href="#valid">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``test``, ``valid_data``, ``valid_data_file``, ``test_data``, ``test_data_file``, ``valid_filenames``
- path(s) of validation/test data, LightGBM will output metrics for these data
- support multiple validation data, separated by ``,``
- **Note**: can be used only in CLI version
- ``num_iterations`` :raw-html:`<a id="num_iterations" title="Permalink to this parameter" href="#num_iterations">&#x1F517;&#xFE0E;</a>`, default = ``100``, type = int, aliases: ``num_iteration``, ``n_iter``, ``num_tree``, ``num_trees``, ``num_round``, ``num_rounds``, ``nrounds``, ``num_boost_round``, ``n_estimators``, ``max_iter``, constraints: ``num_iterations >= 0``
- number of boosting iterations
- **Note**: internally, LightGBM constructs ``num_class * num_iterations`` trees for multi-class classification problems
- ``learning_rate`` :raw-html:`<a id="learning_rate" title="Permalink to this parameter" href="#learning_rate">&#x1F517;&#xFE0E;</a>`, default = ``0.1``, type = double, aliases: ``shrinkage_rate``, ``eta``, constraints: ``learning_rate > 0.0``
- shrinkage rate
- in ``dart``, it also affects the normalization weights of dropped trees
- ``num_leaves`` :raw-html:`<a id="num_leaves" title="Permalink to this parameter" href="#num_leaves">&#x1F517;&#xFE0E;</a>`, default = ``31``, type = int, aliases: ``num_leaf``, ``max_leaves``, ``max_leaf``, ``max_leaf_nodes``, constraints: ``1 < num_leaves <= 131072``
- max number of leaves in one tree
- ``tree_learner`` :raw-html:`<a id="tree_learner" title="Permalink to this parameter" href="#tree_learner">&#x1F517;&#xFE0E;</a>`, default = ``serial``, type = enum, options: ``serial``, ``feature``, ``data``, ``voting``, aliases: ``tree``, ``tree_type``, ``tree_learner_type``
- ``serial``, single machine tree learner
- ``feature``, feature parallel tree learner, aliases: ``feature_parallel``
- ``data``, data parallel tree learner, aliases: ``data_parallel``
- ``voting``, voting parallel tree learner, aliases: ``voting_parallel``
- refer to `Distributed Learning Guide <./Parallel-Learning-Guide.rst>`__ to get more details
- ``num_threads`` :raw-html:`<a id="num_threads" title="Permalink to this parameter" href="#num_threads">&#x1F517;&#xFE0E;</a>`, default = ``0``, type = int, aliases: ``num_thread``, ``nthread``, ``nthreads``, ``n_jobs``
- used only in ``train``, ``prediction`` and ``refit`` tasks or in the corresponding functions of language-specific packages
- number of threads for LightGBM
- ``0`` means default number of threads in OpenMP
- for the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use `hyper-threading <https://en.wikipedia.org/wiki/Hyper-threading>`__ to generate 2 threads per CPU core)
- do not set it too large if your dataset is small (for instance, do not use 64 threads for a dataset with 10,000 rows)
- be aware that a task manager or any similar CPU monitoring tool might report cores not being fully utilized. **This is normal**
- for distributed learning, do not use all CPU cores because this will cause poor performance for the network communication
- **Note**: please **don't** change this during training, especially when running multiple jobs simultaneously by external packages, otherwise it may cause undesirable errors
- ``device_type`` :raw-html:`<a id="device_type" title="Permalink to this parameter" href="#device_type">&#x1F517;&#xFE0E;</a>`, default = ``cpu``, type = enum, options: ``cpu``, ``gpu``, ``cuda``, aliases: ``device``
- device for the tree learning
- ``cpu`` supports all LightGBM functionality and is portable across the widest range of operating systems and hardware
- ``cuda`` offers faster training than ``gpu`` or ``cpu``, but only works on GPUs supporting CUDA
- ``gpu`` can be faster than ``cpu`` and works on a wider range of GPUs than CUDA
- **Note**: it is recommended to use a smaller ``max_bin`` (e.g. 63) to get a better speedup
- **Note**: for faster speed, GPU by default uses 32-bit floating point to sum up, so this may affect the accuracy for some tasks. You can set ``gpu_use_dp=true`` to enable 64-bit floating point, but it will slow down training
- **Note**: refer to `Installation Guide <./Installation-Guide.rst#build-gpu-version>`__ to build LightGBM with GPU support
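
A sketch of GPU training settings following the notes above (assuming a CUDA-capable build; the values are only illustrative):

.. code-block:: python

    params = {
        "device_type": "cuda",  # or "gpu" for the OpenCL-based build
        "max_bin": 63,          # smaller max_bin tends to speed up GPU training
        "gpu_use_dp": False,    # set True for 64-bit accumulation, at the cost of speed
    }
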
- ``seed`` :raw-html:`<a id="seed" title="Permalink to this parameter" href="#seed">&#x1F517;&#xFE0E;</a>`, default = ``None``, type = int, aliases: ``random_seed``, ``random_state``
- this seed is used to generate other seeds, e.g. ``data_random_seed``, ``feature_fraction_seed``, etc.
- by default, this seed is unused in favor of default values of other seeds
- this seed has lower priority in comparison with other seeds, which means that it will be overridden, if you set other seeds explicitly
- ``deterministic`` :raw-html:`<a id="deterministic" title="Permalink to this parameter" href="#deterministic">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only with ``cpu`` device type
- setting this to ``true`` should ensure stable results when using the same data and the same parameters (even with different ``num_threads``)
- when you use different seeds, different LightGBM versions, binaries compiled by different compilers, or different systems, the results are expected to be different
- you can `raise issues <https://github.com/microsoft/LightGBM/issues>`__ in the LightGBM GitHub repo when you encounter unstable results
- **Note**: setting this to ``true`` may slow down the training
- **Note**: to avoid potential instability due to numerical issues, please set ``force_col_wise=true`` or ``force_row_wise=true`` when setting ``deterministic=true``
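
Putting the reproducibility-related parameters together, a minimal sketch (the seed value is only illustrative):

.. code-block:: python

    params = {
        "deterministic": True,
        "force_row_wise": True,  # pin the histogram-building mode, per the note above
        "seed": 42,              # master seed from which the other seeds are derived
    }
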

Learning Control Parameters
---------------------------
- ``force_col_wise`` :raw-html:`<a id="force_col_wise" title="Permalink to this parameter" href="#force_col_wise">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only with ``cpu`` device type
- set this to ``true`` to force col-wise histogram building
- enabling this is recommended when:
- the number of columns is large, or the total number of bins is large
- ``num_threads`` is large, e.g. ``> 20``
- you want to reduce memory cost
- **Note**: when both ``force_col_wise`` and ``force_row_wise`` are ``false``, LightGBM will first try them both, and then use the faster one. To remove the overhead of testing, set the faster one to ``true`` manually
- **Note**: this parameter cannot be used at the same time with ``force_row_wise``, choose only one of them
- ``force_row_wise`` :raw-html:`<a id="force_row_wise" title="Permalink to this parameter" href="#force_row_wise">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only with ``cpu`` device type
- set this to ``true`` to force row-wise histogram building
- enabling this is recommended when:
- the number of data points is large, and the total number of bins is relatively small
- ``num_threads`` is relatively small, e.g. ``<= 16``
- you want to use a small ``bagging_fraction`` or the ``goss`` sample strategy to speed up training
- **Note**: setting this to ``true`` will double the memory cost for the Dataset object. If you do not have enough memory, you can try setting ``force_col_wise=true``
- **Note**: when both ``force_col_wise`` and ``force_row_wise`` are ``false``, LightGBM will first try them both, and then use the faster one. To remove the overhead of testing, set the faster one to ``true`` manually
- **Note**: this parameter cannot be used at the same time with ``force_col_wise``, choose only one of them
- ``histogram_pool_size`` :raw-html:`<a id="histogram_pool_size" title="Permalink to this parameter" href="#histogram_pool_size">&#x1F517;&#xFE0E;</a>`, default = ``-1.0``, type = double, aliases: ``hist_pool_size``
- max cache size in MB for historical histogram
- ``< 0`` means no limit
- ``max_depth`` :raw-html:`<a id="max_depth" title="Permalink to this parameter" href="#max_depth">&#x1F517;&#xFE0E;</a>`, default = ``-1``, type = int
- limit the max depth for tree model. This is used to deal with over-fitting when ``#data`` is small. Tree still grows leaf-wise
- ``<= 0`` means no limit
- ``min_data_in_leaf`` :raw-html:`<a id="min_data_in_leaf" title="Permalink to this parameter" href="#min_data_in_leaf">&#x1F517;&#xFE0E;</a>`, default = ``20``, type = int, aliases: ``min_data_per_leaf``, ``min_data``, ``min_child_samples``, ``min_samples_leaf``, constraints: ``min_data_in_leaf >= 0``
- minimal number of data in one leaf. Can be used to deal with over-fitting
- **Note**: this is an approximation based on the Hessian, so occasionally you may observe splits which produce leaf nodes that have fewer than this many observations
- ``min_sum_hessian_in_leaf`` :raw-html:`<a id="min_sum_hessian_in_leaf" title="Permalink to this parameter" href="#min_sum_hessian_in_leaf">&#x1F517;&#xFE0E;</a>`, default = ``1e-3``, type = double, aliases: ``min_sum_hessian_per_leaf``, ``min_sum_hessian``, ``min_hessian``, ``min_child_weight``, constraints: ``min_sum_hessian_in_leaf >= 0.0``
- minimal sum of the Hessian in one leaf. Like ``min_data_in_leaf``, it can be used to deal with over-fitting
- ``bagging_fraction`` :raw-html:`<a id="bagging_fraction" title="Permalink to this parameter" href="#bagging_fraction">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, aliases: ``sub_row``, ``subsample``, ``bagging``, constraints: ``0.0 < bagging_fraction <= 1.0``
- like ``feature_fraction``, but this will randomly select part of data without resampling
- can be used to speed up training
- can be used to deal with over-fitting
- **Note**: to enable bagging, ``bagging_freq`` should be set to a non-zero value as well
- ``pos_bagging_fraction`` :raw-html:`<a id="pos_bagging_fraction" title="Permalink to this parameter" href="#pos_bagging_fraction">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, aliases: ``pos_sub_row``, ``pos_subsample``, ``pos_bagging``, constraints: ``0.0 < pos_bagging_fraction <= 1.0``
- used only in ``binary`` application
- used for imbalanced binary classification problem, will randomly sample ``#pos_samples * pos_bagging_fraction`` positive samples in bagging
- should be used together with ``neg_bagging_fraction``
- set this to ``1.0`` to disable
- **Note**: to enable this, you need to set ``bagging_freq`` and ``neg_bagging_fraction`` as well
- **Note**: if both ``pos_bagging_fraction`` and ``neg_bagging_fraction`` are set to ``1.0``, balanced bagging is disabled
- **Note**: if balanced bagging is enabled, ``bagging_fraction`` will be ignored
- ``neg_bagging_fraction`` :raw-html:`<a id="neg_bagging_fraction" title="Permalink to this parameter" href="#neg_bagging_fraction">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, aliases: ``neg_sub_row``, ``neg_subsample``, ``neg_bagging``, constraints: ``0.0 < neg_bagging_fraction <= 1.0``
- used only in ``binary`` application
- used for imbalanced binary classification problem, will randomly sample ``#neg_samples * neg_bagging_fraction`` negative samples in bagging
- should be used together with ``pos_bagging_fraction``
- set this to ``1.0`` to disable
- **Note**: to enable this, you need to set ``bagging_freq`` and ``pos_bagging_fraction`` as well
- **Note**: if both ``pos_bagging_fraction`` and ``neg_bagging_fraction`` are set to ``1.0``, balanced bagging is disabled
- **Note**: if balanced bagging is enabled, ``bagging_fraction`` will be ignored
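
For an imbalanced binary task, the three parameters above work together; a minimal sketch (the fractions are only illustrative):

.. code-block:: python

    params = {
        "objective": "binary",
        "bagging_freq": 1,            # must be non-zero for balanced bagging to run
        "pos_bagging_fraction": 0.5,  # sample 50% of positive examples in each bag
        "neg_bagging_fraction": 0.1,  # sample 10% of negative examples in each bag
    }
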
- ``bagging_freq`` :raw-html:`<a id="bagging_freq" title="Permalink to this parameter" href="#bagging_freq">&#x1F517;&#xFE0E;</a>`, default = ``0``, type = int, aliases: ``subsample_freq``
- frequency for bagging
- ``0`` means disable bagging; ``k`` means perform bagging every ``k`` iterations: at every ``k``-th iteration, LightGBM will randomly select ``bagging_fraction * 100 %`` of the data to use for the next ``k`` iterations
- **Note**: bagging is only effective when ``0.0 < bagging_fraction < 1.0``
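
Plain bagging requires both parameters; a minimal sketch (the values are only illustrative):

.. code-block:: python

    params = {
        "bagging_fraction": 0.8,  # use 80% of rows...
        "bagging_freq": 5,        # ...re-sampled every 5 iterations
    }
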
- ``bagging_seed`` :raw-html:`<a id="bagging_seed" title="Permalink to this parameter" href="#bagging_seed">&#x1F517;&#xFE0E;</a>`, default = ``3``, type = int, aliases: ``bagging_fraction_seed``
- random seed for bagging
- ``feature_fraction`` :raw-html:`<a id="feature_fraction" title="Permalink to this parameter" href="#feature_fraction">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, aliases: ``sub_feature``, ``colsample_bytree``, constraints: ``0.0 < feature_fraction <= 1.0``
- LightGBM will randomly select a subset of features on each iteration (tree) if ``feature_fraction`` is smaller than ``1.0``. For example, if you set it to ``0.8``, LightGBM will select 80% of features before training each tree
- can be used to speed up training
- can be used to deal with over-fitting
- ``feature_fraction_bynode`` :raw-html:`<a id="feature_fraction_bynode" title="Permalink to this parameter" href="#feature_fraction_bynode">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, aliases: ``sub_feature_bynode``, ``colsample_bynode``, constraints: ``0.0 < feature_fraction_bynode <= 1.0``
- LightGBM will randomly select a subset of features on each tree node if ``feature_fraction_bynode`` is smaller than ``1.0``. For example, if you set it to ``0.8``, LightGBM will select 80% of features at each tree node
- can be used to deal with over-fitting
- **Note**: unlike ``feature_fraction``, this cannot speed up training
- **Note**: if both ``feature_fraction`` and ``feature_fraction_bynode`` are smaller than ``1.0``, the final fraction of each node is ``feature_fraction * feature_fraction_bynode``
- ``feature_fraction_seed`` :raw-html:`<a id="feature_fraction_seed" title="Permalink to this parameter" href="#feature_fraction_seed">&#x1F517;&#xFE0E;</a>`, default = ``2``, type = int
- random seed for ``feature_fraction``
- ``extra_trees`` :raw-html:`<a id="extra_trees" title="Permalink to this parameter" href="#extra_trees">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``extra_tree``
- use extremely randomized trees
- if set to ``true``, when evaluating node splits LightGBM will check only one randomly-chosen threshold for each feature
- can be used to speed up training
- can be used to deal with over-fitting
- ``extra_seed`` :raw-html:`<a id="extra_seed" title="Permalink to this parameter" href="#extra_seed">&#x1F517;&#xFE0E;</a>`, default = ``6``, type = int
- random seed for selecting thresholds when ``extra_trees`` is true
- ``early_stopping_round`` :raw-html:`<a id="early_stopping_round" title="Permalink to this parameter" href="#early_stopping_round">&#x1F517;&#xFE0E;</a>`, default = ``0``, type = int, aliases: ``early_stopping_rounds``, ``early_stopping``, ``n_iter_no_change``
- will stop training if one metric of one validation data doesn't improve in the last ``early_stopping_round`` rounds
- ``<= 0`` means disable
- can be used to speed up training
- ``first_metric_only`` :raw-html:`<a id="first_metric_only" title="Permalink to this parameter" href="#first_metric_only">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- LightGBM allows you to provide multiple evaluation metrics. Set this to ``true``, if you want to use only the first metric for early stopping
- ``max_delta_step`` :raw-html:`<a id="max_delta_step" title="Permalink to this parameter" href="#max_delta_step">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, aliases: ``max_tree_output``, ``max_leaf_output``
- used to limit the max output of tree leaves
- ``<= 0`` means no constraint
- the final max output of leaves is ``learning_rate * max_delta_step``
- ``lambda_l1`` :raw-html:`<a id="lambda_l1" title="Permalink to this parameter" href="#lambda_l1">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, aliases: ``reg_alpha``, ``l1_regularization``, constraints: ``lambda_l1 >= 0.0``
- L1 regularization
- ``lambda_l2`` :raw-html:`<a id="lambda_l2" title="Permalink to this parameter" href="#lambda_l2">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, aliases: ``reg_lambda``, ``lambda``, ``l2_regularization``, constraints: ``lambda_l2 >= 0.0``
- L2 regularization
- ``linear_lambda`` :raw-html:`<a id="linear_lambda" title="Permalink to this parameter" href="#linear_lambda">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, constraints: ``linear_lambda >= 0.0``
- linear tree regularization, corresponds to the parameter ``lambda`` in Eq. 3 of `Gradient Boosting with Piece-Wise Linear Regression Trees <https://arxiv.org/pdf/1802.05640.pdf>`__
Trees with linear models at leaves (#3299) * Add Eigen library. * Working for simple test. * Apply changes to config params. * Handle nan data. * Update docs. * Add test. * Only load raw data if boosting=gbdt_linear * Remove unneeded code. * Minor updates. * Update to work with sk-learn interface. * Update to work with chunked datasets. * Throw error if we try to create a Booster with an already-constructed dataset having incompatible parameters. * Save raw data in binary dataset file. * Update docs and fix parameter checking. * Fix dataset loading. * Add test for regularization. * Fix bugs when saving and loading tree. * Add test for load/save linear model. * Remove unneeded code. * Fix case where not enough leaf data for linear model. * Simplify code. * Speed up code. * Speed up code. * Simplify code. * Speed up code. * Fix bugs. * Working version. * Store feature data column-wise (not fully working yet). * Fix bugs. * Speed up. * Speed up. * Remove unneeded code. * Small speedup. * Speed up. * Minor updates. * Remove unneeded code. * Fix bug. * Fix bug. * Speed up. * Speed up. * Simplify code. * Remove unneeded code. * Fix bug, add more tests. * Fix bug and add test. * Only store numerical features * Fix bug and speed up using templates. * Speed up prediction. * Fix bug with regularisation * Visual studio files. * Working version * Only check nans if necessary * Store coeff matrix as an array. * Align cache lines * Align cache lines * Preallocation coefficient calculation matrices * Small speedups * Small speedup * Reverse cache alignment changes * Change to dynamic schedule * Update docs. * Refactor so that linear tree learner is not a separate class. * Add refit capability. * Speed up * Small speedups. * Speed up add prediction to score. * Fix bug * Fix bug and speed up. * Speed up dataload. * Speed up dataload * Use vectors instead of pointers * Fix bug * Add OMP exception handling. * Change return type of LGBM_BoosterGetLinear to bool * Change return type of LGBM_BoosterGetLinear back to int, only parameter type needed to change * Remove unused internal_parent_ property of tree * Remove unused parameter to CreateTreeLearner * Remove reference to LinearTreeLearner * Minor style issues * Remove unneeded check * Reverse temporary testing change * Fix Visual Studio project files * Restore LightGBM.vcxproj.filters * Speed up * Speed up * Simplify code * Update docs * Simplify code * Initialise storage space for max num threads * Move Eigen to include directory and delete unused files * Remove old files. 
* Fix so it compiles with mingw * Fix gpu tree learner * Change AddPredictionToScore back to const * Fix python lint error * Fix C++ lint errors * Change eigen to a submodule * Update comment * Add the eigen folder * Try to fix build issues with eigen * Remove eigen files * Add eigen as submodule * Fix include paths * Exclude eigen files from Python linter * Ignore eigen folders for pydocstyle * Fix C++ linting errors * Fix docs * Fix docs * Exclude eigen directories from doxygen * Update manifest to include eigen * Update build_r to include eigen files * Fix compiler warnings * Store raw feature data as float * Use float for calculating linear coefficients * Remove eigen directory from GLOB * Don't compile linear model code when building R package * Fix doxygen issue * Fix lint issue * Fix lint issue * Remove uneeded code * Restore delected lines * Restore delected lines * Change return type of has_raw to bool * Update docs * Rename some variables and functions for readability * Make tree_learner parameter const in AddScore * Fix style issues * Pass vectors as const reference when setting tree properties * Make temporary storage of serial_tree_learner mutable so we can make the object's methods const * Remove get_raw_size, use num_numeric_features instead * Fix typo * Make contains_nan_ and any_nan_ properties immutable again * Remove data_has_nan_ property of tree * Remove temporary test code * Make linear_tree a dataset param * Fix lint error * Make LinearTreeLearner a separate class * Fix lint errors * Fix lint error * Add linear_tree_learner.o * Simulate omp_get_max_threads if openmp is not available * Update PushOneData to also store raw data. * Cast size to int * Fix bug in ReshapeRaw * Speed up code with multithreading * Use OMP_NUM_THREADS * Speed up with multithreading * Update to use ArrayToString * Fix tests * Fix test * Fix bug introduced in merge * Minor updates * Update docs
2020-12-24 09:01:23 +03:00
- ``min_gain_to_split`` :raw-html:`<a id="min_gain_to_split" title="Permalink to this parameter" href="#min_gain_to_split">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, aliases: ``min_split_gain``, constraints: ``min_gain_to_split >= 0.0``
- the minimal gain to perform split
- can be used to speed up training
- ``drop_rate`` :raw-html:`<a id="drop_rate" title="Permalink to this parameter" href="#drop_rate">&#x1F517;&#xFE0E;</a>`, default = ``0.1``, type = double, aliases: ``rate_drop``, constraints: ``0.0 <= drop_rate <= 1.0``
- used only in ``dart``
- dropout rate: a fraction of previous trees to drop during the dropout
- ``max_drop`` :raw-html:`<a id="max_drop" title="Permalink to this parameter" href="#max_drop">&#x1F517;&#xFE0E;</a>`, default = ``50``, type = int
- used only in ``dart``
- max number of dropped trees during one boosting iteration
- ``<=0`` means no limit
- ``skip_drop`` :raw-html:`<a id="skip_drop" title="Permalink to this parameter" href="#skip_drop">&#x1F517;&#xFE0E;</a>`, default = ``0.5``, type = double, constraints: ``0.0 <= skip_drop <= 1.0``
- used only in ``dart``
- probability of skipping the dropout procedure during a boosting iteration
- ``xgboost_dart_mode`` :raw-html:`<a id="xgboost_dart_mode" title="Permalink to this parameter" href="#xgboost_dart_mode">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only in ``dart``
- set this to ``true``, if you want to use xgboost dart mode
- ``uniform_drop`` :raw-html:`<a id="uniform_drop" title="Permalink to this parameter" href="#uniform_drop">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only in ``dart``
- set this to ``true``, if you want to use uniform drop
- ``drop_seed`` :raw-html:`<a id="drop_seed" title="Permalink to this parameter" href="#drop_seed">&#x1F517;&#xFE0E;</a>`, default = ``4``, type = int
- used only in ``dart``
- random seed to choose dropping models
- ``top_rate`` :raw-html:`<a id="top_rate" title="Permalink to this parameter" href="#top_rate">&#x1F517;&#xFE0E;</a>`, default = ``0.2``, type = double, constraints: ``0.0 <= top_rate <= 1.0``
- used only in ``goss``
- the retain ratio of large gradient data
- ``other_rate`` :raw-html:`<a id="other_rate" title="Permalink to this parameter" href="#other_rate">&#x1F517;&#xFE0E;</a>`, default = ``0.1``, type = double, constraints: ``0.0 <= other_rate <= 1.0``
- used only in ``goss``
- the retain ratio of small gradient data
- ``min_data_per_group`` :raw-html:`<a id="min_data_per_group" title="Permalink to this parameter" href="#min_data_per_group">&#x1F517;&#xFE0E;</a>`, default = ``100``, type = int, constraints: ``min_data_per_group > 0``
- minimal number of data per categorical group
- ``max_cat_threshold`` :raw-html:`<a id="max_cat_threshold" title="Permalink to this parameter" href="#max_cat_threshold">&#x1F517;&#xFE0E;</a>`, default = ``32``, type = int, constraints: ``max_cat_threshold > 0``
- used for the categorical features
- limit number of split points considered for categorical features. See `the documentation on how LightGBM finds optimal splits for categorical features <./Features.rst#optimal-split-for-categorical-features>`_ for more details
- can be used to speed up training
- ``cat_l2`` :raw-html:`<a id="cat_l2" title="Permalink to this parameter" href="#cat_l2">&#x1F517;&#xFE0E;</a>`, default = ``10.0``, type = double, constraints: ``cat_l2 >= 0.0``
- used for the categorical features
- L2 regularization in categorical split
- ``cat_smooth`` :raw-html:`<a id="cat_smooth" title="Permalink to this parameter" href="#cat_smooth">&#x1F517;&#xFE0E;</a>`, default = ``10.0``, type = double, constraints: ``cat_smooth >= 0.0``
- used for the categorical features
- this can reduce the effect of noise in categorical features, especially for categories with few data
- ``max_cat_to_onehot`` :raw-html:`<a id="max_cat_to_onehot" title="Permalink to this parameter" href="#max_cat_to_onehot">&#x1F517;&#xFE0E;</a>`, default = ``4``, type = int, constraints: ``max_cat_to_onehot > 0``
- when the number of categories of one feature is smaller than or equal to ``max_cat_to_onehot``, the one-vs-other split algorithm will be used
- ``top_k`` :raw-html:`<a id="top_k" title="Permalink to this parameter" href="#top_k">&#x1F517;&#xFE0E;</a>`, default = ``20``, type = int, aliases: ``topk``, constraints: ``top_k > 0``
- used only in ``voting`` tree learner, refer to `Voting parallel <./Parallel-Learning-Guide.rst#choose-appropriate-parallel-algorithm>`__
- set this to a larger value for more accurate results, but it will slow down training
- ``monotone_constraints`` :raw-html:`<a id="monotone_constraints" title="Permalink to this parameter" href="#monotone_constraints">&#x1F517;&#xFE0E;</a>`, default = ``None``, type = multi-int, aliases: ``mc``, ``monotone_constraint``, ``monotonic_cst``
- used for constraints of monotonic features
- ``1`` means increasing, ``-1`` means decreasing, ``0`` means no constraint
- you need to specify all features in order. For example, ``mc=-1,0,1`` means decreasing for the 1st feature, no constraint for the 2nd feature and increasing for the 3rd feature
- ``monotone_constraints_method`` :raw-html:`<a id="monotone_constraints_method" title="Permalink to this parameter" href="#monotone_constraints_method">&#x1F517;&#xFE0E;</a>`, default = ``basic``, type = enum, options: ``basic``, ``intermediate``, ``advanced``, aliases: ``monotone_constraining_method``, ``mc_method``
- used only if ``monotone_constraints`` is set
- monotone constraints method
- ``basic``, the most basic monotone constraints method. It does not slow the library at all, but over-constrains the predictions
- ``intermediate``, a `more advanced method <https://hal.science/hal-02862802/document>`__, which may slow the library very slightly. However, this method is much less constraining than the basic method and should significantly improve the results
- ``advanced``, an `even more advanced method <https://hal.science/hal-02862802/document>`__, which may slow the library. However, this method is even less constraining than the intermediate method and should again significantly improve the results
- ``monotone_penalty`` :raw-html:`<a id="monotone_penalty" title="Permalink to this parameter" href="#monotone_penalty">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, aliases: ``monotone_splits_penalty``, ``ms_penalty``, ``mc_penalty``, constraints: ``monotone_penalty >= 0.0``
- used only if ``monotone_constraints`` is set
- `monotone penalty <https://hal.science/hal-02862802/document>`__: a penalization parameter X forbids any monotone splits on the first X (rounded down) level(s) of the tree. The penalty applied to monotone splits on a given depth is a continuous, increasing function of the penalization parameter
- if ``0.0`` (the default), no penalization is applied
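As an illustration, the monotone constraint parameters above can be combined in a single Python parameter dictionary. This is only a sketch; the constraint vector is hypothetical and assumes a dataset with exactly three features.

.. code-block:: python

    params = {
        "objective": "regression",
        # one entry per feature: 1 = increasing, 0 = unconstrained, -1 = decreasing
        "monotone_constraints": [1, 0, -1],
        "monotone_constraints_method": "advanced",
        "monotone_penalty": 2.0,
    }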
- ``feature_contri`` :raw-html:`<a id="feature_contri" title="Permalink to this parameter" href="#feature_contri">&#x1F517;&#xFE0E;</a>`, default = ``None``, type = multi-double, aliases: ``feature_contrib``, ``fc``, ``fp``, ``feature_penalty``
- used to control each feature's split gain; ``gain[i] = max(0, feature_contri[i]) * gain[i]`` will be used to replace the split gain of the i-th feature
- you need to specify all features in order
- ``forcedsplits_filename`` :raw-html:`<a id="forcedsplits_filename" title="Permalink to this parameter" href="#forcedsplits_filename">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``fs``, ``forced_splits_filename``, ``forced_splits_file``, ``forced_splits``
- path to a ``.json`` file that specifies splits to force at the top of every decision tree before best-first learning commences
- ``.json`` file can be arbitrarily nested, and each split contains ``feature``, ``threshold`` fields, as well as ``left`` and ``right`` fields representing subsplits
- categorical splits are forced in a one-hot fashion, with ``left`` representing the split containing the feature value and ``right`` representing other values
- **Note**: the forced split logic will be ignored if the split makes the gain worse
- see `this file <https://github.com/microsoft/LightGBM/blob/master/examples/binary_classification/forced_splits.json>`__ as an example
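As a minimal sketch (the feature indices and thresholds below are invented), such a file could be generated from Python like this:

.. code-block:: python

    import json

    # force the root split on feature 0 at threshold 0.5; its left child
    # is then forced to split on feature 1 at threshold 10.0
    forced_split = {
        "feature": 0,
        "threshold": 0.5,
        "left": {
            "feature": 1,
            "threshold": 10.0
        }
    }

    with open("forced_splits.json", "w") as f:
        json.dump(forced_split, f)

The resulting file would then be referenced via ``forcedsplits_filename=forced_splits.json``.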
- ``refit_decay_rate`` :raw-html:`<a id="refit_decay_rate" title="Permalink to this parameter" href="#refit_decay_rate">&#x1F517;&#xFE0E;</a>`, default = ``0.9``, type = double, constraints: ``0.0 <= refit_decay_rate <= 1.0``
- decay rate of ``refit`` task, will use ``leaf_output = refit_decay_rate * old_leaf_output + (1.0 - refit_decay_rate) * new_leaf_output`` to refit trees
- used only in ``refit`` task in CLI version or as argument in ``refit`` function in language-specific package
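For example, with the default ``refit_decay_rate=0.9``, an old leaf output of ``2.0`` and a new leaf output of ``1.0`` are blended as follows (plain Python, purely to illustrate the formula):

.. code-block:: python

    refit_decay_rate = 0.9
    old_leaf_output, new_leaf_output = 2.0, 1.0

    leaf_output = refit_decay_rate * old_leaf_output + (1.0 - refit_decay_rate) * new_leaf_output
    print(leaf_output)  # 1.9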
- ``cegb_tradeoff`` :raw-html:`<a id="cegb_tradeoff" title="Permalink to this parameter" href="#cegb_tradeoff">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, constraints: ``cegb_tradeoff >= 0.0``
- cost-effective gradient boosting multiplier for all penalties
- ``cegb_penalty_split`` :raw-html:`<a id="cegb_penalty_split" title="Permalink to this parameter" href="#cegb_penalty_split">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, constraints: ``cegb_penalty_split >= 0.0``
- cost-effective gradient boosting penalty for splitting a node
- ``cegb_penalty_feature_lazy`` :raw-html:`<a id="cegb_penalty_feature_lazy" title="Permalink to this parameter" href="#cegb_penalty_feature_lazy">&#x1F517;&#xFE0E;</a>`, default = ``0,0,...,0``, type = multi-double
- cost-effective gradient boosting penalty for using a feature
- applied per data point
- ``cegb_penalty_feature_coupled`` :raw-html:`<a id="cegb_penalty_feature_coupled" title="Permalink to this parameter" href="#cegb_penalty_feature_coupled">&#x1F517;&#xFE0E;</a>`, default = ``0,0,...,0``, type = multi-double
- cost-effective gradient boosting penalty for using a feature
- applied once per forest
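A hypothetical CEGB configuration (the per-feature costs below are invented for a three-feature dataset) might look like this:

.. code-block:: python

    params = {
        "cegb_tradeoff": 0.5,       # global multiplier applied to all penalties
        "cegb_penalty_split": 0.1,  # flat cost added for every split
        # per-feature acquisition costs, charged once per forest:
        # cheap, moderate, expensive
        "cegb_penalty_feature_coupled": [0.0, 1.0, 5.0],
    }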
- ``path_smooth`` :raw-html:`<a id="path_smooth" title="Permalink to this parameter" href="#path_smooth">&#x1F517;&#xFE0E;</a>`, default = ``0``, type = double, constraints: ``path_smooth >= 0.0``
- controls smoothing applied to tree nodes
- helps prevent overfitting on leaves with few samples
- if set to zero, no smoothing is applied
- if ``path_smooth > 0`` then ``min_data_in_leaf`` must be at least ``2``
- larger values give stronger regularization
- the weight of each node is ``w * (n / path_smooth) / (n / path_smooth + 1) + w_p / (n / path_smooth + 1)``, where ``n`` is the number of samples in the node, ``w`` is the optimal node weight to minimise the loss (approximately ``-sum_gradients / sum_hessians``), and ``w_p`` is the weight of the parent node
- note that the parent output ``w_p`` itself has smoothing applied, unless it is the root node, so that the smoothing effect accumulates with the tree depth
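The smoothing formula can be sketched as a small Python helper (a toy re-implementation for intuition, not LightGBM's internal code):

.. code-block:: python

    def smoothed_node_weight(w, w_p, n, path_smooth):
        # w: optimal node weight, w_p: (already smoothed) parent weight,
        # n: number of samples in the node
        k = n / path_smooth
        return w * k / (k + 1) + w_p / (k + 1)

    # a leaf with few samples is pulled towards its parent's output
    print(smoothed_node_weight(w=1.0, w_p=0.0, n=5, path_smooth=10.0))  # ~0.33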
- ``interaction_constraints`` :raw-html:`<a id="interaction_constraints" title="Permalink to this parameter" href="#interaction_constraints">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string
- controls which features can appear in the same branch
- interaction constraints are disabled by default; to enable them, you can specify the constraints in one of the following formats
- for CLI, lists separated by commas, e.g. ``[0,1,2],[2,3]``
- for Python-package, list of lists, e.g. ``[[0, 1, 2], [2, 3]]``
- for R-package, list of character or numeric vectors, e.g. ``list(c("var1", "var2", "var3"), c("var3", "var4"))`` or ``list(c(1L, 2L, 3L), c(3L, 4L))``. Numeric vectors should use 1-based indexing, where ``1L`` is the first feature, ``2L`` is the second feature, etc
- any two features can appear in the same branch only if there exists a constraint containing both features (see the example below)
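For example, in the Python package (the feature indices here are hypothetical):

.. code-block:: python

    params = {
        # features 0, 1 and 2 may interact with each other, and features 2 and 3
        # may interact; features 0 and 3 can never appear in the same branch
        "interaction_constraints": [[0, 1, 2], [2, 3]],
    }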
- ``verbosity`` :raw-html:`<a id="verbosity" title="Permalink to this parameter" href="#verbosity">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, aliases: ``verbose``
- controls the level of LightGBM's verbosity
- ``< 0``: Fatal, ``= 0``: Error (Warning), ``= 1``: Info, ``> 1``: Debug
- ``input_model`` :raw-html:`<a id="input_model" title="Permalink to this parameter" href="#input_model">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``model_input``, ``model_in``
- filename of input model
- for ``prediction`` task, this model will be applied to prediction data
- for ``train`` task, training will be continued from this model
- **Note**: can be used only in CLI version
- ``output_model`` :raw-html:`<a id="output_model" title="Permalink to this parameter" href="#output_model">&#x1F517;&#xFE0E;</a>`, default = ``LightGBM_model.txt``, type = string, aliases: ``model_output``, ``model_out``
- filename of output model in training
- **Note**: can be used only in CLI version
- ``saved_feature_importance_type`` :raw-html:`<a id="saved_feature_importance_type" title="Permalink to this parameter" href="#saved_feature_importance_type">&#x1F517;&#xFE0E;</a>`, default = ``0``, type = int
- the feature importance type in the saved model file
- ``0``: count-based feature importance (numbers of splits are counted); ``1``: gain-based feature importance (values of gain are counted)
- **Note**: can be used only in CLI version
- ``snapshot_freq`` :raw-html:`<a id="snapshot_freq" title="Permalink to this parameter" href="#snapshot_freq">&#x1F517;&#xFE0E;</a>`, default = ``-1``, type = int, aliases: ``save_period``
- frequency of saving model file snapshot
- set this to a positive value to enable this feature. For example, the model file will be snapshotted at each iteration if ``snapshot_freq=1``
- **Note**: can be used only in CLI version
- ``use_quantized_grad`` :raw-html:`<a id="use_quantized_grad" title="Permalink to this parameter" href="#use_quantized_grad">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- whether to use gradient quantization when training
- enabling this will discretize (quantize) the gradients and hessians into bins of ``num_grad_quant_bins``
- with quantized training, most arithmetic in the training process will be done with integer operations
- gradient quantization can accelerate training, with little accuracy drop in most cases
- **Note**: can be used only with ``device_type = cpu``
- *New in version 4.0.0*
- ``num_grad_quant_bins`` :raw-html:`<a id="num_grad_quant_bins" title="Permalink to this parameter" href="#num_grad_quant_bins">&#x1F517;&#xFE0E;</a>`, default = ``4``, type = int
- number of bins used to quantize gradients and hessians
- with more bins, the quantized training will be closer to full precision training
- **Note**: can be used only with ``device_type = cpu``
- *New in version 4.0.0*
- ``quant_train_renew_leaf`` :raw-html:`<a id="quant_train_renew_leaf" title="Permalink to this parameter" href="#quant_train_renew_leaf">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- whether to renew the leaf values with original gradients when using quantized training
- renewing is very helpful for good quantized training accuracy for ranking objectives
- **Note**: can be used only with ``device_type = cpu``
- *New in version 4.0.0*
- ``stochastic_rounding`` :raw-html:`<a id="stochastic_rounding" title="Permalink to this parameter" href="#stochastic_rounding">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool
- whether to use stochastic rounding in gradient quantization
- *New in version 4.0.0*
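A minimal sketch combining the quantized training parameters above (CPU only):

.. code-block:: python

    params = {
        "device_type": "cpu",            # quantized training requires the CPU device
        "use_quantized_grad": True,
        "num_grad_quant_bins": 8,        # more bins -> closer to full-precision training
        "quant_train_renew_leaf": True,  # helpful for ranking objectives
        "stochastic_rounding": True,
    }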
IO Parameters
-------------
Dataset Parameters
~~~~~~~~~~~~~~~~~~
- ``linear_tree`` :raw-html:`<a id="linear_tree" title="Permalink to this parameter" href="#linear_tree">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``linear_trees``
- fit piecewise linear gradient boosting tree
- tree splits are chosen in the usual way, but the model at each leaf is linear instead of constant
- the linear model at each leaf includes all the numerical features in that leaf's branch
- the first tree has constant leaf values
- categorical features are used for splits as normal but are not used in the linear models
- missing values should not be encoded as ``0``. Use ``np.nan`` for Python, ``NA`` for the CLI, and ``NA``, ``NA_real_``, or ``NA_integer_`` for R
- it is recommended to rescale data before training so that features have similar mean and standard deviation
- **Note**: only works with CPU and ``serial`` tree learner
- **Note**: ``regression_l1`` objective is not supported with linear tree boosting
- **Note**: setting ``linear_tree=true`` significantly increases the memory use of LightGBM
- **Note**: if you specify ``monotone_constraints``, constraints will be enforced when choosing the split points, but not when fitting the linear models on leaves
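A sketch of fitting a piecewise linear tree model in Python, on a purely synthetic dataset:

.. code-block:: python

    import numpy as np
    import lightgbm as lgb

    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 3))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=1000)
    X[::50, 1] = np.nan  # missing values must be np.nan, not 0

    train_data = lgb.Dataset(X, label=y)
    booster = lgb.train({"objective": "regression", "linear_tree": True}, train_data)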
- ``max_bin`` :raw-html:`<a id="max_bin" title="Permalink to this parameter" href="#max_bin">&#x1F517;&#xFE0E;</a>`, default = ``255``, type = int, aliases: ``max_bins``, constraints: ``max_bin > 1``
- max number of bins that feature values will be bucketed in
- a small number of bins may reduce training accuracy but may improve generalization (help to deal with over-fitting)
- LightGBM will auto compress memory according to ``max_bin``. For example, LightGBM will use ``uint8_t`` for feature value if ``max_bin=255``
- ``max_bin_by_feature`` :raw-html:`<a id="max_bin_by_feature" title="Permalink to this parameter" href="#max_bin_by_feature">&#x1F517;&#xFE0E;</a>`, default = ``None``, type = multi-int
- max number of bins for each feature
- if not specified, will use ``max_bin`` for all features
- ``min_data_in_bin`` :raw-html:`<a id="min_data_in_bin" title="Permalink to this parameter" href="#min_data_in_bin">&#x1F517;&#xFE0E;</a>`, default = ``3``, type = int, constraints: ``min_data_in_bin > 0``
- minimal number of data inside one bin
- use this to avoid one-data-one-bin (potential over-fitting)
- ``bin_construct_sample_cnt`` :raw-html:`<a id="bin_construct_sample_cnt" title="Permalink to this parameter" href="#bin_construct_sample_cnt">&#x1F517;&#xFE0E;</a>`, default = ``200000``, type = int, aliases: ``subsample_for_bin``, constraints: ``bin_construct_sample_cnt > 0``
- number of sampled data rows used to construct feature discrete bins
- setting this to a larger value will give a better training result, but may increase data loading time
- set this to a larger value if data is very sparse
- **Note**: don't set this to small values, otherwise, you may encounter unexpected errors and poor accuracy
- ``data_random_seed`` :raw-html:`<a id="data_random_seed" title="Permalink to this parameter" href="#data_random_seed">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, aliases: ``data_seed``
- random seed for sampling data to construct histogram bins
- ``is_enable_sparse`` :raw-html:`<a id="is_enable_sparse" title="Permalink to this parameter" href="#is_enable_sparse">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool, aliases: ``is_sparse``, ``enable_sparse``, ``sparse``
- used to enable/disable sparse optimization
- ``enable_bundle`` :raw-html:`<a id="enable_bundle" title="Permalink to this parameter" href="#enable_bundle">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool, aliases: ``is_enable_bundle``, ``bundle``
- set this to ``false`` to disable Exclusive Feature Bundling (EFB), which is described in `LightGBM: A Highly Efficient Gradient Boosting Decision Tree <https://papers.nips.cc/paper_files/paper/2017/hash/6449f44a102fde848669bdd9eb6b76fa-Abstract.html>`__
- **Note**: disabling this may slow down training for sparse datasets
- ``use_missing`` :raw-html:`<a id="use_missing" title="Permalink to this parameter" href="#use_missing">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool
- set this to ``false`` to disable the special handling of missing values
- ``zero_as_missing`` :raw-html:`<a id="zero_as_missing" title="Permalink to this parameter" href="#zero_as_missing">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- set this to ``true`` to treat all zeros as missing values (including the unshown values in LibSVM / sparse matrices)
- set this to ``false`` to use ``na`` for representing missing values
- ``feature_pre_filter`` :raw-html:`<a id="feature_pre_filter" title="Permalink to this parameter" href="#feature_pre_filter">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool
- set this to ``true`` (the default) to tell LightGBM to ignore the features that are unsplittable based on ``min_data_in_leaf``
- since the dataset object is initialized only once and cannot be changed after that, you may need to set this to ``false`` when searching parameters with ``min_data_in_leaf``; otherwise, features are filtered by the first ``min_data_in_leaf`` value if you don't reconstruct the dataset object (see the sketch below)
- **Note**: setting this to ``false`` may slow down the training
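For example, when tuning ``min_data_in_leaf`` while reusing a single ``Dataset`` object, something like the following sketch (with synthetic data) may be needed:

.. code-block:: python

    import numpy as np
    import lightgbm as lgb

    X = np.random.rand(1000, 10)
    y = np.random.rand(1000)

    # disable pre-filtering so the same Dataset can be reused across
    # different min_data_in_leaf values
    train_data = lgb.Dataset(X, label=y, params={"feature_pre_filter": False})

    for min_data_in_leaf in (5, 20, 100):
        booster = lgb.train(
            {"objective": "regression", "min_data_in_leaf": min_data_in_leaf},
            train_data,
        )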
- ``pre_partition`` :raw-html:`<a id="pre_partition" title="Permalink to this parameter" href="#pre_partition">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``is_pre_partition``
- used for distributed learning (excluding the ``feature_parallel`` mode)
- ``true`` if training data are pre-partitioned, and different machines use different partitions
- ``two_round`` :raw-html:`<a id="two_round" title="Permalink to this parameter" href="#two_round">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``two_round_loading``, ``use_two_round_loading``
- set this to ``true`` if data file is too big to fit in memory
- by default, LightGBM will map the data file to memory and load features from memory. This will provide faster data loading speed, but may cause an out-of-memory error when the data file is very big
- **Note**: works only in case of loading data directly from text file
- ``header`` :raw-html:`<a id="header" title="Permalink to this parameter" href="#header">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``has_header``
- set this to ``true`` if input data has header
- **Note**: works only in case of loading data directly from text file
- ``label_column`` :raw-html:`<a id="label_column" title="Permalink to this parameter" href="#label_column">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = int or string, aliases: ``label``
- used to specify the label column
- use number for index, e.g. ``label=0`` means column\_0 is the label
- add a prefix ``name:`` for column name, e.g. ``label=name:is_click``
- if omitted, the first column in the training data is used as the label
- **Note**: works only in case of loading data directly from text file
- ``weight_column`` :raw-html:`<a id="weight_column" title="Permalink to this parameter" href="#weight_column">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = int or string, aliases: ``weight``
- used to specify the weight column
- use number for index, e.g. ``weight=0`` means column\_0 is the weight
- add a prefix ``name:`` for column name, e.g. ``weight=name:weight``
- **Note**: works only in case of loading data directly from text file
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``, e.g. when label is column\_0, and weight is column\_1, the correct parameter is ``weight=0``
- **Note**: weights should be non-negative
- ``group_column`` :raw-html:`<a id="group_column" title="Permalink to this parameter" href="#group_column">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = int or string, aliases: ``group``, ``group_id``, ``query_column``, ``query``, ``query_id``
- used to specify the query/group id column
- use number for index, e.g. ``query=0`` means column\_0 is the query id
- add a prefix ``name:`` for column name, e.g. ``query=name:query_id``
- **Note**: works only in case of loading data directly from text file
- **Note**: data should be grouped by query\_id, for more information, see `Query Data <#query-data>`__
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``, e.g. when label is column\_0 and query\_id is column\_1, the correct parameter is ``query=0``
- ``ignore_column`` :raw-html:`<a id="ignore_column" title="Permalink to this parameter" href="#ignore_column">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = multi-int or string, aliases: ``ignore_feature``, ``blacklist``
- used to specify some ignoring columns in training
- use number for index, e.g. ``ignore_column=0,1,2`` means column\_0, column\_1 and column\_2 will be ignored
- add a prefix ``name:`` for column name, e.g. ``ignore_column=name:c1,c2,c3`` means c1, c2 and c3 will be ignored
- **Note**: works only in case of loading data directly from text file
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``
- **Note**: despite the fact that specified columns will be completely ignored during the training, they still should have a valid format allowing LightGBM to load file successfully
- ``categorical_feature`` :raw-html:`<a id="categorical_feature" title="Permalink to this parameter" href="#categorical_feature">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = multi-int or string, aliases: ``cat_feature``, ``categorical_column``, ``cat_column``, ``categorical_features``
- used to specify categorical features
- use number for index, e.g. ``categorical_feature=0,1,2`` means column\_0, column\_1 and column\_2 are categorical features
- add a prefix ``name:`` for column name, e.g. ``categorical_feature=name:c1,c2,c3`` means c1, c2 and c3 are categorical features
- **Note**: all values will be cast to ``int32`` (integer codes will be extracted from pandas categoricals in the Python-package)
- **Note**: index starts from ``0`` and it doesn't count the label column when passing type is ``int``
- **Note**: all values should be less than ``Int32.MaxValue`` (2147483647)
- **Note**: using large values could be memory consuming. Tree decision rule works best when categorical features are presented by consecutive integers starting from zero
- **Note**: all negative values will be treated as **missing values**
- **Note**: the output cannot be monotonically constrained with respect to a categorical feature
- **Note**: floating point numbers in categorical features will be rounded towards 0
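For example, in the Python package categorical features can be marked directly on a pandas DataFrame (synthetic data, hypothetical column names):

.. code-block:: python

    import numpy as np
    import pandas as pd
    import lightgbm as lgb

    df = pd.DataFrame({
        "colour": pd.Categorical(["red", "blue", "green", "red"] * 25),
        "size": np.arange(100, dtype=float),
    })
    y = np.arange(100) % 2

    # integer codes are extracted automatically from the pandas categorical
    train_data = lgb.Dataset(df, label=y, categorical_feature=["colour"])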
- ``forcedbins_filename`` :raw-html:`<a id="forcedbins_filename" title="Permalink to this parameter" href="#forcedbins_filename">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string
- path to a ``.json`` file that specifies bin upper bounds for some or all features
- ``.json`` file should contain an array of objects, each containing the word ``feature`` (integer feature index) and ``bin_upper_bound`` (array of thresholds for binning)
- see `this file <https://github.com/microsoft/LightGBM/blob/master/examples/regression/forced_bins.json>`__ as an example
- ``save_binary`` :raw-html:`<a id="save_binary" title="Permalink to this parameter" href="#save_binary">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``is_save_binary``, ``is_save_binary_file``
- if ``true``, LightGBM will save the dataset (including validation data) to a binary file. This speeds up data loading the next time
- **Note**: ``init_score`` is not saved in binary file
- **Note**: can be used only in CLI version; for language-specific packages you can use the correspondent function
- ``precise_float_parser`` :raw-html:`<a id="precise_float_parser" title="Permalink to this parameter" href="#precise_float_parser">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- use precise floating point number parsing for text parser (e.g. CSV, TSV, LibSVM input)
- **Note**: setting this to ``true`` may lead to much slower text parsing
- ``parser_config_file`` :raw-html:`<a id="parser_config_file" title="Permalink to this parameter" href="#parser_config_file">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string
- path to a ``.json`` file that specifies customized parser initialized configuration
- see `lightgbm-transform <https://github.com/microsoft/lightgbm-transform>`__ for usage examples
- **Note**: ``lightgbm-transform`` is not maintained by LightGBM's maintainers. Bug reports or feature requests should go to `issues page <https://github.com/microsoft/lightgbm-transform/issues>`__
- *New in version 4.0.0*
Predict Parameters
~~~~~~~~~~~~~~~~~~
- ``start_iteration_predict`` :raw-html:`<a id="start_iteration_predict" title="Permalink to this parameter" href="#start_iteration_predict">&#x1F517;&#xFE0E;</a>`, default = ``0``, type = int
- used only in ``prediction`` task
- used to specify from which iteration to start the prediction
- ``<= 0`` means from the first iteration
- ``num_iteration_predict`` :raw-html:`<a id="num_iteration_predict" title="Permalink to this parameter" href="#num_iteration_predict">&#x1F517;&#xFE0E;</a>`, default = ``-1``, type = int
- used only in ``prediction`` task
- used to specify how many trained iterations will be used in prediction
- ``<= 0`` means no limit
- ``predict_raw_score`` :raw-html:`<a id="predict_raw_score" title="Permalink to this parameter" href="#predict_raw_score">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``is_predict_raw_score``, ``predict_rawscore``, ``raw_score``
- used only in ``prediction`` task
- set this to ``true`` to predict only the raw scores
- set this to ``false`` to predict transformed scores
- ``predict_leaf_index`` :raw-html:`<a id="predict_leaf_index" title="Permalink to this parameter" href="#predict_leaf_index">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``is_predict_leaf_index``, ``leaf_index``
- used only in ``prediction`` task
- set this to ``true`` to predict with leaf index of all trees
- ``predict_contrib`` :raw-html:`<a id="predict_contrib" title="Permalink to this parameter" href="#predict_contrib">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``is_predict_contrib``, ``contrib``
- used only in ``prediction`` task
- set this to ``true`` to estimate `SHAP values <https://arxiv.org/abs/1706.06060>`__, which represent how each feature contributes to each prediction
- produces ``#features + 1`` values where the last value is the expected value of the model output over the training data
- **Note**: if you want to get more explanation for your model's predictions using SHAP values like SHAP interaction values, you can install `shap package <https://github.com/shap>`__
- **Note**: unlike the shap package, with ``predict_contrib`` we return a matrix with an extra column, where the last column is the expected value
- **Note**: this feature is not implemented for linear trees
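In the Python package the same functionality is exposed through the ``pred_contrib`` argument of ``predict``; a minimal sketch on synthetic data:

.. code-block:: python

    import numpy as np
    import lightgbm as lgb

    X = np.random.rand(500, 4)
    y = np.random.rand(500)
    booster = lgb.train({"objective": "regression"}, lgb.Dataset(X, label=y))

    # shape is (n_samples, n_features + 1); the last column is the expected value
    contribs = booster.predict(X, pred_contrib=True)
    shap_values = contribs[:, :-1]
    expected_value = contribs[0, -1]  # identical for every row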
- ``predict_disable_shape_check`` :raw-html:`<a id="predict_disable_shape_check" title="Permalink to this parameter" href="#predict_disable_shape_check">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only in ``prediction`` task
- controls whether or not LightGBM raises an error when you try to predict on data with a different number of features than the training data
- if ``false`` (the default), a fatal error will be raised if the number of features in the dataset you predict on differs from the number seen during training
- if ``true``, LightGBM will attempt to predict on whatever data you provide. This is dangerous because you might get incorrect predictions, but you could use it in situations where it is difficult or expensive to generate some features and you are very confident that they were never chosen for splits in the model
- **Note**: be very careful setting this parameter to ``true``
- ``pred_early_stop`` :raw-html:`<a id="pred_early_stop" title="Permalink to this parameter" href="#pred_early_stop">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only in ``prediction`` task
- used only in ``classification`` and ``ranking`` applications
- used only for predicting normal or raw scores
- if ``true``, will use early-stopping to speed up the prediction. May affect the accuracy
- **Note**: cannot be used with ``rf`` boosting type or custom objective function
- ``pred_early_stop_freq`` :raw-html:`<a id="pred_early_stop_freq" title="Permalink to this parameter" href="#pred_early_stop_freq">&#x1F517;&#xFE0E;</a>`, default = ``10``, type = int
- used only in ``prediction`` task
- the frequency of checking early-stopping prediction
- ``pred_early_stop_margin`` :raw-html:`<a id="pred_early_stop_margin" title="Permalink to this parameter" href="#pred_early_stop_margin">&#x1F517;&#xFE0E;</a>`, default = ``10.0``, type = double
- used only in ``prediction`` task
- the threshold of margin in early-stopping prediction
- ``output_result`` :raw-html:`<a id="output_result" title="Permalink to this parameter" href="#output_result">&#x1F517;&#xFE0E;</a>`, default = ``LightGBM_predict_result.txt``, type = string, aliases: ``predict_result``, ``prediction_result``, ``predict_name``, ``prediction_name``, ``pred_name``, ``name_pred``
- used only in ``prediction`` task
- filename of prediction result
- **Note**: can be used only in CLI version
Convert Parameters
~~~~~~~~~~~~~~~~~~
- ``convert_model_language`` :raw-html:`<a id="convert_model_language" title="Permalink to this parameter" href="#convert_model_language">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string
- used only in ``convert_model`` task
- only ``cpp`` is supported yet; to convert the model to other languages, consider using the `m2cgen <https://github.com/BayesWitnesses/m2cgen>`__ utility
- if ``convert_model_language`` is set and ``task=train``, the model will also be converted
- **Note**: can be used only in CLI version
- ``convert_model`` :raw-html:`<a id="convert_model" title="Permalink to this parameter" href="#convert_model">&#x1F517;&#xFE0E;</a>`, default = ``gbdt_prediction.cpp``, type = string, aliases: ``convert_model_file``
- used only in ``convert_model`` task
- output filename of converted model
- **Note**: can be used only in CLI version
Objective Parameters
--------------------
- ``objective_seed`` :raw-html:`<a id="objective_seed" title="Permalink to this parameter" href="#objective_seed">&#x1F517;&#xFE0E;</a>`, default = ``5``, type = int
- used only in ``rank_xendcg`` objective
- random seed for objectives, if random process is needed
- ``num_class`` :raw-html:`<a id="num_class" title="Permalink to this parameter" href="#num_class">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, aliases: ``num_classes``, constraints: ``num_class > 0``
- used only in ``multi-class`` classification application
- ``is_unbalance`` :raw-html:`<a id="is_unbalance" title="Permalink to this parameter" href="#is_unbalance">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``unbalance``, ``unbalanced_sets``
- used only in ``binary`` and ``multiclassova`` applications
- set this to ``true`` if training data are unbalanced
- **Note**: while enabling this should increase the overall performance metric of your model, it will also result in poor estimates of the individual class probabilities
- **Note**: this parameter cannot be used at the same time with ``scale_pos_weight``, choose only **one** of them
- ``scale_pos_weight`` :raw-html:`<a id="scale_pos_weight" title="Permalink to this parameter" href="#scale_pos_weight">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, constraints: ``scale_pos_weight > 0.0``
- used only in ``binary`` and ``multiclassova`` applications
- weight of labels with positive class
- **Note**: while enabling this should increase the overall performance metric of your model, it will also result in poor estimates of the individual class probabilities
- **Note**: this parameter cannot be used at the same time with ``is_unbalance``, choose only **one** of them
- ``sigmoid`` :raw-html:`<a id="sigmoid" title="Permalink to this parameter" href="#sigmoid">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, constraints: ``sigmoid > 0.0``
- used only in ``binary`` and ``multiclassova`` classification and in ``lambdarank`` applications
- parameter for the sigmoid function
- ``boost_from_average`` :raw-html:`<a id="boost_from_average" title="Permalink to this parameter" href="#boost_from_average">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool
- used only in ``regression``, ``binary``, ``multiclassova`` and ``cross-entropy`` applications
- adjusts initial score to the mean of labels for faster convergence
- ``reg_sqrt`` :raw-html:`<a id="reg_sqrt" title="Permalink to this parameter" href="#reg_sqrt">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- used only in ``regression`` application
- used to fit ``sqrt(label)`` instead of original values; the prediction result will also be automatically converted back to ``prediction^2``
- might be useful in case of large-range labels
- ``alpha`` :raw-html:`<a id="alpha" title="Permalink to this parameter" href="#alpha">&#x1F517;&#xFE0E;</a>`, default = ``0.9``, type = double, constraints: ``alpha > 0.0``
- used only in ``huber`` and ``quantile`` ``regression`` applications
- parameter for `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__ and `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__
- ``fair_c`` :raw-html:`<a id="fair_c" title="Permalink to this parameter" href="#fair_c">&#x1F517;&#xFE0E;</a>`, default = ``1.0``, type = double, constraints: ``fair_c > 0.0``
- used only in ``fair`` ``regression`` application
- parameter for `Fair loss <https://www.kaggle.com/c/allstate-claims-severity/discussion/24520>`__
- ``poisson_max_delta_step`` :raw-html:`<a id="poisson_max_delta_step" title="Permalink to this parameter" href="#poisson_max_delta_step">&#x1F517;&#xFE0E;</a>`, default = ``0.7``, type = double, constraints: ``poisson_max_delta_step > 0.0``
- used only in ``poisson`` ``regression`` application
- parameter for `Poisson regression <https://en.wikipedia.org/wiki/Poisson_regression>`__ to safeguard optimization
- ``tweedie_variance_power`` :raw-html:`<a id="tweedie_variance_power" title="Permalink to this parameter" href="#tweedie_variance_power">&#x1F517;&#xFE0E;</a>`, default = ``1.5``, type = double, constraints: ``1.0 <= tweedie_variance_power < 2.0``
- used only in ``tweedie`` ``regression`` application
- used to control the variance of the tweedie distribution
- set this closer to ``2`` to shift towards a **Gamma** distribution
- set this closer to ``1`` to shift towards a **Poisson** distribution
- ``lambdarank_truncation_level`` :raw-html:`<a id="lambdarank_truncation_level" title="Permalink to this parameter" href="#lambdarank_truncation_level">&#x1F517;&#xFE0E;</a>`, default = ``30``, type = int, constraints: ``lambdarank_truncation_level > 0``
- used only in ``lambdarank`` application
- controls the number of top-results to focus on during training, refer to "truncation level" in the Sec. 3 of `LambdaMART paper <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf>`__
- this parameter is closely related to the desirable cutoff ``k`` in the metric **NDCG@k** that we aim at optimizing the ranker for. The optimal setting for this parameter is likely to be slightly higher than ``k`` (e.g., ``k + 3``) to include more pairs of documents to train on, but perhaps not too high to avoid deviating too much from the desired target metric **NDCG@k**
- ``lambdarank_norm`` :raw-html:`<a id="lambdarank_norm" title="Permalink to this parameter" href="#lambdarank_norm">&#x1F517;&#xFE0E;</a>`, default = ``true``, type = bool
- used only in ``lambdarank`` application
- set this to ``true`` to normalize the lambdas for different queries, and improve the performance for unbalanced data
- set this to ``false`` to enforce the original lambdarank algorithm
- ``label_gain`` :raw-html:`<a id="label_gain" title="Permalink to this parameter" href="#label_gain">&#x1F517;&#xFE0E;</a>`, default = ``0,1,3,7,15,31,63,...,2^30-1``, type = multi-double
- used only in ``lambdarank`` application
- relevant gain for labels. For example, the gain of label ``2`` is ``3`` in case of default label gains
- separate by ``,``
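A hypothetical ranking configuration tying these parameters together, aimed at optimizing **NDCG@10**:

.. code-block:: python

    params = {
        "objective": "lambdarank",
        "metric": "ndcg",
        "eval_at": [10],
        "lambdarank_truncation_level": 13,  # slightly above the target cutoff k=10
        "label_gain": [0, 1, 3, 7, 15],     # gains for relevance labels 0..4
    }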
- ``lambdarank_position_bias_regularization`` :raw-html:`<a id="lambdarank_position_bias_regularization" title="Permalink to this parameter" href="#lambdarank_position_bias_regularization">&#x1F517;&#xFE0E;</a>`, default = ``0.0``, type = double, constraints: ``lambdarank_position_bias_regularization >= 0.0``
- used only in ``lambdarank`` application when positional information is provided and position bias is modeled. Larger values reduce the inferred position bias factors.
- *New in version 4.1.0*
Metric Parameters
-----------------
- ``metric`` :raw-html:`<a id="metric" title="Permalink to this parameter" href="#metric">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = multi-enum, aliases: ``metrics``, ``metric_types``
- metric(s) to be evaluated on the evaluation set(s)
- ``""`` (empty string or not specified) means that metric corresponding to specified ``objective`` will be used (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)
- ``"None"`` (string, **not** a ``None`` value) means that no metric will be registered, aliases: ``na``, ``null``, ``custom``
- ``l1``, absolute loss, aliases: ``mean_absolute_error``, ``mae``, ``regression_l1``
- ``l2``, square loss, aliases: ``mean_squared_error``, ``mse``, ``regression_l2``, ``regression``
- ``rmse``, root square loss, aliases: ``root_mean_squared_error``, ``l2_root``
- ``quantile``, `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__
- ``mape``, `MAPE loss <https://en.wikipedia.org/wiki/Mean_absolute_percentage_error>`__, aliases: ``mean_absolute_percentage_error``
- ``huber``, `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__
- ``fair``, `Fair loss <https://www.kaggle.com/c/allstate-claims-severity/discussion/24520>`__
- ``poisson``, negative log-likelihood for `Poisson regression <https://en.wikipedia.org/wiki/Poisson_regression>`__
- ``gamma``, negative log-likelihood for **Gamma** regression
- ``gamma_deviance``, residual deviance for **Gamma** regression
- ``tweedie``, negative log-likelihood for **Tweedie** regression
- ``ndcg``, `NDCG <https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG>`__, aliases: ``lambdarank``, ``rank_xendcg``, ``xendcg``, ``xe_ndcg``, ``xe_ndcg_mart``, ``xendcg_mart``
- ``map``, `MAP <https://makarandtapaswi.wordpress.com/2012/07/02/intuition-behind-average-precision-and-map/>`__, aliases: ``mean_average_precision``
- ``auc``, `AUC <https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve>`__
- ``average_precision``, `average precision score <https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html>`__
- ``binary_logloss``, `log loss <https://en.wikipedia.org/wiki/Cross_entropy>`__, aliases: ``binary``
- ``binary_error``, for one sample: ``0`` for correct classification, ``1`` for error classification
- ``auc_mu``, `AUC-mu <http://proceedings.mlr.press/v97/kleiman19a/kleiman19a.pdf>`__
- ``multi_logloss``, log loss for multi-class classification, aliases: ``multiclass``, ``softmax``, ``multiclassova``, ``multiclass_ova``, ``ova``, ``ovr``
- ``multi_error``, error rate for multi-class classification
- ``cross_entropy``, cross-entropy (with optional linear weights), aliases: ``xentropy``
- ``cross_entropy_lambda``, "intensity-weighted" cross-entropy, aliases: ``xentlambda``
- ``kullback_leibler``, `Kullback-Leibler divergence <https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence>`__, aliases: ``kldiv``
- support multiple metrics, separated by ``,``
- ``metric_freq`` :raw-html:`<a id="metric_freq" title="Permalink to this parameter" href="#metric_freq">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, aliases: ``output_freq``, constraints: ``metric_freq > 0``
- frequency for metric output
- **Note**: can be used only in CLI version
- ``is_provide_training_metric`` :raw-html:`<a id="is_provide_training_metric" title="Permalink to this parameter" href="#is_provide_training_metric">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool, aliases: ``training_metric``, ``is_training_metric``, ``train_metric``
- set this to ``true`` to output metric result over training dataset
- **Note**: can be used only in CLI version
- ``eval_at`` :raw-html:`<a id="eval_at" title="Permalink to this parameter" href="#eval_at">&#x1F517;&#xFE0E;</a>`, default = ``1,2,3,4,5``, type = multi-int, aliases: ``ndcg_eval_at``, ``ndcg_at``, ``map_eval_at``, ``map_at``
- used only with ``ndcg`` and ``map`` metrics
- `NDCG <https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG>`__ and `MAP <https://makarandtapaswi.wordpress.com/2012/07/02/intuition-behind-average-precision-and-map/>`__ evaluation positions, separated by ``,``
- ``multi_error_top_k`` :raw-html:`<a id="multi_error_top_k" title="Permalink to this parameter" href="#multi_error_top_k">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, constraints: ``multi_error_top_k > 0``
- used only with ``multi_error`` metric
- threshold for top-k multi-error metric
- the error on each sample is ``0`` if the true class is among the top ``multi_error_top_k`` predictions, and ``1`` otherwise
- more precisely, the error on a sample is ``0`` if there are at least ``num_classes - multi_error_top_k`` predictions strictly less than the prediction on the true class
- when ``multi_error_top_k=1`` this is equivalent to the usual multi-error metric
- ``auc_mu_weights`` :raw-html:`<a id="auc_mu_weights" title="Permalink to this parameter" href="#auc_mu_weights">&#x1F517;&#xFE0E;</a>`, default = ``None``, type = multi-double
- used only with ``auc_mu`` metric
- list representing flattened matrix (in row-major order) giving loss weights for classification errors
- list should have ``n * n`` elements, where ``n`` is the number of classes
- the matrix coordinate ``[i, j]`` should correspond to the ``i * n + j``-th element of the list
- if not specified, will use equal weights for all classes
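For instance, a 3-class weight matrix can be flattened in row-major order like this (the weight values are invented):

.. code-block:: python

    import numpy as np

    # weight_matrix[i][j] is the loss weight for classifying class i as class j
    weight_matrix = np.array([
        [0.0, 1.0, 2.0],
        [1.0, 0.0, 1.0],
        [2.0, 1.0, 0.0],
    ])

    params = {
        "objective": "multiclass",
        "num_class": 3,
        "metric": "auc_mu",
        "auc_mu_weights": weight_matrix.flatten().tolist(),
    }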
Network Parameters
------------------
- ``num_machines`` :raw-html:`<a id="num_machines" title="Permalink to this parameter" href="#num_machines">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, aliases: ``num_machine``, constraints: ``num_machines > 0``
- the number of machines for distributed learning application
- this parameter needs to be set in both **socket** and **mpi** versions
- ``local_listen_port`` :raw-html:`<a id="local_listen_port" title="Permalink to this parameter" href="#local_listen_port">&#x1F517;&#xFE0E;</a>`, default = ``12400 (random for Dask-package)``, type = int, aliases: ``local_port``, ``port``, constraints: ``local_listen_port > 0``
- TCP listen port for local machines
- **Note**: don't forget to allow this port in firewall settings before training
- ``time_out`` :raw-html:`<a id="time_out" title="Permalink to this parameter" href="#time_out">&#x1F517;&#xFE0E;</a>`, default = ``120``, type = int, constraints: ``time_out > 0``
- socket time-out in minutes
- ``machine_list_filename`` :raw-html:`<a id="machine_list_filename" title="Permalink to this parameter" href="#machine_list_filename">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``machine_list_file``, ``machine_list``, ``mlist``
- path of file that lists machines for this distributed learning application
- each line contains one IP and one port for one machine. The format is ``ip port`` (space as a separator)
- **Note**: can be used only in CLI version
- ``machines`` :raw-html:`<a id="machines" title="Permalink to this parameter" href="#machines">&#x1F517;&#xFE0E;</a>`, default = ``""``, type = string, aliases: ``workers``, ``nodes``
- list of machines in the following format: ``ip1:port1,ip2:port2``
GPU Parameters
--------------
- ``gpu_platform_id`` :raw-html:`<a id="gpu_platform_id" title="Permalink to this parameter" href="#gpu_platform_id">&#x1F517;&#xFE0E;</a>`, default = ``-1``, type = int
- OpenCL platform ID. Usually each GPU vendor exposes one OpenCL platform
- ``-1`` means the system-wide default platform
- **Note**: refer to `GPU Targets <./GPU-Targets.rst#query-opencl-devices-in-your-system>`__ for more details
- ``gpu_device_id`` :raw-html:`<a id="gpu_device_id" title="Permalink to this parameter" href="#gpu_device_id">&#x1F517;&#xFE0E;</a>`, default = ``-1``, type = int
- OpenCL device ID in the specified platform. Each GPU in the selected platform has a unique device ID
- ``-1`` means the default device in the selected platform
- **Note**: refer to `GPU Targets <./GPU-Targets.rst#query-opencl-devices-in-your-system>`__ for more details
- ``gpu_use_dp`` :raw-html:`<a id="gpu_use_dp" title="Permalink to this parameter" href="#gpu_use_dp">&#x1F517;&#xFE0E;</a>`, default = ``false``, type = bool
- set this to ``true`` to use double precision math on GPU (by default single precision is used)
- **Note**: can be used only in OpenCL implementation, in CUDA implementation only double precision is currently supported
- ``num_gpu`` :raw-html:`<a id="num_gpu" title="Permalink to this parameter" href="#num_gpu">&#x1F517;&#xFE0E;</a>`, default = ``1``, type = int, constraints: ``num_gpu > 0``
- number of GPUs
- **Note**: can be used only in CUDA implementation
.. end params list
Others
------
Continued Training with Input Score
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
LightGBM supports continued training with initial scores. It uses an additional file to store these initial scores, like the following:
::
0.5
-0.1
0.9
...
It means the initial score of the first data row is ``0.5``, the second is ``-0.1``, and so on.

The initial score file corresponds to the data file line by line, with one score per line.

If the name of the data file is ``train.txt``, the initial score file should be named ``train.txt.init`` and placed in the same folder as the data file.

In this case, LightGBM will load the initial score file automatically if it exists.

If binary data files exist for the raw data file ``train.txt``, for example under the name ``train.txt.bin``, then the initial score file should be named ``train.txt.bin.init``.
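For the Python package, initial scores can also be passed in memory through the ``init_score`` argument of ``Dataset``, instead of through a companion ``.init`` file. A minimal sketch with made-up data (the arrays below are only illustrative):

.. code-block:: python

    import lightgbm as lgb
    import numpy as np

    # toy data: 100 rows, 5 features (illustrative only)
    X = np.random.rand(100, 5)
    y = np.random.rand(100)
    init_scores = np.full(100, 0.5)  # one initial score per data row

    # equivalent to placing train.txt.init next to train.txt
    train_data = lgb.Dataset(X, label=y, init_score=init_scores)
    booster = lgb.train({"objective": "regression"}, train_data, num_boost_round=10)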
Weight Data
~~~~~~~~~~~
LightGBM supports weighted training. It uses an additional file to store weight data, like the following:
::
1.0
0.5
0.8
...
It means the weight of the first data row is ``1.0``, the second is ``0.5``, and so on. Weights should be non-negative.

The weight file corresponds to the data file line by line, with one weight per line.

If the name of the data file is ``train.txt``, the weight file should be named ``train.txt.weight`` and placed in the same folder as the data file.

In this case, LightGBM will load the weight file automatically if it exists.

Also, you can include a weight column in your data file. Please refer to the ``weight_column`` `parameter <#weight_column>`__ above.
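In the Python package, weights can likewise be supplied in memory through the ``weight`` argument of ``Dataset``. A minimal sketch with made-up data:

.. code-block:: python

    import lightgbm as lgb
    import numpy as np

    # toy binary classification data (illustrative only)
    X = np.random.rand(100, 5)
    y = np.random.randint(2, size=100)
    w = np.random.uniform(0.5, 1.0, size=100)  # non-negative, one weight per row

    # equivalent to placing train.txt.weight next to train.txt
    train_data = lgb.Dataset(X, label=y, weight=w)
    booster = lgb.train({"objective": "binary"}, train_data, num_boost_round=10)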
Query Data
~~~~~~~~~~
For learning to rank, LightGBM needs query information for the training data.
LightGBM uses an additional file to store query data, like the following:
::
27
18
67
...
For wrapper libraries such as the Python and R packages, this information can also be provided as an array-like via the Dataset parameter ``group``.
::
[27, 18, 67, ...]
For example, if you have a 112-document dataset with ``group = [27, 18, 67]``, that means that you have 3 groups, where the first 27 records are in the first group, records 28-45 are in the second group, and records 46-112 are in the third group.
**Note**: data should be ordered by the query.
If the name of the data file is ``train.txt``, the query file should be named ``train.txt.query`` and placed in the same folder as the data file.

In this case, LightGBM will load the query file automatically if it exists.

Also, you can include a query/group id column in your data file. Please refer to the ``group_column`` `parameter <#group_column>`__ above.
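A minimal Python sketch for the ranking case, mirroring the 112-document example above; note that ``group`` holds the size of each query, not query ids (the data here is made up for illustration):

.. code-block:: python

    import lightgbm as lgb
    import numpy as np

    # toy ranking data: 112 documents, 5 features (illustrative only)
    X = np.random.rand(112, 5)
    y = np.random.randint(5, size=112)  # integer relevance grades, e.g. 0-4

    # three queries of sizes 27, 18 and 67; rows must be ordered by query
    train_data = lgb.Dataset(X, label=y, group=[27, 18, 67])
    booster = lgb.train({"objective": "lambdarank"}, train_data, num_boost_round=10)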