For the Python and R packages, any parameters that accept a list of values (usually they have ``multi-xxx`` type, e.g. ``multi-int`` or ``multi-double``) can be specified in those languages' default array types.
For example, ``monotone_constraints`` can be specified as follows.
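A minimal Python sketch, assuming a dataset with exactly three features::

    params = {
        # decreasing constraint on the 1st feature, none on the 2nd, increasing on the 3rd
        "monotone_constraints": [-1, 0, 1]
    }

The R equivalent would use a plain vector, e.g. ``list(monotone_constraints = c(-1, 0, 1))``.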
- ``config`` :raw-html:`<a id="config" title="Permalink to this parameter" href="#config">🔗︎</a>`, default = ``""``, type = string, aliases: ``config_file``
- ``save_binary``, load train (and validation) data then save dataset to binary file. Typical usage: ``save_binary`` first, then run multiple ``train`` tasks in parallel using the saved binary file
- ``gamma``, Gamma regression with log-link. It might be useful, e.g., for modeling insurance claims severity, or for any target that might be `gamma-distributed <https://en.wikipedia.org/wiki/Gamma_distribution#Occurrence_and_applications>`__
- ``tweedie``, Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any target that might be `tweedie-distributed <https://en.wikipedia.org/wiki/Tweedie_distribution#Occurrence_and_applications>`__
- ``lambdarank``, `lambdarank <https://proceedings.neurips.cc/paper_files/paper/2006/file/af44c4c56f385c43f2529f9b1b018f6a-Paper.pdf>`__ objective. `label_gain <#label_gain>`__ can be used to set the gain (weight) of ``int`` labels, and all values in ``label`` must be smaller than the number of elements in ``label_gain``
- for the best speed, set this to the number of **real CPU cores**, not the number of threads (most CPUs use `hyper-threading <https://en.wikipedia.org/wiki/Hyper-threading>`__ to generate 2 threads per CPU core)
- **Note**: please **don't** change this during training, especially when running multiple jobs simultaneously via external packages, otherwise it may cause undesirable errors
- **Note**: for faster speed, the GPU uses 32-bit floating point for summation by default, so this may affect the accuracy for some tasks. You can set ``gpu_use_dp=true`` to enable 64-bit floating point, but it will slow down training
- **Note**: refer to the `Installation Guide <./Installation-Guide.rst#build-gpu-version>`__ to build LightGBM with GPU support
- ``deterministic`` :raw-html:`<a id="deterministic" title="Permalink to this parameter" href="#deterministic">🔗︎</a>`, default = ``false``, type = bool
- used only with ``cpu`` device type
- setting this to ``true`` should ensure stable results when using the same data and the same parameters (even with different ``num_threads``)
- when you use different seeds, different LightGBM versions, binaries compiled by different compilers, or different systems, the results are expected to be different
- you can `raise issues <https://github.com/microsoft/LightGBM/issues>`__ in the LightGBM GitHub repo if you encounter unstable results
- **Note**: setting this to ``true`` may slow down training
- **Note**: to avoid potential instability due to numerical issues, please set ``force_col_wise=true`` or ``force_row_wise=true`` when setting ``deterministic=true``
- ``force_col_wise`` :raw-html:`<a id="force_col_wise" title="Permalink to this parameter" href="#force_col_wise">🔗︎</a>`, default = ``false``, type = bool
- **Note**: when both ``force_col_wise`` and ``force_row_wise`` are ``false``, LightGBM will first try both of them, and then use the faster one. To remove the overhead of this test, set the faster one to ``true`` manually
- **Note**: this parameter cannot be used together with ``force_row_wise``, choose only one of them
- ``force_row_wise`` :raw-html:`<a id="force_row_wise" title="Permalink to this parameter" href="#force_row_wise">🔗︎</a>`, default = ``false``, type = bool
- **Note**: setting this to ``true`` will double the memory cost for the Dataset object. If you do not have enough memory, you can try setting ``force_col_wise=true``
- **Note**: when both ``force_col_wise`` and ``force_row_wise`` are ``false``, LightGBM will first try both of them, and then use the faster one. To remove the overhead of this test, set the faster one to ``true`` manually
- ``histogram_pool_size`` :raw-html:`<a id="histogram_pool_size" title="Permalink to this parameter" href="#histogram_pool_size">🔗︎</a>`, default = ``-1.0``, type = double, aliases: ``hist_pool_size``
- **Note**: this is an approximation based on the Hessian, so occasionally you may observe splits which produce leaf nodes that have fewer than this many observations
- ``bagging_freq`` :raw-html:`<a id="bagging_freq" title="Permalink to this parameter" href="#bagging_freq">🔗︎</a>`, default = ``0``, type = int, aliases: ``subsample_freq``
- ``0`` means disable bagging; ``k`` means perform bagging every ``k`` iterations. Every ``k``-th iteration, LightGBM will randomly select ``bagging_fraction * 100 %`` of the data to use for the next ``k`` iterations
- ``bagging_seed`` :raw-html:`<a id="bagging_seed" title="Permalink to this parameter" href="#bagging_seed">🔗︎</a>`, default = ``3``, type = int, aliases: ``bagging_fraction_seed``
- LightGBM will randomly select a subset of features on each iteration (tree) if ``feature_fraction`` is smaller than ``1.0``. For example, if you set it to ``0.8``, LightGBM will select 80% of features before training each tree
- LightGBM will randomly select a subset of features on each tree node if ``feature_fraction_bynode`` is smaller than ``1.0``. For example, if you set it to ``0.8``, LightGBM will select 80% of features at each tree node
- **Note**: unlike ``feature_fraction``, this cannot speed up training
- **Note**: if both ``feature_fraction`` and ``feature_fraction_bynode`` are smaller than ``1.0``, the final fraction of each node is ``feature_fraction * feature_fraction_bynode``
- ``feature_fraction_seed`` :raw-html:`<a id="feature_fraction_seed" title="Permalink to this parameter" href="#feature_fraction_seed">🔗︎</a>`, default = ``2``, type = int
- ``extra_trees`` :raw-html:`<a id="extra_trees" title="Permalink to this parameter" href="#extra_trees">🔗︎</a>`, default = ``false``, type = bool, aliases: ``extra_tree``
- ``extra_seed`` :raw-html:`<a id="extra_seed" title="Permalink to this parameter" href="#extra_seed">🔗︎</a>`, default = ``6``, type = int
- random seed for selecting thresholds when ``extra_trees`` is true
- ``first_metric_only`` :raw-html:`<a id="first_metric_only" title="Permalink to this parameter" href="#first_metric_only">🔗︎</a>`, default = ``false``, type = bool
- linear tree regularization, corresponds to the parameter ``lambda`` in Eq. 3 of `Gradient Boosting with Piece-Wise Linear Regression Trees <https://arxiv.org/pdf/1802.05640.pdf>`__
- ``xgboost_dart_mode`` :raw-html:`<a id="xgboost_dart_mode" title="Permalink to this parameter" href="#xgboost_dart_mode">🔗︎</a>`, default = ``false``, type = bool
- ``uniform_drop`` :raw-html:`<a id="uniform_drop" title="Permalink to this parameter" href="#uniform_drop">🔗︎</a>`, default = ``false``, type = bool
- limit number of split points considered for categorical features. See `the documentation on how LightGBM finds optimal splits for categorical features <./Features.rst#optimal-split-for-categorical-features>`_ for more details
- you need to specify all features in order. For example, ``mc=-1,0,1`` means decreasing for the 1st feature, no constraint for the 2nd feature, and increasing for the 3rd feature
- ``intermediate``, a `more advanced method <https://hal.science/hal-02862802/document>`__, which may slow the library very slightly. However, this method is much less constraining than the basic method and should significantly improve the results
- ``advanced``, an `even more advanced method <https://hal.science/hal-02862802/document>`__, which may slow the library. However, this method is even less constraining than the intermediate method and should again significantly improve the results
- `monotone penalty <https://hal.science/hal-02862802/document>`__: a penalization parameter X forbids any monotone splits on the first X (rounded down) level(s) of the tree. The penalty applied to monotone splits on a given depth is a continuous, increasing function of the penalization parameter
- path to a ``.json`` file that specifies splits to force at the top of every decision tree before best-first learning commences (see the example below)
- the ``.json`` file can be arbitrarily nested, and each split contains ``feature`` and ``threshold`` fields, as well as ``left`` and ``right`` fields representing subsplits
- categorical splits are forced in a one-hot fashion, with ``left`` representing the split containing the feature value and ``right`` representing other values
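For illustration, a minimal forced-splits file might look like the following sketch (the feature indices and thresholds are hypothetical)::

    {
        "feature": 0,
        "threshold": 0.5,
        "right": {
            "feature": 2,
            "threshold": 10.0
        }
    }

Here the root of every tree is forced to split on feature ``0`` at ``0.5``, its right child is forced to split on feature ``2`` at ``10.0``, and branches without a forced subsplit are learned as usual.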
- ``refit_decay_rate`` :raw-html:`<a id="refit_decay_rate" title="Permalink to this parameter" href="#refit_decay_rate">🔗︎</a>`, default = ``0.9``, type = double, constraints: ``0.0 <= refit_decay_rate <= 1.0``
- decay rate of ``refit`` task, will use ``leaf_output = refit_decay_rate * old_leaf_output + (1.0 - refit_decay_rate) * new_leaf_output`` to refit trees
- used only in ``refit`` task in CLI version or as argument in ``refit`` function in language-specific package
- ``cegb_tradeoff`` :raw-html:`<a id="cegb_tradeoff" title="Permalink to this parameter" href="#cegb_tradeoff">🔗︎</a>`, default = ``1.0``, type = double, constraints: ``cegb_tradeoff >= 0.0``
- cost-effective gradient boosting multiplier for all penalties
- ``cegb_penalty_split`` :raw-html:`<a id="cegb_penalty_split" title="Permalink to this parameter" href="#cegb_penalty_split">🔗︎</a>`, default = ``0.0``, type = double, constraints: ``cegb_penalty_split >= 0.0``
- cost-effective gradient-boosting penalty for splitting a node
- ``cegb_penalty_feature_lazy`` :raw-html:`<a id="cegb_penalty_feature_lazy" title="Permalink to this parameter" href="#cegb_penalty_feature_lazy">🔗︎</a>`, default = ``0,0,...,0``, type = multi-double
- cost-effective gradient boosting penalty for using a feature
- applied per data point
- ``cegb_penalty_feature_coupled`` :raw-html:`<a id="cegb_penalty_feature_coupled" title="Permalink to this parameter" href="#cegb_penalty_feature_coupled">🔗︎</a>`, default = ``0,0,...,0``, type = multi-double
- cost-effective gradient boosting penalty for using a feature
- the weight of each node is ``w * (n / path_smooth) / (n / path_smooth + 1) + w_p / (n / path_smooth + 1)``, where ``n`` is the number of samples in the node, ``w`` is the optimal node weight to minimise the loss (approximately ``-sum_gradients / sum_hessians``), and ``w_p`` is the weight of the parent node
- note that the parent output ``w_p`` itself has smoothing applied, unless it is the root node, so that the smoothing effect accumulates with the tree depth
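To make the effect of the smoothing formula concrete, here is a standalone Python sketch (the function name and the values are illustrative, not LightGBM internals)::

    def smoothed_leaf_weight(w, w_p, n, path_smooth):
        # blend the raw leaf weight w with the parent weight w_p;
        # the fewer samples n in the leaf, the stronger the pull toward w_p
        k = n / path_smooth
        return w * k / (k + 1) + w_p / (k + 1)

    print(smoothed_leaf_weight(w=2.0, w_p=0.5, n=10, path_smooth=100.0))    # ~0.64, small leaf
    print(smoothed_leaf_weight(w=2.0, w_p=0.5, n=1000, path_smooth=100.0))  # ~1.86, large leaf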
- ``interaction_constraints`` :raw-html:`<a id="interaction_constraints" title="Permalink to this parameter" href="#interaction_constraints">🔗︎</a>`, default = ``""``, type = string
- controls which features can appear in the same branch
- by default interaction constraints are disabled; to enable them you can specify one of the following (see the Python sketch after this list)
- for CLI, lists separated by commas, e.g. ``[0,1,2],[2,3]``
- for Python-package, list of lists, e.g. ``[[0, 1, 2], [2, 3]]``
- for R-package, list of character or numeric vectors, e.g. ``list(c("var1", "var2", "var3"), c("var3", "var4"))`` or ``list(c(1L, 2L, 3L), c(3L, 4L))``. Numeric vectors should use 1-based indexing, where ``1L`` is the first feature, ``2L`` is the second feature, etc
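For example, in the Python package (a minimal sketch; the feature indices are illustrative)::

    params = {
        # features 0, 1, 2 may interact with each other, as may features 2 and 3,
        # but e.g. features 0 and 3 will never appear in the same branch
        "interaction_constraints": [[0, 1, 2], [2, 3]],
    }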
- ``verbosity`` :raw-html:`<a id="verbosity" title="Permalink to this parameter" href="#verbosity">🔗︎</a>`, default = ``1``, type = int, aliases: ``verbose``
- ``saved_feature_importance_type`` :raw-html:`<a id="saved_feature_importance_type" title="Permalink to this parameter" href="#saved_feature_importance_type">🔗︎</a>`, default = ``0``, type = int
- the feature importance type in the saved model file
- ``0``: count-based feature importance (numbers of splits are counted); ``1``: gain-based feature importance (values of gain are counted)
- ``snapshot_freq`` :raw-html:`<a id="snapshot_freq" title="Permalink to this parameter" href="#snapshot_freq">🔗︎</a>`, default = ``-1``, type = int, aliases: ``save_period``
- frequency of saving model file snapshot
- set this to a positive value to enable this function. For example, the model file will be snapshotted at each iteration if ``snapshot_freq=1``
- ``use_quantized_grad`` :raw-html:`<a id="use_quantized_grad" title="Permalink to this parameter" href="#use_quantized_grad">🔗︎</a>`, default = ``false``, type = bool
- whether to use gradient quantization when training
- enabling this will discretize (quantize) the gradients and hessians into ``num_grad_quant_bins`` bins
- with quantized training, most arithmetic in the training process is done with integer operations
- gradient quantization can accelerate training, with little accuracy drop in most cases
- **Note**: can be used only with ``device_type = cpu``
- ``num_grad_quant_bins`` :raw-html:`<a id="num_grad_quant_bins" title="Permalink to this parameter" href="#num_grad_quant_bins">🔗︎</a>`, default = ``4``, type = int
- number of bins used to quantize gradients and hessians
- with more bins, the quantized training will be closer to full precision training
- **Note**: can be used only with ``device_type = cpu``
- ``quant_train_renew_leaf`` :raw-html:`<a id="quant_train_renew_leaf" title="Permalink to this parameter" href="#quant_train_renew_leaf">🔗︎</a>`, default = ``false``, type = bool
- whether to renew the leaf values with the original gradients when using quantized training
- renewing is very helpful for good quantized training accuracy for ranking objectives
- **Note**: can be used only with ``device_type = cpu``
- ``stochastic_rounding`` :raw-html:`<a id="stochastic_rounding" title="Permalink to this parameter" href="#stochastic_rounding">🔗︎</a>`, default = ``true``, type = bool
- whether to use stochastic rounding in gradient quantization
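Putting the quantized-training parameters above together, a Python configuration might look like the following sketch (the values shown are illustrative, not tuned recommendations)::

    params = {
        "device_type": "cpu",            # quantized training is CPU-only
        "use_quantized_grad": True,      # enable gradient/hessian quantization
        "num_grad_quant_bins": 4,        # default bin count; more bins = closer to full precision
        "quant_train_renew_leaf": True,  # helpful for accuracy, especially for ranking objectives
        "stochastic_rounding": True,     # default rounding mode
    }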
- ``linear_tree`` :raw-html:`<a id="linear_tree" title="Permalink to this parameter" href="#linear_tree">🔗︎</a>`, default = ``false``, type = bool, aliases: ``linear_trees``
- fit piecewise linear gradient boosting tree
- tree splits are chosen in the usual way, but the model at each leaf is linear instead of constant
- the linear model at each leaf includes all the numerical features in that leaf's branch
- categorical features are used for splits as normal but are not used in the linear models
- missing values should not be encoded as ``0``. Use ``np.nan`` for Python, ``NA`` for the CLI, and ``NA``, ``NA_real_``, or ``NA_integer_`` for R
- it is recommended to rescale data before training so that features have similar mean and standard deviation
- **Note**: only works with CPU and ``serial`` tree learner
- **Note**: ``regression_l1`` objective is not supported with linear tree boosting
- **Note**: setting ``linear_tree=true`` significantly increases the memory use of LightGBM
- **Note**: if you specify ``monotone_constraints``, constraints will be enforced when choosing the split points, but not when fitting the linear models on leaves
- ``max_bin_by_feature`` :raw-html:`<a id="max_bin_by_feature" title="Permalink to this parameter" href="#max_bin_by_feature">🔗︎</a>`, default = ``None``, type = multi-int
- max number of bins for each feature
- if not specified, will use ``max_bin`` for all features
- ``data_random_seed`` :raw-html:`<a id="data_random_seed" title="Permalink to this parameter" href="#data_random_seed">🔗︎</a>`, default = ``1``, type = int, aliases: ``data_seed``
- set this to ``false`` to disable Exclusive Feature Bundling (EFB), which is described in `LightGBM: A Highly Efficient Gradient Boosting Decision Tree <https://papers.nips.cc/paper_files/paper/2017/hash/6449f44a102fde848669bdd9eb6b76fa-Abstract.html>`__
- ``use_missing`` :raw-html:`<a id="use_missing" title="Permalink to this parameter" href="#use_missing">🔗︎</a>`, default = ``true``, type = bool
- ``zero_as_missing`` :raw-html:`<a id="zero_as_missing" title="Permalink to this parameter" href="#zero_as_missing">🔗︎</a>`, default = ``false``, type = bool
- ``feature_pre_filter`` :raw-html:`<a id="feature_pre_filter" title="Permalink to this parameter" href="#feature_pre_filter">🔗︎</a>`, default = ``true``, type = bool
- as the Dataset object is initialized only once and cannot be changed after that, you may need to set this to ``false`` when searching over ``min_data_in_leaf``; otherwise, features are filtered according to the ``min_data_in_leaf`` value used when the Dataset was constructed, unless you reconstruct the Dataset object
- ``pre_partition`` :raw-html:`<a id="pre_partition" title="Permalink to this parameter" href="#pre_partition">🔗︎</a>`, default = ``false``, type = bool, aliases: ``is_pre_partition``
- by default, LightGBM will map the data file to memory and load features from memory. This provides faster data loading, but may cause an out-of-memory error when the data file is very big
- ``header`` :raw-html:`<a id="header" title="Permalink to this parameter" href="#header">🔗︎</a>`, default = ``false``, type = bool, aliases: ``has_header``
- ``label_column`` :raw-html:`<a id="label_column" title="Permalink to this parameter" href="#label_column">🔗︎</a>`, default = ``""``, type = int or string, aliases: ``label``
- ``weight_column`` :raw-html:`<a id="weight_column" title="Permalink to this parameter" href="#weight_column">🔗︎</a>`, default = ``""``, type = int or string, aliases: ``weight``
- **Note**: the index starts from ``0`` and it doesn't count the label column when the passed type is ``int``, e.g. when the label is column\_0 and weight is column\_1, the correct parameter is ``weight=0``
- ``group_column`` :raw-html:`<a id="group_column" title="Permalink to this parameter" href="#group_column">🔗︎</a>`, default = ``""``, type = int or string, aliases: ``group``, ``group_id``, ``query_column``, ``query``, ``query_id``
- **Note**: the index starts from ``0`` and it doesn't count the label column when the passed type is ``int``, e.g. when the label is column\_0 and query\_id is column\_1, the correct parameter is ``query=0``
- ``ignore_column`` :raw-html:`<a id="ignore_column" title="Permalink to this parameter" href="#ignore_column">🔗︎</a>`, default = ``""``, type = multi-int or string, aliases: ``ignore_feature``, ``blacklist``
- **Note**: although the specified columns are completely ignored during training, they still should have a valid format allowing LightGBM to load the file successfully
- ``categorical_feature`` :raw-html:`<a id="categorical_feature" title="Permalink to this parameter" href="#categorical_feature">🔗︎</a>`, default = ``""``, type = multi-int or string, aliases: ``cat_feature``, ``categorical_column``, ``cat_column``, ``categorical_features``
- **Note**: using large values could be memory consuming. The tree decision rule works best when categorical features are represented by consecutive integers starting from zero
- ``forcedbins_filename`` :raw-html:`<a id="forcedbins_filename" title="Permalink to this parameter" href="#forcedbins_filename">🔗︎</a>`, default = ``""``, type = string
- path to a ``.json`` file that specifies bin upper bounds for some or all features (see the example below)
- the ``.json`` file should contain an array of objects, each containing the word ``feature`` (integer feature index) and ``bin_upper_bound`` (array of thresholds for binning)
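For illustration, such a file might look like the following sketch (the feature indices and thresholds are hypothetical)::

    [
        {
            "feature": 0,
            "bin_upper_bound": [0.3, 0.35, 0.4]
        },
        {
            "feature": 2,
            "bin_upper_bound": [1.5, 2.5, 3.1]
        }
    ]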
- ``precise_float_parser`` :raw-html:`<a id="precise_float_parser" title="Permalink to this parameter" href="#precise_float_parser">🔗︎</a>`, default = ``false``, type = bool
- use precise floating point number parsing for text parser (e.g. CSV, TSV, LibSVM input)
- **Note**: setting this to ``true`` may lead to much slower text parsing
- ``parser_config_file`` :raw-html:`<a id="parser_config_file" title="Permalink to this parameter" href="#parser_config_file">🔗︎</a>`, default = ``""``, type = string
- path to a ``.json`` file that specifies the initialization configuration for a customized parser
- see `lightgbm-transform <https://github.com/microsoft/lightgbm-transform>`__ for usage examples
- **Note**: ``lightgbm-transform`` is not maintained by LightGBM's maintainers. Bug reports or feature requests should go to its `issues page <https://github.com/microsoft/lightgbm-transform/issues>`__
- ``start_iteration_predict`` :raw-html:`<a id="start_iteration_predict" title="Permalink to this parameter" href="#start_iteration_predict">🔗︎</a>`, default = ``0``, type = int
- used only in ``prediction`` task
- used to specify from which iteration to start the prediction
- ``num_iteration_predict`` :raw-html:`<a id="num_iteration_predict" title="Permalink to this parameter" href="#num_iteration_predict">🔗︎</a>`, default = ``-1``, type = int
- used only in ``prediction`` task
- used to specify how many trained iterations will be used in prediction
- **Note**: if you want more explanation of your model's predictions using SHAP values, such as SHAP interaction values, you can install the `shap package <https://github.com/shap>`__
- ``predict_disable_shape_check`` :raw-html:`<a id="predict_disable_shape_check" title="Permalink to this parameter" href="#predict_disable_shape_check">🔗︎</a>`, default = ``false``, type = bool
- if ``false`` (the default), a fatal error will be raised if the number of features in the dataset you predict on differs from the number seen during training
- if ``true``, LightGBM will attempt to predict on whatever data you provide. This is dangerous because you might get incorrect predictions, but you could use it in situations where it is difficult or expensive to generate some features and you are very confident that they were never chosen for splits in the model
- **Note**: be very careful setting this parameter to ``true``
- ``pred_early_stop`` :raw-html:`<a id="pred_early_stop" title="Permalink to this parameter" href="#pred_early_stop">🔗︎</a>`, default = ``false``, type = bool
- ``pred_early_stop_freq`` :raw-html:`<a id="pred_early_stop_freq" title="Permalink to this parameter" href="#pred_early_stop_freq">🔗︎</a>`, default = ``10``, type = int
- ``pred_early_stop_margin`` :raw-html:`<a id="pred_early_stop_margin" title="Permalink to this parameter" href="#pred_early_stop_margin">🔗︎</a>`, default = ``10.0``, type = double
- ``convert_model_language`` :raw-html:`<a id="convert_model_language" title="Permalink to this parameter" href="#convert_model_language">🔗︎</a>`, default = ``""``, type = string
- ``convert_model`` :raw-html:`<a id="convert_model" title="Permalink to this parameter" href="#convert_model">🔗︎</a>`, default = ``gbdt_prediction.cpp``, type = string, aliases: ``convert_model_file``
- ``objective_seed`` :raw-html:`<a id="objective_seed" title="Permalink to this parameter" href="#objective_seed">🔗︎</a>`, default = ``5``, type = int
- **Note**: while enabling this should increase the overall performance metric of your model, it will also result in poor estimates of the individual class probabilities
- ``boost_from_average`` :raw-html:`<a id="boost_from_average" title="Permalink to this parameter" href="#boost_from_average">🔗︎</a>`, default = ``true``, type = bool
- parameter for `Huber loss <https://en.wikipedia.org/wiki/Huber_loss>`__ and `Quantile regression <https://en.wikipedia.org/wiki/Quantile_regression>`__
- controls the number of top results to focus on during training; refer to "truncation level" in Sec. 3 of the `LambdaMART paper <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf>`__
- this parameter is closely related to the desirable cutoff ``k`` in the metric **NDCG@k** that we aim to optimize the ranker for. The optimal setting for this parameter is likely to be slightly higher than ``k`` (e.g., ``k + 3``) to include more pairs of documents to train on, but perhaps not too high to avoid deviating too much from the desired target metric **NDCG@k**
- ``lambdarank_norm`` :raw-html:`<a id="lambdarank_norm" title="Permalink to this parameter" href="#lambdarank_norm">🔗︎</a>`, default = ``true``, type = bool
- ``label_gain`` :raw-html:`<a id="label_gain" title="Permalink to this parameter" href="#label_gain">🔗︎</a>`, default = ``0,1,3,7,15,31,63,...,2^30-1``, type = multi-double
- ``lambdarank_position_bias_regularization`` :raw-html:`<a id="lambdarank_position_bias_regularization" title="Permalink to this parameter" href="#lambdarank_position_bias_regularization">🔗︎</a>`, default = ``0.0``, type = double, constraints: ``lambdarank_position_bias_regularization >= 0.0``
- used only in ``lambdarank`` application when positional information is provided and position bias is modeled. Larger values reduce the inferred position bias factors.
- ``""`` (empty string or not specified) means that the metric corresponding to the specified ``objective`` will be used (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)
- `NDCG <https://en.wikipedia.org/wiki/Discounted_cumulative_gain#Normalized_DCG>`__ and `MAP <https://makarandtapaswi.wordpress.com/2012/07/02/intuition-behind-average-precision-and-map/>`__ evaluation positions, separated by ``,``
- ``multi_error_top_k`` :raw-html:`<a id="multi_error_top_k" title="Permalink to this parameter" href="#multi_error_top_k">🔗︎</a>`, default = ``1``, type = int, constraints: ``multi_error_top_k > 0``
- used only with ``multi_error`` metric
- threshold for top-k multi-error metric
- the error on each sample is ``0`` if the true class is among the top ``multi_error_top_k`` predictions, and ``1`` otherwise
- more precisely, the error on a sample is ``0`` if there are at least ``num_classes - multi_error_top_k`` predictions strictly less than the prediction on the true class
- when ``multi_error_top_k=1`` this is equivalent to the usual multi-error metric
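To make the definition concrete, here is a standalone Python sketch of the per-sample top-k error (illustrative only, not LightGBM's implementation)::

    import numpy as np

    def top_k_error(pred, true_class, k):
        # error is 0 if at least (num_classes - k) predictions are
        # strictly less than the prediction on the true class, else 1
        num_classes = len(pred)
        below = np.sum(pred < pred[true_class])
        return 0 if below >= num_classes - k else 1

    pred = np.array([0.2, 0.5, 0.3])
    print(top_k_error(pred, true_class=2, k=1))  # 1: class 2 is not the single top prediction
    print(top_k_error(pred, true_class=2, k=2))  # 0: class 2 is within the top 2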
- ``auc_mu_weights`` :raw-html:`<a id="auc_mu_weights" title="Permalink to this parameter" href="#auc_mu_weights">🔗︎</a>`, default = ``None``, type = multi-double
- used only with ``auc_mu`` metric
- list representing flattened matrix (in row-major order) giving loss weights for classification errors
- list should have ``n * n`` elements, where ``n`` is the number of classes
- the matrix co-ordinate ``[i, j]`` should correspond to the ``i * n + j``-th element of the list
- if not specified, will use equal weights for all classes
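For example, with ``n = 3`` classes, a Python sketch might look like the following (the weight values are illustrative)::

    params = {
        # flattened 3x3 matrix in row-major order; entry [i, j] is element i * 3 + j
        "auc_mu_weights": [0.0, 1.0, 2.0,
                           1.0, 0.0, 1.0,
                           2.0, 1.0, 0.0],
    }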
- ``gpu_platform_id`` :raw-html:`<a id="gpu_platform_id" title="Permalink to this parameter" href="#gpu_platform_id">🔗︎</a>`, default = ``-1``, type = int
- ``gpu_device_id`` :raw-html:`<a id="gpu_device_id" title="Permalink to this parameter" href="#gpu_device_id">🔗︎</a>`, default = ``-1``, type = int
- ``gpu_use_dp`` :raw-html:`<a id="gpu_use_dp" title="Permalink to this parameter" href="#gpu_use_dp">🔗︎</a>`, default = ``false``, type = bool
If the name of the data file is ``train.txt``, the initial score file should be named ``train.txt.init`` and placed in the same folder as the data file.
If binary data files exist for the raw data file ``train.txt``, for example ``train.txt.bin``, then the initial score file should be named ``train.txt.bin.init``.
For example, if you have a 112-document dataset with ``group = [27, 18, 67]``, that means that you have 3 groups, where the first 27 records are in the first group, records 28-45 are in the second group, and records 46-112 are in the third group.
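For the CLI version, the same grouping can be provided via a query file (named by appending ``.query`` to the data file name, e.g. ``train.txt.query``) containing one group size per line::

    27
    18
    67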