[doc] fix documentation for quantized training (#6528)

fix documentation for quantized training
This commit is contained in:
shiyu1994 2024-07-09 10:24:54 +08:00 committed by GitHub
Parent a5054f7752
Commit fc788a51b6
No key found matching this signature
GPG key ID: B5690EEEBB952194
2 changed files with 9 additions and 6 deletions


@@ -680,7 +680,7 @@ Learning Control Parameters
 - gradient quantization can accelerate training, with little accuracy drop in most cases
-- **Note**: can be used only with ``device_type = cpu``
+- **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 - *New in version 4.0.0*
@@ -690,7 +690,7 @@ Learning Control Parameters
 - with more bins, the quantized training will be closer to full precision training
-- **Note**: can be used only with ``device_type = cpu``
+- **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 - *New in 4.0.0*
@@ -700,7 +700,7 @@ Learning Control Parameters
 - renewing is very helpful for good quantized training accuracy for ranking objectives
-- **Note**: can be used only with ``device_type = cpu``
+- **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 - *New in 4.0.0*
@@ -708,6 +708,8 @@ Learning Control Parameters
 - whether to use stochastic rounding in gradient quantization
+- **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 - *New in 4.0.0*
IO Parameters
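
The four parameters documented in the hunks above are ordinary training parameters, so a minimal usage sketch with the LightGBM Python package looks like the following. This is an illustration only: the toy data, the chosen values, and the ``device_type`` setting are placeholders, not part of this commit.

import numpy as np
import lightgbm as lgb

# Placeholder data; any lgb.Dataset works here.
X = np.random.rand(1000, 10)
y = np.random.rand(1000)
train_data = lgb.Dataset(X, label=y)

params = {
    "objective": "regression",
    "device_type": "cpu",              # per the updated docs, "cuda" is also supported
    "use_quantized_grad": True,        # discretize gradients/hessians into integer bins
    "num_grad_quant_bins": 4,          # default; more bins is closer to full precision
    "quant_train_renew_leaf": False,   # set to True for ranking objectives
    "stochastic_rounding": True,       # default
}

booster = lgb.train(params, train_data, num_boost_round=10)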


@@ -619,23 +619,24 @@ struct Config {
 // desc = enabling this will discretize (quantize) the gradients and hessians into bins of ``num_grad_quant_bins``
 // desc = with quantized training, most arithmetics in the training process will be integer operations
 // desc = gradient quantization can accelerate training, with little accuracy drop in most cases
-// desc = **Note**: can be used only with ``device_type = cpu``
+// desc = **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 // desc = *New in version 4.0.0*
 bool use_quantized_grad = false;
 // desc = number of bins to quantize gradients and hessians
 // desc = with more bins, the quantized training will be closer to full precision training
-// desc = **Note**: can be used only with ``device_type = cpu``
+// desc = **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 // desc = *New in 4.0.0*
 int num_grad_quant_bins = 4;
 // desc = whether to renew the leaf values with original gradients when using quantized training
 // desc = renewing is very helpful for good quantized training accuracy for ranking objectives
-// desc = **Note**: can be used only with ``device_type = cpu``
+// desc = **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 // desc = *New in 4.0.0*
 bool quant_train_renew_leaf = false;
 // desc = whether to use stochastic rounding in gradient quantization
+// desc = **Note**: can be used only with ``device_type = cpu`` and ``device_type = cuda``
 // desc = *New in 4.0.0*
 bool stochastic_rounding = true;
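
For context on the ``stochastic_rounding`` flag: stochastic rounding rounds a scaled gradient up or down with probability proportional to its fractional part, which keeps the quantized gradients unbiased in expectation. The NumPy sketch below illustrates that idea only; it is not LightGBM's internal implementation, and the scaling scheme shown is a simplification.

import numpy as np

def stochastic_round(x, rng):
    # Round down, then add 1 with probability equal to the fractional part.
    floor = np.floor(x)
    return floor + (rng.random(x.shape) < (x - floor))

def quantize_gradients(grad, num_bins, rng):
    # Illustrative scaling: spread values over roughly num_bins integer levels,
    # round stochastically, then map back for comparison with the originals.
    scale = np.max(np.abs(grad)) / (num_bins / 2)
    quantized = stochastic_round(grad / scale, rng)
    return quantized * scale

rng = np.random.default_rng(0)
grad = rng.normal(size=100_000)
approx = quantize_gradients(grad, num_bins=4, rng=rng)
print("mean rounding error:", np.mean(approx - grad))  # near 0: unbiased in expectation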