* Fix LightGBM model locale sensitivity and improve read/write performance.
When Java is used, the default C++ locale is broken. This is true for
Java providers that use the C API and even for Python models that require JEP.
This patch solves that issue by making the model reads/writes insensitive
to such locale settings.
To achieve this, within the model read/write codebase:
- C++ streams are imbued with the classic locale
- Calls to functions that are dependent on the locale are replaced
- The default locale is not changed!
This approach means:
- The user's locale is never tampered with, avoiding issues such as
https://github.com/microsoft/LightGBM/issues/2979 with the previous
approach https://github.com/microsoft/LightGBM/pull/2891
- Datasets can still be read according to the user's locale
- The model file has a single format independent of locale
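A minimal Python sketch of the guarantee described above (illustrative, not part of this patch): even when the process locale uses a comma as the decimal separator, the model text keeps using '.' and round-trips cleanly.
```python
import locale

import numpy as np
import lightgbm as lgb

# Simulate a process whose locale uses ',' as the decimal separator
# (skipped silently if that locale is not installed).
try:
    locale.setlocale(locale.LC_ALL, "de_DE.UTF-8")
except locale.Error:
    pass

X, y = np.random.rand(200, 5), np.random.rand(200)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, y), num_boost_round=5)

# The model text is locale-independent, so it reloads identically anywhere.
reloaded = lgb.Booster(model_str=booster.model_to_string())
np.testing.assert_allclose(booster.predict(X), reloaded.predict(X))
```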
Changes:
- Add CommonC namespace which provides faster locale-independent versions of Common's methods
- Model code makes conversions through CommonC
- Clean up unused Common methods
- Performance improvements. Use fast libraries for locale-agnostic conversion:
- value->string: https://github.com/fmtlib/fmt
- string->double: https://github.com/lemire/fast_double_parser (10x
faster double parsing according to their benchmark)
Bugfixes:
- https://github.com/microsoft/LightGBM/issues/2500
- https://github.com/microsoft/LightGBM/issues/2890
- https://github.com/ninia/jep/issues/205 (as it is related to LGBM as well)
* Align CommonC namespace
* Add new external_libs/ to python setup
* Try fast_double_parser fix #1
Testing commit e09e5aad828bcb16bea7ed0ed8322e019112fdbe
If it works, it should fix more LGBM builds
* CMake: Attempt to link fmt without explicit PUBLIC tag
* Exclude external_libs from linting
* Add external_libs to MANIFEST.in
* Set dynamic linking option for fmt.
* Fix linting issues
* Try to fix lint includes
* Try to pass fPIC with static fmt lib
* Try CMake P_I_C option with fmt library
* [R-package] Add CMake support for R and CRAN
* Cleanup CMakeLists
* Try fmt hack to remove stdout
* Switch to header-only mode
* Add PRIVATE argument to target_link_libraries
* use fmt in header-only mode
* Remove CMakeLists comment
* Change OpenMP to PUBLIC linking in Mac
* Update fmt submodule to 7.1.2
* Use fmt in header-only-mode
* Remove fmt from CMakeLists.txt
* Upgrade fast_double_parser to v0.2.0
* Revert "Add PRIVATE argument to target_link_libraries"
This reverts commit 3dd45dde7b92531b2530ab54522bb843c56227a7.
* Address James Lamb's comments
* Update R-package/.Rbuildignore
Co-authored-by: James Lamb <jaylamb20@gmail.com>
* Upgrade to fast_double_parser v0.3.0 - Solaris support
* Use legacy code only in Solaris
* Fix lint issues
* Fix comment
* Address StrikerRUS's comments (solaris ifdef).
* Change header guards
Co-authored-by: James Lamb <jaylamb20@gmail.com>
* TST make sklearn integration test compatible with 0.24
* remove useless import
* remove outdated comment
* order import
* use parametrize_with_checks
* change the reason
* skip constructible if != 0.23
* make tests behave the same across sklearn version
* linter
* address suggestions
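For context, the pattern these tests converge on looks roughly like this (assuming a scikit-learn version that ships parametrize_with_checks):
```python
from sklearn.utils.estimator_checks import parametrize_with_checks

from lightgbm import LGBMClassifier, LGBMRegressor


@parametrize_with_checks([LGBMClassifier(), LGBMRegressor()])
def test_sklearn_integration(estimator, check):
    # Runs every applicable scikit-learn estimator check as its own test case.
    check(estimator)
```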
* Initial CUDA work
* redirect log to python console (#3090)
* redir log to python console
* fix pylint
* Apply suggestions from code review
* Update basic.py
* Apply suggestions from code review
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Update c_api.h
* Apply suggestions from code review
* Apply suggestions from code review
* super-minor: better wording
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: StrikerRUS <nekit94-12@hotmail.com>
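Illustration of the intended effect, not code from this PR: once native log lines go through Python, they can be captured like any other Python output.
```python
import contextlib
import io

import numpy as np
import lightgbm as lgb

X, y = np.random.rand(200, 5), np.random.rand(200)

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    # Native (C++) log messages are forwarded to Python, so they end up in
    # `buf` instead of bypassing Python's stdout entirely.
    lgb.train({"objective": "regression"}, lgb.Dataset(X, y), num_boost_round=2)

print(buf.getvalue())
```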
* re-order includes (fixes#3132) (#3133)
* Revert "re-order includes (fixes#3132) (#3133)" (#3153)
This reverts commit 656d2676c2.
* Missing change from previous rebase
* Minor cleanup and removal of development scripts.
* Only set gpu_use_dp on by default for CUDA. Other minor changes.
* Fix python lint indentation problem.
* More python lint issues.
* Big lint cleanup - more to come.
* Another large lint cleanup - more to come.
* Even more lint cleanup.
* Minor cleanup so less differences in code.
* Revert is_use_subset changes
* Another rebase from master to fix recent conflicts.
* More lint.
* Simple code cleanup - add & remove blank lines, revert unnecessary format changes, remove added dead code.
* Removed parameters added for CUDA and various bug fixes.
* Yet more lint and unnecessary changes.
* Revert another change.
* Removal of unnecessary code.
* temporary appveyor.yml for building and testing
* Remove return value in ReSize
* Removal of unused variables.
* Code cleanup from reviewers suggestions.
* Removal of FIXME comments and unused defines.
* More reviewer comments cleanup.
* Fix config variables.
* Attempt to fix check-docs failure
* Update Parameters.rst for num_gpu
* Removing test appveyor.yml
* Add CUDA_RESOLVE_DEVICE_SYMBOLS to libraries to fix linking issue.
* Fixed handling of data elements less than 2K.
* More reviewer comments cleanup.
* Removal of TODO and fix printing of int64_t
* Add cuda change for CI testing and remove cuda from device_type in python.
* Missed one change from previous check-in
* Remove AdditionConfig and fix settings.
* Limit number of GPUs to one for now in CUDA.
* Update Parameters.rst for previous check-in
* Whitespace removal.
* Cleanup unused code.
* Changed uint/ushort/ulong to unsigned int/short/long to help the Windows-based CUDA compiler work.
* Lint change from previous check-in.
* Changes based on reviewer comments.
* More reviewer comment changes.
* Add warning for is_sparse. Revert tmp_subset code. Only return FeatureGroupData if not is_multi_val_.
* Fix so that CUDA code will compile even if you enable the SCORE_T_USE_DOUBLE define.
* Reviewer comment cleanup.
* Replace warning with Log message. Removal of some of the USE_CUDA. Fix typo and removal of pragma once.
* Remove PRINT debug for CUDA code.
* Allow the use of multiple GPUs for CUDA.
* More multi-GPU enablement for CUDA.
* More code cleanup based on review comments.
* Update docs with latest config changes.
Co-authored-by: Gordon Fossum <fossum@us.ibm.com>
Co-authored-by: ChipKerchner <ckerchne@linux.vnet.ibm.com>
Co-authored-by: Guolin Ke <guolin.ke@outlook.com>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: StrikerRUS <nekit94-12@hotmail.com>
Co-authored-by: James Lamb <jaylamb20@gmail.com>
* Refactor the sklearn API to allow a list of evaluation metrics in the eval_metric parameter of LGBMModel (and its subclasses). Also add unit tests for this functionality
* Simplify expression to check whether the user passed one or multiple metrics to eval_metric parameter
* Simplify new tests by using custom metrics already defined in the test file
* Update docstring to reflect the fact that the parameter "feval" from the "train" and "cv" functions can also receive a list of callables
* Remove oxford comma from docstrings
Apply suggestions from code review
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Use named-parameters to make sure code is compatible with future versions of scikit-learn
Apply suggestions from code review
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Remove throwaway return value to make code more succinct
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Move statement to group together the code related to feval
* Avoid modifying original args as it causes errors in scikit-learn tools
For details see: https://github.com/microsoft/LightGBM/pull/2619
* Consolidate multiple eval-metrics unit-tests into one test
Co-authored-by: German I Ramirez-Espinoza <gire@home>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
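A usage sketch of the new eval_metric behavior (metric names and data here are illustrative):
```python
import numpy as np
from sklearn.datasets import make_regression
from lightgbm import LGBMRegressor


def squared_mae(y_true, y_pred):
    # Custom metric callables return (name, value, is_higher_better).
    return "squared_mae", float(np.mean(np.abs(y_true - y_pred)) ** 2), False


X, y = make_regression(n_samples=200, n_features=5, random_state=42)
model = LGBMRegressor(n_estimators=20)
# eval_metric now accepts a list mixing built-in metric names and callables.
model.fit(X, y, eval_set=[(X, y)], eval_metric=["l1", "l2", squared_mae])
print(list(model.evals_result_.values())[0].keys())
```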
* Simplify the start_iteration param for predict in Python and clean up related start_iteration code
* revert docs changes about the prediction result shape
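Sketch of the simplified parameter in use (illustrative; for plain regression the raw scores of tree ranges are additive):
```python
import numpy as np
import lightgbm as lgb

X, y = np.random.rand(300, 5), np.random.rand(300)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, y), num_boost_round=50)

full = booster.predict(X)
head = booster.predict(X, start_iteration=0, num_iteration=25)   # trees 0..24
tail = booster.predict(X, start_iteration=25, num_iteration=25)  # trees 25..49
print(np.allclose(head + tail, full))
```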
* [python] add return_cvbooster flag to cv function and rename _CVBooster to make it public (#283, #2105)
* [python] Reduce expected metric of unit testing
* [docs] add the CVBooster to the documentation
* [python] reflect the review comments
- Add some clarifications to the documentation
- Rename CVBooster.append to make private
- Decrease iteration rounds of testing to save CI time
- Use CVBooster as root member of lgb
* [python] add more checks in testing for cv
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* [python] add docstring for instance attributes of CVBooster
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* [python] fix docstring
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
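Usage sketch of the flag (the result key shown is my reading of the change; treat it as an assumption):
```python
import numpy as np
import lightgbm as lgb

X, y = np.random.rand(500, 5), np.random.rand(500)

cv_results = lgb.cv(
    {"objective": "regression", "verbose": -1},
    lgb.Dataset(X, y),
    num_boost_round=20,
    nfold=3,
    stratified=False,
    return_cvbooster=True,
)
# The returned CVBooster wraps one Booster per fold.
cvbooster = cv_results["cvbooster"]
fold_preds = [bst.predict(X) for bst in cvbooster.boosters]
print(np.mean(fold_preds, axis=0)[:5])
```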
* Fixes a bug that prevented using multiple eval_metrics in LGBMClassifier
* Move bug-fix test to the test_metrics unit-test
* Fix test to avoid issues with existing tests
* Fix coding-style error
Co-authored-by: German I Ramirez-Espinoza <gire@home>
* Add sparse support to TreeSHAP in LightGBM
* updating based on comments
* updated based on comments, used fromiter instead of frombuffer
* updated based on comments
* fixed limits import order
* fix sparse feature contribs to work with more than int32 max rows
* really fixed int64 max error and build warnings
* added sparse test with >int32 max rows
* fixed python side reshape check on sparse data
* updated based on latest comments
* fixed comments
* added CSC INT32_MAX validation to test, fixed comments
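Roughly how the sparse contributions surface in the Python API (illustrative):
```python
import numpy as np
import scipy.sparse as sp
import lightgbm as lgb

X = sp.random(1000, 50, density=0.05, format="csr", random_state=42)
y = np.random.rand(1000)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, y), num_boost_round=10)

# With sparse input, the per-feature contributions (TreeSHAP) come back as a
# sparse matrix of shape (n_samples, n_features + 1) instead of a dense array;
# the last column holds the expected value.
contribs = booster.predict(X, pred_contrib=True)
print(type(contribs), contribs.shape)
```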
* modify attribute and include stacking tests
* backwards compatibility
* check sklearn version
* move stacking import
* Number of input features (#3173)
Split number of features and stacking tests.
* Number of input features (#3173)
Modify test name.
* Number of input features (#3173)
Update stacking tests for review comments.
* Number of input features (#3173)
Modify classifier test.
* Number of input features (#3173)
Check score.
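As I read these commits, the point is that exposing the number of input features lets LightGBM estimators participate in scikit-learn stacking; a hedged sketch:
```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from lightgbm import LGBMClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = StackingClassifier(
    estimators=[("lgbm", LGBMClassifier(n_estimators=20))],
    final_estimator=LogisticRegression(max_iter=1000),
)
clf.fit(X, y)
# The fitted LightGBM estimator reports how many input features it saw.
print(clf.named_estimators_["lgbm"].n_features_)
print(clf.score(X, y))
```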
* Add interaction constraints functionality.
* Minor fixes.
* Minor fixes.
* Change lambda to function.
* Fix gpu bug, remove extra blank lines.
* Fix gpu bug.
* Fix style issues.
* Try to fix segfault on MACOS.
* Fix bug.
* Fix bug.
* Fix bugs.
* Change parameter format for R.
* Fix R style issues.
* Change string formatting code.
* Change docs to say R package not supported.
* Remove R functionality, moving to separate PR.
* Keep track of branch features in tree object.
* Only track branch features when feature interactions are enabled.
* Fix lint error.
* Update docs and simplify tests.
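Usage sketch of the new parameter from the Python side (the feature groups are illustrative):
```python
import numpy as np
import lightgbm as lgb

rng = np.random.RandomState(0)
X = rng.rand(500, 6)
y = X[:, 0] * X[:, 1] + X[:, 2] + 0.01 * rng.rand(500)

params = {
    "objective": "regression",
    "verbose": -1,
    # Features may only interact within their own group: {0, 1} and {2, 3, 4, 5}.
    "interaction_constraints": [[0, 1], [2, 3, 4, 5]],
}
booster = lgb.train(params, lgb.Dataset(X, y), num_boost_round=50)
# Each individual tree is now restricted to splitting on features of one group.
print(booster.num_trees())
```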
* Support UTF-8 characters in feature name again
This commit reverts 0d59859c67.
Also see:
- https://github.com/microsoft/LightGBM/issues/2226
- https://github.com/microsoft/LightGBM/issues/2478
- https://github.com/microsoft/LightGBM/pull/2229
I reproduced the issue, and as the thorough survey @kidotaka gave us in #2226 shows,
I don't conclude that the cause is UTF-8 itself, but rather "an empty string (character)".
Therefore, I revert "throw error when meet non ascii (#2229)", whose commit hash
is 0d59859c67, and add support for UTF-8 feature names again.
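Sketch of what is supported again (the feature names are arbitrary UTF-8):
```python
import numpy as np
import lightgbm as lgb

X, y = np.random.rand(100, 3), np.random.rand(100)
feature_names = ["特徴量1", "feature_β", "温度"]

booster = lgb.train(
    {"objective": "regression", "verbose": -1},
    lgb.Dataset(X, y, feature_name=feature_names),
    num_boost_round=5,
)
# Non-ASCII names survive a save/load round trip of the model text.
reloaded = lgb.Booster(model_str=booster.model_to_string())
print(reloaded.feature_name())
```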
* add tests
* fix check-docs tests
* update
* fix tests
* update .travis.yml
* fix tests
* update test_r_package.sh
* add a test for R-package
* update test_r_package.sh
* fix test for R-package
* update test_r_package.sh
* update
* remove unneeded comments
* Revert "specify the last supported version of scikit-learn (#2637)"
This reverts commit d100277649.
* ban scikit-learn 0.22.0 and skip broken test
* fix updated test
* fix lint test
* Revert "fix lint test"
This reverts commit 8b4db0805f.
* [swig] Fix SWIG methods that return char** with StringArray.
+ [new] Add StringArray class to manage and manipulate arrays of fixed-length strings:
This class is now used to wrap any char** parameters, manage memory and
manipulate the strings.
Such class is defined at swig/StringArray.hpp and wrapped in StringArray.i.
+ [API+fix] Wrap LGBM_BoosterGetFeatureNames (it resulted in a segfault before):
Added wrapper LGBM_BoosterGetFeatureNamesSWIG(BoosterHandle) that
only receives the booster handle, figures out how much memory to allocate
for the strings, and returns a StringArray which can easily be converted to String[].
+ [API+safety] For consistency, LGBM_BoosterGetEvalNamesSWIG was wrapped as well:
* Refactor to detect any kind of error and remove all the parameters
besides the BoosterHandle (much simpler API to use in Java).
* No assumptions are made about the required string space (previously fixed at 128).
* The amount of required string memory is computed internally
+ [safety] No possibility of undefined behaviour
The two methods wrapped above now compute the necessary string storage space
prior to allocation, as the low-level C API calls would crash the process
irreversibly if they write more memory than what is passed to them.
* Changes to C API and wrappers to support char**
To support the latest SWIG changes that enable proper and safe char**
return support, the C API was changed.
The respective wrappers in R and Python were changed too.
* Cleanup indentation in new lightgbm_R.cpp code
* Address review code-style comments.
* Update swig/StringArray.hpp
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Update python-package/lightgbm/basic.py
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Update src/lightgbm_R.cpp
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: alberto.ferreira <alberto.ferreira@feedzai.com>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
* Add handling of RandomState objects, which are standard for sklearn methods.
LightGBM expects an integer seed instead of an object.
If the passed object is a RandomState, we choose a random integer based on its state to seed the underlying low-level code.
While the chosen random integer is only in the range between 1 and 1e10, I expect it to have enough entropy to not matter in practice.
* Add RandomState object to random_state docstring.
* remove blank line
* Use property to handle setting random_state.
This enables setting cloned estimators with the set_params method in sklearn.
* Add docstring to attribute.
* Fix and simplify docstring.
* Add test case.
* Use maximal int for datatype in seed derivation.
* Replace random_state property with interfacing in fit method.
Derives int seed for C code only when fitting and keeps RandomState object as param.
* Adapt unit test to property change.
* Extended test case and docstring
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Add more equality checks (feature importance, best iteration/score).
* Add equality comparison of boosters represented by strings.
Remove useless best_iteration_ comparison (we do not use early_stopping).
* fix whitespace
* Test if two subsequent fits produce different models
* Apply suggestions from code review
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
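Sketch of the behavior added in this group of commits:
```python
import numpy as np
from sklearn.datasets import make_regression
from lightgbm import LGBMRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# An np.random.RandomState is accepted directly; an integer seed for the C++
# core is derived from its state at fit time, while the object itself is kept
# as the parameter value so that cloning / set_params keep working.
rng = np.random.RandomState(42)
model = LGBMRegressor(n_estimators=20, random_state=rng)
model.fit(X, y)
print(type(model.get_params()["random_state"]))
```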
* fix the bug when using different params with reference
* fix
* Update basic.py
* Apply suggestions from code review
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Update basic.py
* add test
* Apply suggestions from code review
* added asserts in test
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
Co-authored-by: StrikerRUS <nekit94-12@hotmail.com>
* save all param values into model file
* revert storing predict params
* do not save params for predict and convert tasks
* fixed test: 10 is found successfully for default 100
* specify more params as no-save
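Sketch of the effect (checking the embedded text this way is illustrative; the exact layout of the parameters section follows the model file format):
```python
import numpy as np
import lightgbm as lgb

X, y = np.random.rand(100, 5), np.random.rand(100)
booster = lgb.train(
    {"objective": "regression", "learning_rate": 0.07, "verbose": -1},
    lgb.Dataset(X, y),
    num_boost_round=5,
)

# Training parameters are now written into the model text itself,
# so they survive a save/load cycle (predict/convert-only params excluded).
model_str = booster.model_to_string()
print("learning_rate" in model_str)
```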
* Add capability to get possible max and min values for a model
* Change implementation to have return value in tree.cpp, change naming to upper and lower bound, move implementation to gbdt.cpp
* Update include/LightGBM/c_api.h
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Change iteration to avoid potential overflow, add bindings to R and Python and a basic test
* Adjust test values
* Consider const correctness and multithreading protection
* Update test values
* Update test values
* Add test to check that model is exactly the same in all platforms
* Try to parse the model to get the expected values
* Try to parse the model to get the expected values
* Fix implementation, num_leaves can be lower than the leaf_value_ size
* Do not check for num_leaves to be smaller than actual size and get back to test with hardcoded value
* Change test order
* Add gpu_use_dp option in test
* Remove helper test method
* Update src/c_api.cpp
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Update src/io/tree.cpp
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Update src/io/tree.cpp
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Update tests/python_package_test/test_basic.py
Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
* Remove imports
Co-authored-by: Nikita Titov <nekit94-08@mail.ru>
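A sketch of the new capability through the Python bindings; the method names below reflect my reading of the change and should be treated as assumptions:
```python
import numpy as np
import lightgbm as lgb

X, y = np.random.rand(500, 5), np.random.rand(500)
booster = lgb.train({"objective": "regression", "verbose": -1},
                    lgb.Dataset(X, y), num_boost_round=30)

# Bounds on the raw score: the sum of the smallest (resp. largest) leaf value
# of every tree, so every prediction must fall inside [lower, upper].
lower = booster.lower_bound()
upper = booster.upper_bound()
preds = booster.predict(X)
print(lower <= preds.min(), preds.max() <= upper)
```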