* zero-shot AutoML in readme

* use pydoc-markdown 4.5.0 to avoid an error in 4.6.0
Chi Wang 2022-03-05 11:49:39 -08:00 committed by GitHub
Parent 31ac984c4b
Commit f0b0cae682
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
2 changed files: 16 additions and 6 deletions

.github/workflows/deploy-website.yml

@@ -28,7 +28,7 @@ jobs:
- name: pydoc-markdown install
run: |
python -m pip install --upgrade pip
-pip install pydoc-markdown
+pip install pydoc-markdown==4.5.0
- name: pydoc-markdown run
run: |
pydoc-markdown
@@ -64,7 +64,7 @@ jobs:
- name: pydoc-markdown install
run: |
python -m pip install --upgrade pip
-pip install pydoc-markdown
+pip install pydoc-markdown==4.5.0
- name: pydoc-markdown run
run: |
pydoc-markdown

README.md

@@ -33,7 +33,7 @@ FLAML requires **Python version >= 3.6**. It can be installed from pip:
pip install flaml
```
-To run the [`notebook example`](https://github.com/microsoft/FLAML/tree/main/notebook),
+To run the [`notebook examples`](https://github.com/microsoft/FLAML/tree/main/notebook),
install flaml with the [notebook] option:
```bash
@@ -43,7 +43,7 @@ pip install flaml[notebook]
## Quickstart
* With three lines of code, you can start using this economical and fast
-AutoML engine as a scikit-learn style estimator.
+AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML).
```python
from flaml import AutoML
@@ -52,19 +52,29 @@ automl.fit(X_train, y_train, task="classification")
```
* You can restrict the learners and use FLAML as a fast hyperparameter tuning
-tool for XGBoost, LightGBM, Random Forest etc. or a customized learner.
+tool for XGBoost, LightGBM, Random Forest etc. or a [customized learner](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#estimator-and-search-space).
```python
automl.fit(X_train, y_train, task="classification", estimator_list=["lgbm"])
```
-* You can also run generic hyperparameter tuning for a custom function.
+* You can also run generic hyperparameter tuning for a [custom function](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).
```python
from flaml import tune
tune.run(evaluation_function, config={…}, low_cost_partial_config={…}, time_budget_s=3600)
```
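The `tune.run` snippet above elides the actual search space and evaluation function. As a rough, self-contained sketch of what a tuner does with such an `evaluation_function` (the random-search loop and all names here are illustrative stand-ins, not FLAML's actual API or implementation):

```python
import random

# Hypothetical objective in the shape tune.run expects: takes one sampled
# config, returns a dict with the metric to optimize. Minimized at x == 3.
def evaluation_function(config):
    return {"score": (config["x"] - 3) ** 2}

# A tuner repeatedly samples a config from the search space, evaluates it,
# and keeps the best result seen so far. Plain random search shown here;
# FLAML's tuner is cost-aware and far more sample-efficient.
random.seed(0)
best = None
for _ in range(200):
    config = {"x": random.uniform(0, 10)}  # sample from the search space
    result = evaluation_function(config)
    if best is None or result["score"] < best["score"]:
        best = {**config, **result}

print(best["x"])  # close to the optimum at 3
```

In the real API, the search space, a low-cost starting configuration, and a time budget are passed to `tune.run`, which returns the best trial found.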
* [Zero-shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) allows using the existing training API from lightgbm, xgboost etc. while getting the benefit of AutoML in choosing high-performance hyperparameter configurations per task.
```python
from flaml.default import LGBMRegressor
# Use LGBMRegressor in the same way as you use lightgbm.LGBMRegressor.
estimator = LGBMRegressor()
# The hyperparameters are automatically set according to the training data.
estimator.fit(X_train, y_train)
```
## Documentation
Detailed documentation for FLAML is available [here](https://microsoft.github.io/FLAML/), including the API reference, use cases, and examples.