FLAML/notebook/zeroshot_lightgbm.ipynb

{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/zeroshot_lightgbm.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Copyright (c) FLAML authors. All rights reserved. \n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"# Zero-shot AutoML with FLAML\n",
"\n",
"\n",
"## Introduction\n",
"\n",
"In this notebook, we demonstrate a basic use case of zero-shot AutoML with FLAML.\n",
"\n",
"FLAML requires `Python>=3.8`. To run this notebook example, please install the [autozero] option:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# %pip install flaml[autozero] lightgbm openml;"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"## What is zero-shot AutoML?\n",
"\n",
"Zero-shot automl means automl systems without expensive tuning. But it does adapt to data.\n",
"A zero-shot automl system will recommend a data-dependent default configuration for a given dataset.\n",
"\n",
"Think about what happens when you use a `LGBMRegressor`. When you initialize a `LGBMRegressor` without any argument, it will set all the hyperparameters to the default values preset by the lightgbm library.\n",
"There is no doubt that these default values have been carefully chosen by the library developers.\n",
"But they are static. They are not adaptive to different datasets.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'boosting_type': 'gbdt', 'class_weight': None, 'colsample_bytree': 1.0, 'importance_type': 'split', 'learning_rate': 0.1, 'max_depth': -1, 'min_child_samples': 20, 'min_child_weight': 0.001, 'min_split_gain': 0.0, 'n_estimators': 100, 'n_jobs': -1, 'num_leaves': 31, 'objective': None, 'random_state': None, 'reg_alpha': 0.0, 'reg_lambda': 0.0, 'silent': 'warn', 'subsample': 1.0, 'subsample_for_bin': 200000, 'subsample_freq': 0}\n"
]
}
],
"source": [
"from lightgbm import LGBMRegressor\n",
"estimator = LGBMRegressor()\n",
"print(estimator.get_params())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is unlikely that 100 trees with 31 leaves each is the best hyperparameter setting for every dataset.\n",
"\n",
"So, we propose to recommend data-dependent default configurations at runtime. \n",
"All you need to do is to import the `LGBMRegressor` from flaml.default instead of from lightgbm.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from flaml.default import LGBMRegressor"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other parts of code remain the same. The new `LGBMRegressor` will automatically choose a configuration according to the training data.\n",
"For different training data the configuration could be different.\n",
"The recommended configuration can be either the same as the static default configuration from the library, or different.\n",
"It is expected to be no worse than the static default configuration in most cases.\n",
"\n",
"For example, let's download [houses dataset](https://www.openml.org/d/537) from OpenML. The task is to predict median price of the house in the region based on demographic composition and a state of housing market in the region."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"download dataset from openml\n",
"Dataset name: houses\n",
"X_train.shape: (15480, 8), y_train.shape: (15480,);\n",
"X_test.shape: (5160, 8), y_test.shape: (5160,)\n"
]
}
],
"source": [
"from flaml.automl.data import load_openml_dataset\n",
"X_train, X_test, y_train, y_test = load_openml_dataset(dataset_id=537, data_dir='./')"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" median_income housing_median_age total_rooms total_bedrooms \\\n",
"19226 7.3003 19 4976.0 711.0 \n",
"14549 5.9547 18 1591.0 268.0 \n",
"9093 3.2125 19 552.0 129.0 \n",
"12213 6.9930 13 270.0 42.0 \n",
"12765 2.5162 21 3260.0 763.0 \n",
"... ... ... ... ... \n",
"13123 4.4125 20 1314.0 229.0 \n",
"19648 2.9135 27 1118.0 195.0 \n",
"9845 3.1977 31 1431.0 370.0 \n",
"10799 5.6315 34 2125.0 498.0 \n",
"2732 1.3882 15 1171.0 328.0 \n",
"\n",
" population households latitude longitude \n",
"19226 1926.0 625.0 38.46 -122.68 \n",
"14549 547.0 243.0 32.95 -117.24 \n",
"9093 314.0 106.0 34.68 -118.27 \n",
"12213 120.0 42.0 33.51 -117.18 \n",
"12765 1735.0 736.0 38.62 -121.41 \n",
"... ... ... ... ... \n",
"13123 712.0 219.0 38.27 -121.26 \n",
"19648 647.0 209.0 37.48 -120.89 \n",
"9845 704.0 393.0 36.58 -121.90 \n",
"10799 1052.0 468.0 33.62 -117.93 \n",
"2732 1024.0 298.0 32.80 -115.56 \n",
"\n",
"[15480 rows x 8 columns]\n"
]
}
],
"source": [
"print(X_train)"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"We fit the `flaml.default.LGBMRegressor` on this dataset."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:flaml.default.suggest:metafeature distance: 0.02197989436019765\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'boosting_type': 'gbdt', 'class_weight': None, 'colsample_bytree': 0.7019911744574896, 'importance_type': 'split', 'learning_rate': 0.022635758411078528, 'max_depth': -1, 'min_child_samples': 2, 'min_child_weight': 0.001, 'min_split_gain': 0.0, 'n_estimators': 4797, 'n_jobs': -1, 'num_leaves': 122, 'objective': None, 'random_state': None, 'reg_alpha': 0.004252223402511765, 'reg_lambda': 0.11288241427227624, 'silent': 'warn', 'subsample': 1.0, 'subsample_for_bin': 200000, 'subsample_freq': 0, 'max_bin': 511, 'verbose': -1}\n"
]
}
],
"source": [
"estimator = LGBMRegressor() # imported from flaml.default\n",
"estimator.fit(X_train, y_train)\n",
"print(estimator.get_params())"
]
},
{
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"The configuration is adapted as shown here. \n",
"The number of trees is 4797, the number of leaves is 122.\n",
"Does it work better than the static default configuration?\n",
"Lets compare.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"0.8537444671194614"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"estimator.score(X_test, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data-dependent configuration has a $r^2$ metric 0.8537 on the test data. What about static default configuration from lightgbm?"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"data": {
"text/plain": [
"0.8296179648694404"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from lightgbm import LGBMRegressor\n",
"estimator = LGBMRegressor()\n",
"estimator.fit(X_train, y_train)\n",
"estimator.score(X_test, y_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The static default configuration gets $r^2=0.8296$, much lower than 0.8537 by the data-dependent configuration using `flaml.default`.\n",
"Again, the only difference in the code is from where you import the `LGBMRegressor`.\n",
"The adaptation to the training dataset is under the hood.\n",
"\n",
"You might wonder, how is it possible to find the data-dependent configuration without tuning?\n",
"The answer is that,\n",
"flaml can recommend good data-dependent default configurations at runtime without tuning only because it mines the hyperparameter configurations across different datasets offline as a preparation step.\n",
"So basically, zero-shot automl shifts the tuning cost from online to offline.\n",
"In the offline preparation stage, we applied `flaml.AutoML`.\n",
"\n",
"### Benefit of zero-shot AutoML\n",
"Now, what is the benefit of zero-shot automl? Or what is the benefit of shifting tuning from online to offline?\n",
"The first benefit is the online computational cost. That is the cost paid by the final consumers of automl. They only need to train one model.\n",
"They get the hyperparameter configuration right away. There is no overhead to worry about.\n",
"Another big benefit is that your code doesnt need to change. So if you currently have a workflow without the setup for tuning, you can use zero-shot automl without breaking that workflow.\n",
"Compared to tuning-based automl, zero-shot automl requires less input. For example, it doesnt need a tuning budget, resampling strategy, validation dataset etc.\n",
"A related benefit is that you dont need to worry about holding a subset of the training data for validation, which the tuning process might overfit.\n",
"As there is no tuning, you can use all the training data to train your model.\n",
"Finally, you can customize the offline preparation for a domain, and leverage the past tuning experience for better adaptation to similar tasks.\n",
"\n",
"## How to use at runtime\n",
"The easiest way to leverage this technique is to import a \"flamlized\" learner of your favorite choice and use it just as how you use the learner before. \n",
"The automation is done behind the scene.\n",
"The current list of “flamlized” learners are:\n",
"* LGBMClassifier, LGBMRegressor (inheriting LGBMClassifier, LGBMRegressor from lightgbm)\n",
"* XGBClassifier, XGBRegressor (inheriting LGBMClassifier, LGBMRegressor from xgboost)\n",
"* RandomForestClassifier, RandomForestRegressor (inheriting from scikit-learn)\n",
"* ExtraTreesClassifier, ExtraTreesRegressor (inheriting from scikit-learn)\n",
"They work for classification or regression tasks.\n",
"\n",
"### What's the magic behind the scene?\n",
"`flaml.default.LGBMRegressor` inherits `lightgbm.LGBMRegressor`, so all the methods and attributes in `lightgbm.LGBMRegressor` are still valid in `flaml.default.LGBMRegressor`.\n",
"The difference is, `flaml.default.LGBMRegressor` decides the hyperparameter configurations based on the training data. It would use a different configuration if it is predicted to outperform the original data-independent default. If you inspect the params of the fitted estimator, you can find what configuration is used. If the original default configuration is used, then it is equivalent to the original estimator.\n",
"The recommendation of which configuration should be used is based on offline AutoML run results. Information about the training dataset, such as the size of the dataset will be used to recommend a data-dependent configuration. The recommendation is done instantly in negligible time. The training can be faster or slower than using the original default configuration depending on the recommended configuration. \n",
"\n",
"### Can I check the configuration before training?\n",
"Yes. You can use `suggest_hyperparams()` method to find the suggested configuration.\n",
"For example, when you run the following code with the houses dataset, it will return the hyperparameter configuration instantly, without training the model."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:flaml.default.suggest:metafeature distance: 0.02197989436019765\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'n_estimators': 4797, 'num_leaves': 122, 'min_child_samples': 2, 'learning_rate': 0.022635758411078528, 'colsample_bytree': 0.7019911744574896, 'reg_alpha': 0.004252223402511765, 'reg_lambda': 0.11288241427227624, 'max_bin': 511, 'verbose': -1}\n"
]
}
],
"source": [
"from flaml.default import LGBMRegressor\n",
"\n",
"estimator = LGBMRegressor()\n",
"hyperparams, _, _, _ = estimator.suggest_hyperparams(X_train, y_train)\n",
"print(hyperparams)"
]
},
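{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same one-line swap works for the other \"flamlized\" learners listed above. Below is a minimal sketch with a classifier, assuming the flamlized `LGBMClassifier` exposes the same `suggest_hyperparams()` API; the scikit-learn breast_cancer dataset is used purely for illustration and is not part of this notebook's houses example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A hedged sketch: a flamlized classifier used the same way as the regressor.\n",
"# The breast_cancer dataset is only an illustrative stand-in, not part of the\n",
"# original houses example.\n",
"from sklearn.datasets import load_breast_cancer\n",
"from flaml.default import LGBMClassifier\n",
"\n",
"X_clf, y_clf = load_breast_cancer(return_X_y=True)\n",
"clf = LGBMClassifier()\n",
"clf_hyperparams, _, _, _ = clf.suggest_hyperparams(X_clf, y_clf)\n",
"print(clf_hyperparams)"
]
},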
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can print the configuration as a dictionary, in case you want to check it before you use it for training.\n",
"\n",
"This brings up an equivalent, open-box way for zero-shot AutoML if you would like more control over the training. \n",
"Import the function `preprocess_and_suggest_hyperparams` from `flaml.default`.\n",
"This function takes the task name, the training dataset, and the estimator name as input:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:flaml.default.suggest:metafeature distance: 0.02197989436019765\n"
]
}
],
"source": [
"from flaml.default import preprocess_and_suggest_hyperparams\n",
"(\n",
" hyperparams,\n",
" estimator_class,\n",
" X_transformed,\n",
" y_transformed,\n",
" feature_transformer,\n",
" label_transformer,\n",
") = preprocess_and_suggest_hyperparams(\"regression\", X_train, y_train, \"lgbm\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It outputs the hyperparameter configurations, estimator class, transformed data, feature transformer and label transformer.\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'lightgbm.sklearn.LGBMRegressor'>\n"
]
}
],
"source": [
"print(estimator_class)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this case, the estimator name is “lgbm”. The corresponding estimator class is `lightgbm.LGBMRegressor`.\n",
"This line initializes a LGBMClassifier with the recommended hyperparameter configuration:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [],
"source": [
"model = estimator_class(**hyperparams)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we can fit the model on the transformed data."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"slideshow": {
"slide_type": "slide"
},
"tags": []
},
"outputs": [
{
"data": {
"text/html": [
"<style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: \"▸\";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: \"▾\";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: \"\";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: \"\";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: \"\";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: 
center;}#sk-container-id-1 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id=\"sk-container-id-1\" class=\"sk-top-container\
" learning_rate=0.022635758411078528, max_bin=511,\n",
" min_child_samples=2, n_estimators=4797, num_leaves=122,\n",
" reg_alpha=0.004252223402511765, reg_lambda=0.11288241427227624,\n",
" verbose=-1)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class=\"sk-container\" hidden><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"sk-estimator-id-1\" type=\"checkbox\" checked><label for=\"sk-estimator-id-1\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">LGBMRegressor</label><div class=\"sk-toggleable__content\"><pre>LGBMRegressor(colsample_bytree=0.7019911744574896,\n",
" learning_rate=0.022635758411078528, max_bin=511,\n",
" min_child_samples=2, n_estimators=4797, num_leaves=122,\n",
" reg_alpha=0.004252223402511765, reg_lambda=0.11288241427227624,\n",
" verbose=-1)</pre></div></div></div></div></div>"
],
"text/plain": [
"LGBMRegressor(colsample_bytree=0.7019911744574896,\n",
" learning_rate=0.022635758411078528, max_bin=511,\n",
" min_child_samples=2, n_estimators=4797, num_leaves=122,\n",
" reg_alpha=0.004252223402511765, reg_lambda=0.11288241427227624,\n",
" verbose=-1)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.fit(X_transformed, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The feature transformer needs to be applied to the test data before prediction."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"X_test_transformed = feature_transformer.transform(X_test)\n",
"y_pred = model.predict(X_test_transformed)"
]
},
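{
"cell_type": "markdown",
"metadata": {},
"source": [
"To check that this open-box pipeline matches the flamlized learner, you can score the predictions. A minimal sketch using scikit-learn's `r2_score` (the exact value depends on your environment; it should be close to the 0.8537 obtained above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: evaluate the open-box pipeline with the same r^2 metric used above.\n",
"from sklearn.metrics import r2_score\n",
"\n",
"print(r2_score(y_test, y_pred))"
]
},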
{
"cell_type": "markdown",
"metadata": {},
"source": [
"These are automated when you use the \"flamlized\" learner. So you dont need to know these details when you dont need to open the box.\n",
"We demonstrate them here to help you understand whats going on. And in case you need to modify some steps, you know what to do.\n",
"\n",
"(Note that some classifiers like XGBClassifier require the labels to be integers, while others do not. So you can decide whether to use the transformed labels y_transformed and the label transformer label_transformer. Also, each estimator may require specific preprocessing of the data.)\n",
"\n",
"## Combine Zero-shot AutoML and HPO\n",
"\n",
"Zero Shot AutoML is fast and simple to use. It is very useful if speed and simplicity are the primary concerns. \n",
"If you are not satisfied with the accuracy of the zero shot model, you may want to spend extra time to tune the model.\n",
"You can use `flaml.AutoML` to do that. Everything is the same as your normal `AutoML.fit()`, except to set `starting_points=\"data\"`.\n",
"This tells AutoML to start the tuning from the data-dependent default configurations. You can set the tuning budget in the same way as before.\n",
"Note that if you set `max_iter=0` and `time_budget=None`, you are effectively using zero-shot AutoML. \n",
"When `estimator_list` is omitted, the most promising estimator together with its hyperparameter configuration will be tried first, which are both decided by zero-shot automl."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[flaml.automl.logger: 04-28 02:51:45] {1663} INFO - task = regression\n",
"[flaml.automl.logger: 04-28 02:51:45] {1670} INFO - Data split method: uniform\n",
"[flaml.automl.logger: 04-28 02:51:45] {1673} INFO - Evaluation method: cv\n",
"[flaml.automl.logger: 04-28 02:51:45] {1771} INFO - Minimizing error metric: 1-r2\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:flaml.default.suggest:metafeature distance: 0.02197989436019765\n",
"INFO:flaml.default.suggest:metafeature distance: 0.006677018633540373\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[flaml.automl.logger: 04-28 02:51:45] {1881} INFO - List of ML learners in AutoML Run: ['lgbm']\n",
"[flaml.automl.logger: 04-28 02:51:45] {2191} INFO - iteration 0, current learner lgbm\n",
"[flaml.automl.logger: 04-28 02:53:39] {2317} INFO - Estimated sufficient time budget=1134156s. Estimated necessary time budget=1134s.\n",
"[flaml.automl.logger: 04-28 02:53:39] {2364} INFO - at 113.5s,\testimator lgbm's best error=0.1513,\tbest estimator lgbm's best error=0.1513\n",
"[flaml.automl.logger: 04-28 02:53:39] {2191} INFO - iteration 1, current learner lgbm\n",
"[flaml.automl.logger: 04-28 02:55:32] {2364} INFO - at 226.6s,\testimator lgbm's best error=0.1513,\tbest estimator lgbm's best error=0.1513\n",
"[flaml.automl.logger: 04-28 02:55:54] {2600} INFO - retrain lgbm for 22.3s\n",
"[flaml.automl.logger: 04-28 02:55:54] {2603} INFO - retrained model: LGBMRegressor(colsample_bytree=0.7019911744574896,\n",
" learning_rate=0.02263575841107852, max_bin=511,\n",
" min_child_samples=2, n_estimators=4797, num_leaves=122,\n",
" reg_alpha=0.004252223402511765, reg_lambda=0.11288241427227624,\n",
" verbose=-1)\n",
"[flaml.automl.logger: 04-28 02:55:54] {1911} INFO - fit succeeded\n",
"[flaml.automl.logger: 04-28 02:55:54] {1912} INFO - Time taken to find the best model: 113.4601559638977\n"
]
}
],
"source": [
"from flaml import AutoML\n",
"\n",
"automl = AutoML()\n",
"settings = {\n",
" \"task\": \"regression\",\n",
" \"starting_points\": \"data\",\n",
" \"estimator_list\": [\"lgbm\"],\n",
" \"time_budget\": 300,\n",
"}\n",
"automl.fit(X_train, y_train, **settings)"
]
}
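,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As noted above, setting `max_iter=0` (with `time_budget=None`) makes `AutoML.fit` train once with the data-dependent default configuration and skip tuning entirely. A minimal sketch of this zero-shot variant (inspecting the trained model via `automl.model.estimator` is an assumption based on FLAML's usual API):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch: zero-shot via AutoML.fit, i.e., no tuning iterations.\n",
"from flaml import AutoML\n",
"\n",
"automl_zero = AutoML()\n",
"automl_zero.fit(\n",
"    X_train,\n",
"    y_train,\n",
"    task=\"regression\",\n",
"    starting_points=\"data\",\n",
"    estimator_list=[\"lgbm\"],\n",
"    max_iter=0,\n",
"    time_budget=None,\n",
")\n",
"print(automl_zero.model.estimator)"
]
}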
],
"metadata": {
"interpreter": {
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
},
"kernelspec": {
"display_name": "Python 3.9.9 64-bit",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.15"
}
},
"nbformat": 4,
"nbformat_minor": 2
}