"Zero-shot automl means automl systems without expensive tuning. But it does adapt to data.\n",
"A zero-shot automl system will recommend a data-dependent default configuration for a given dataset.\n",
"\n",
"Think about what happens when you use a `LGBMRegressor`. When you initialize a `LGBMRegressor` without any argument, it will set all the hyperparameters to the default values preset by the lightgbm library.\n",
"There is no doubt that these default values have been carefully chosen by the library developers.\n",
"But they are static. They are not adaptive to different datasets.\n"
"Other parts of code remain the same. The new `LGBMRegressor` will automatically choose a configuration according to the training data.\n",
"For different training data the configuration could be different.\n",
"The recommended configuration can be either the same as the static default configuration from the library, or different.\n",
"It is expected to be no worse than the static default configuration in most cases.\n",
"\n",
"For example, let's download [houses dataset](https://www.openml.org/d/537) from OpenML. The task is to predict median price of the house in the region based on demographic composition and a state of housing market in the region."
"The static default configuration gets $r^2=0.8296$, much lower than 0.8537 by the data-dependent configuration using `flaml.default`.\n",
"Again, the only difference in the code is from where you import the `LGBMRegressor`.\n",
"The adaptation to the training dataset is under the hood.\n",
"\n",
"You might wonder, how is it possible to find the data-dependent configuration without tuning?\n",
"The answer is that,\n",
"flaml can recommend good data-dependent default configurations at runtime without tuning only because it mines the hyperparameter configurations across different datasets offline as a preparation step.\n",
"So basically, zero-shot automl shifts the tuning cost from online to offline.\n",
"In the offline preparation stage, we applied `flaml.AutoML`.\n",
"\n",
"### Benefit of zero-shot AutoML\n",
"Now, what is the benefit of zero-shot automl? Or what is the benefit of shifting tuning from online to offline?\n",
"The first benefit is the online computational cost. That is the cost paid by the final consumers of automl. They only need to train one model.\n",
"They get the hyperparameter configuration right away. There is no overhead to worry about.\n",
"Another big benefit is that your code doesn’t need to change. So if you currently have a workflow without the setup for tuning, you can use zero-shot automl without breaking that workflow.\n",
"Compared to tuning-based automl, zero-shot automl requires less input. For example, it doesn’t need a tuning budget, resampling strategy, validation dataset etc.\n",
"A related benefit is that you don’t need to worry about holding a subset of the training data for validation, which the tuning process might overfit.\n",
"As there is no tuning, you can use all the training data to train your model.\n",
"Finally, you can customize the offline preparation for a domain, and leverage the past tuning experience for better adaptation to similar tasks.\n",
"\n",
"## How to use at runtime\n",
"The easiest way to leverage this technique is to import a \"flamlized\" learner of your favorite choice and use it just as how you use the learner before. \n",
"The automation is done behind the scene.\n",
"The current list of “flamlized” learners are:\n",
"* LGBMClassifier, LGBMRegressor (inheriting LGBMClassifier, LGBMRegressor from lightgbm)\n",
"* XGBClassifier, XGBRegressor (inheriting LGBMClassifier, LGBMRegressor from xgboost)\n",
"* RandomForestClassifier, RandomForestRegressor (inheriting from scikit-learn)\n",
"* ExtraTreesClassifier, ExtraTreesRegressor (inheriting from scikit-learn)\n",
"They work for classification or regression tasks.\n",
"\n",
"### What's the magic behind the scene?\n",
"`flaml.default.LGBMRegressor`inherits`lightgbm.LGBMRegressor`, so all the methods and attributes in`lightgbm.LGBMRegressor`are still valid in`flaml.default.LGBMRegressor`.\n",
"The difference is,`flaml.default.LGBMRegressor`decides the hyperparameter configurations based on the training data. It would use a different configuration if it is predicted to outperform the original data-independent default. If you inspect the params of the fitted estimator, you can find what configuration is used. If the original default configuration is used, then it is equivalent to the original estimator.\n",
"The recommendation of which configuration should be used is based on offline AutoML run results. Information about the training dataset, such as the size of the dataset will be used to recommend a data-dependent configuration. The recommendation is done instantly in negligible time. The training can be faster or slower than using the original default configuration depending on the recommended configuration. \n",
"\n",
"### Can I check the configuration before training?\n",
"Yes. You can use`suggest_hyperparams()` method to find the suggested configuration.\n",
"For example, when you run the following code with the houses dataset, it will return the hyperparameter configuration instantly, without training the model."
" verbose=-1)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class=\"sk-container\" hidden><div class=\"sk-item\"><div class=\"sk-estimator sk-toggleable\"><input class=\"sk-toggleable__control sk-hidden--visually\" id=\"sk-estimator-id-1\" type=\"checkbox\" checked><label for=\"sk-estimator-id-1\" class=\"sk-toggleable__label sk-toggleable__label-arrow\">LGBMRegressor</label><div class=\"sk-toggleable__content\"><pre>LGBMRegressor(colsample_bytree=0.7019911744574896,\n",
"These are automated when you use the \"flamlized\" learner.So you don’t need to know these details when you don’t need to open the box.\n",
"We demonstrate them here to help you understand what’s going on. And in case you need to modify some steps, you know what to do.\n",
"\n",
"(Note that some classifiers like XGBClassifier require the labels to be integers, while others do not. So you can decide whether to use the transformed labelsy_transformedand the label transformerlabel_transformer. Also, each estimator may require specific preprocessing of the data.)\n",
"\n",
"## Combine Zero-shot AutoML and HPO\n",
"\n",
"Zero Shot AutoML is fast and simple to use. It is very useful if speed and simplicity are the primary concerns. \n",
"If you are not satisfied with the accuracy of the zero shot model, you may want to spend extra time to tune the model.\n",
"You can use`flaml.AutoML`to do that. Everything is the same as your normal `AutoML.fit()`, except to set`starting_points=\"data\"`.\n",
"This tells AutoML to start the tuning from the data-dependent default configurations. You can set the tuning budget in the same way as before.\n",
"Note that if you set`max_iter=0`and`time_budget=None`, you are effectively using zero-shot AutoML. \n",
"When`estimator_list`is omitted, the most promising estimator together with its hyperparameter configuration will be tried first, which are both decided by zero-shot automl."