{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/FLAML/blob/main/notebook/autogen_openai_completion.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Copyright (c) Microsoft Corporation. All rights reserved. \n",
"\n",
"Licensed under the MIT License.\n",
"\n",
"# Use FLAML to Tune OpenAI Models\n",
"\n",
"`flaml.autogen` offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. The research study finds that tuning hyperparameters can significantly improve the utility of LLMs.\n",
"Please find documentation about this feature [here](/docs/Use-Cases/AutoGen#enhanced-inference).\n",
"\n",
"In this notebook, we tune OpenAI models for code generation. We use [the HumanEval benchmark](https://huggingface.co/datasets/openai_humaneval) released by OpenAI for synthesizing programs from docstrings.\n",
"\n",
"## Requirements\n",
"\n",
"FLAML requires `Python>=3.8`. To run this notebook example, please install flaml with the [autogen,blendsearch] option:\n",
"```bash\n",
"pip install flaml[autogen,blendsearch]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:36.910966Z",
"iopub.status.busy": "2023-02-24T23:25:36.910473Z",
"iopub.status.idle": "2023-02-24T23:25:36.914554Z",
"shell.execute_reply": "2023-02-24T23:25:36.914030Z"
}
},
"outputs": [],
"source": [
"# %pip install flaml[autogen,blendsearch]~=2.0.0 datasets"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
"* The [`config_list_openai_aoai`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_openai_aoai) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
" - OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
" - Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
" - Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
"* The [`config_list_from_json`](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for environment variable `env_or_file` which needs to be a valid json string. If that variable is not found, it then looks for a json file with the same name. It filters the configs by filter_dict.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base. If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:36.917301Z",
"iopub.status.busy": "2023-02-24T23:25:36.917011Z",
"iopub.status.idle": "2023-02-24T23:25:36.923156Z",
"shell.execute_reply": "2023-02-24T23:25:36.922619Z"
}
},
"outputs": [],
"source": [
"from flaml import autogen\n",
"\n",
"endpoint_list = autogen.config_list_openai_aoai()\n",
"# the endpoint_list looks like this:\n",
"# endpoint_list = [\n",
"# {\n",
"# 'api_key': '<your OpenAI API key here>',\n",
"# }, # OpenAI API endpoint for gpt-4\n",
"# {\n",
"# 'api_key': '<your first Azure OpenAI API key here>',\n",
"# 'api_base': '<your first Azure OpenAI API base here>',\n",
"# 'api_type': 'azure',\n",
"# 'api_version': '2023-03-15-preview',\n",
"# }, # Azure OpenAI API endpoint for gpt-4\n",
"# {\n",
"# 'api_key': '<your second Azure OpenAI API key here>',\n",
"# 'api_base': '<your second Azure OpenAI API base here>',\n",
"# 'api_type': 'azure',\n",
"# 'api_version': '2023-03-15-preview',\n",
"# }, # another Azure OpenAI API endpoint for gpt-4\n",
"# ]\n",
"\n",
"config_list = autogen.config_list_from_json(\n",
" env_or_file=\"OAI_CONFIG_LIST\",\n",
" filter_dict={\n",
" \"model\": {\n",
" \"gpt-3.5-turbo\",\n",
" \"gpt-3.5-turbo-16k\",\n",
" \"gpt-3.5-turbo-0301\",\n",
" \"chatgpt-35-turbo-0301\",\n",
" \"gpt-35-turbo-v0301\",\n",
" \"gpt\",\n",
" },\n",
" },\n",
")\n",
"# the config_list looks like this:\n",
"# config_list = [\n",
"# {\n",
"# 'model': 'gpt-3.5-turbo',\n",
"# 'api_key': '<your OpenAI API key here>',\n",
"# }, # OpenAI API endpoint for gpt-3.5-turbo\n",
"# {\n",
"# 'model': 'gpt-3.5-turbo',\n",
"# 'api_key': '<your first Azure OpenAI API key here>',\n",
"# 'api_base': '<your first Azure OpenAI API base here>',\n",
"# 'api_type': 'azure',\n",
"# 'api_version': '2023-06-01-preview',\n",
"# }, # Azure OpenAI API endpoint for gpt-3.5-turbo\n",
"# {\n",
"# 'model': 'gpt-35-turbo-v0301',\n",
"# 'api_key': '<your second Azure OpenAI API key here>',\n",
"# 'api_base': '<your second Azure OpenAI API base here>',\n",
"# 'api_type': 'azure',\n",
"# 'api_version': '2023-06-01-preview',\n",
"# }, # another Azure OpenAI API endpoint for gpt-3.5-turbo with deployment name gpt-35-turbo-v0301\n",
"# ]\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If you don't use the two provided utility functions above, you can define the lists in other ways you prefer.\n",
"\n",
"## Load dataset\n",
"\n",
"First, we load the humaneval dataset. The dataset contains 164 examples. We use the first 20 for tuning the generation hyperparameters and the remaining for evaluation. In each example, the \"prompt\" is the prompt string for eliciting the code generation (renamed into \"definition\"), \"test\" is the Python code for unit test for the example, and \"entry_point\" is the function name to be tested."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:36.931255Z",
"iopub.status.busy": "2023-02-24T23:25:36.930838Z",
"iopub.status.idle": "2023-02-24T23:25:39.148799Z",
"shell.execute_reply": "2023-02-24T23:25:39.148113Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Found cached dataset openai_humaneval (/home/vscode/.cache/huggingface/datasets/openai_humaneval/openai_humaneval/1.0.0/2955cebd73602e828fa8c0a424c594e5fab4ec863b316ca98f3d8fdb6a626e75)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "8e08cc907707418a86a3da668e45326b",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/1 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Loading cached shuffled indices for dataset at /home/vscode/.cache/huggingface/datasets/openai_humaneval/openai_humaneval/1.0.0/2955cebd73602e828fa8c0a424c594e5fab4ec863b316ca98f3d8fdb6a626e75/cache-1e8448101c1b32e8.arrow\n"
]
}
],
"source": [
"import datasets\n",
"\n",
"seed = 41\n",
"data = datasets.load_dataset(\"openai_humaneval\", trust_remote_code=True)[\"test\"].shuffle(seed=seed)\n",
"n_tune_data = 20\n",
"tune_data = [\n",
" {\n",
" \"definition\": data[x][\"prompt\"],\n",
" \"test\": data[x][\"test\"],\n",
" \"entry_point\": data[x][\"entry_point\"],\n",
" }\n",
" for x in range(n_tune_data)\n",
"]\n",
"test_data = [\n",
" {\n",
" \"definition\": data[x][\"prompt\"],\n",
" \"test\": data[x][\"test\"],\n",
" \"entry_point\": data[x][\"entry_point\"],\n",
" }\n",
" for x in range(n_tune_data, len(data))\n",
"]\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"Check a tuning example:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:39.152156Z",
"iopub.status.busy": "2023-02-24T23:25:39.151531Z",
"iopub.status.idle": "2023-02-24T23:25:39.155313Z",
"shell.execute_reply": "2023-02-24T23:25:39.154731Z"
},
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"def compare(game,guess):\n",
" \"\"\"I think we all remember that feeling when the result of some long-awaited\n",
" event is finally known. The feelings and thoughts you have at that moment are\n",
" definitely worth noting down and comparing.\n",
" Your task is to determine if a person correctly guessed the results of a number of matches.\n",
" You are given two arrays of scores and guesses of equal length, where each index shows a match. \n",
" Return an array of the same length denoting how far off each guess was. If they have guessed correctly,\n",
" the value is 0, and if not, the value is the absolute difference between the guess and the score.\n",
" \n",
" \n",
" example:\n",
"\n",
" compare([1,2,3,4,5,1],[1,2,3,4,2,-2]) -> [0,0,0,0,3,3]\n",
" compare([0,5,0,0,0,4],[4,1,1,0,0,-2]) -> [4,4,1,0,0,6]\n",
" \"\"\"\n",
"\n"
]
}
],
"source": [
"print(tune_data[1][\"definition\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is one example of the unit test code for verifying the correctness of the generated code:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:39.158398Z",
"iopub.status.busy": "2023-02-24T23:25:39.157766Z",
"iopub.status.idle": "2023-02-24T23:25:39.161396Z",
"shell.execute_reply": "2023-02-24T23:25:39.160797Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"def check(candidate):\n",
"\n",
" # Check some simple cases\n",
" assert candidate([1,2,3,4,5,1],[1,2,3,4,2,-2])==[0,0,0,0,3,3], \"This prints if this assert fails 1 (good for debugging!)\"\n",
" assert candidate([0,0,0,0,0,0],[0,0,0,0,0,0])==[0,0,0,0,0,0], \"This prints if this assert fails 1 (good for debugging!)\"\n",
" assert candidate([1,2,3],[-1,-2,-3])==[2,4,6], \"This prints if this assert fails 1 (good for debugging!)\"\n",
" assert candidate([1,2,3,5],[-1,2,3,4])==[2,0,0,1], \"This prints if this assert fails 1 (good for debugging!)\"\n",
"\n",
" # Check some edge cases that are easy to work out by hand.\n",
" assert True, \"This prints if this assert fails 2 (also good for debugging!)\"\n",
"\n",
"\n"
]
}
],
"source": [
"print(tune_data[1][\"test\"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define Success Metric\n",
"\n",
"Before we start tuning, we need to define the success metric we want to optimize. For each code generation task, we can use the model to generate multiple candidates, and then select one from them. If the final selected response can pass a unit test, we consider the task as successfully solved. Then we can define the mean success rate of a collection of tasks."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:39.164187Z",
"iopub.status.busy": "2023-02-24T23:25:39.163867Z",
"iopub.status.idle": "2023-02-24T23:25:39.169009Z",
"shell.execute_reply": "2023-02-24T23:25:39.168427Z"
}
},
"outputs": [],
"source": [
"from functools import partial\n",
"\n",
"eval_with_generated_assertions = partial(\n",
" autogen.code_utils.eval_function_completions,\n",
" assertions=partial(autogen.code_utils.generate_assertions, config_list=config_list),\n",
" use_docker=False,\n",
" # Please set use_docker=True if you have docker available to run the generated code.\n",
" # Using docker is safer than running the generated code directly.\n",
")\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"This function will first generate assertion statements for each problem. Then, it uses the assertions to select the generated responses.\n",
"\n",
"## Use the tuning data to find a good configuration\n",
"\n",
"FLAML has provided an API for hyperparameter optimization of OpenAI models: `autogen.Completion.tune` and to make a request with the tuned config: `autogen.Completion.create`.\n",
"\n",
"For (local) reproducibility and cost efficiency, we cache responses from OpenAI with a controllable seed."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:40.587815Z",
"iopub.status.busy": "2023-02-24T23:25:40.587283Z",
"iopub.status.idle": "2023-02-24T23:25:40.590826Z",
"shell.execute_reply": "2023-02-24T23:25:40.590158Z"
},
"slideshow": {
"slide_type": "slide"
}
},
"outputs": [],
"source": [
"autogen.Completion.set_cache(seed)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This will create a disk cache in \".cache/{seed}\". You can change `cache_path_root` from \".cache\" to a different path in `set_cache()`. The cache for different seeds are stored separately.\n",
"\n",
"### Perform tuning\n",
"\n",
"The tuning will take a while to finish, depending on the optimization budget. The tuning will be performed under the specified optimization budgets.\n",
"\n",
"* `inference_budget` is the target average inference budget per instance in the benchmark. For example, 0.02 means the target inference budget is 0.02 dollars, which translates to 1000 tokens (input + output combined) if the text Davinci model is used.\n",
"* `optimization_budget` is the total budget allowed to perform the tuning. For example, 5 means 5 dollars are allowed in total, which translates to 250K tokens for the text Davinci model.\n",
"* `num_sumples` is the number of different hyperparameter configurations which is allowed to try. The tuning will stop after either num_samples trials or after optimization_budget dollars spent, whichever happens first. -1 means no hard restriction in the number of trials and the actual number is decided by `optimization_budget`.\n",
"\n",
"Users can specify tuning data, optimization metric, optimization mode, evaluation function, search spaces etc.. The default search space is:\n",
"\n",
"```python\n",
"default_search_space = {\n",
" \"model\": tune.choice([\n",
" \"text-ada-001\",\n",
" \"text-babbage-001\",\n",
" \"text-davinci-003\",\n",
" \"gpt-3.5-turbo\",\n",
" \"gpt-4\",\n",
" ]),\n",
" \"temperature_or_top_p\": tune.choice(\n",
" [\n",
" {\"temperature\": tune.uniform(0, 1)},\n",
" {\"top_p\": tune.uniform(0, 1)},\n",
" ]\n",
" ),\n",
" \"max_tokens\": tune.lograndint(50, 1000),\n",
" \"n\": tune.randint(1, 100),\n",
" \"prompt\": \"{prompt}\",\n",
"}\n",
"```\n",
"\n",
"The default search space can be overridden by users' input.\n",
"For example, the following code specifies three choices for the prompt and two choices of stop sequences. For hyperparameters which don't appear in users' input, the default search space will be used. If you don't have access to gpt-4 or would like to modify the choice of models, you can provide a different search space for model."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:25:40.593603Z",
"iopub.status.busy": "2023-02-24T23:25:40.593269Z",
"iopub.status.idle": "2023-02-24T23:26:38.349191Z",
"shell.execute_reply": "2023-02-24T23:26:38.348392Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001B[32m[I 2023-07-30 04:19:08,150]\u001B[0m A new study created in memory with name: optuna\u001B[0m\n",
"\u001B[32m[I 2023-07-30 04:19:08,153]\u001B[0m A new study created in memory with name: optuna\u001B[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[flaml.tune.tune: 07-30 04:19:08] {805} INFO - trial 1 config: {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-ada-001', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}\n",
"[flaml.tune.tune: 07-30 04:22:35] {197} INFO - result: {'index_selected': 26.0, 'succeed_assertions': 0.0, 'success': 0.0, 'gen_cost': 0.000460625, 'assertions': 'assert vowels_count(\"abcde\") == 2\\nassert vowels_count(\"ACEDY\") == 3', 'total_cost': 0.010514800000000003, 'cost': 0.010514800000000003, 'inference_cost': 0.00023534000000000003, 'training_iteration': 0, 'config': {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-ada-001', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}, 'config/prompt': 1, 'config/stop': 0, 'config/subspace': {'model': 'text-ada-001', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}, 'experiment_tag': 'exp', 'time_total_s': 207.29033374786377}\n",
"[flaml.tune.tune: 07-30 04:22:35] {805} INFO - trial 2 config: {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-babbage-001', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}\n",
"[flaml.tune.tune: 07-30 04:23:18] {197} INFO - result: {'index_selected': 26.0, 'succeed_assertions': 0.0, 'success': 0.0, 'gen_cost': 0.000460625, 'assertions': 'assert vowels_count(\"abcde\") == 2\\nassert vowels_count(\"ACEDY\") == 3', 'total_cost': 0.0300243, 'cost': 0.019509500000000003, 'inference_cost': 0.0009754750000000001, 'training_iteration': 0, 'config': {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-babbage-001', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}, 'config/prompt': 1, 'config/stop': 0, 'config/subspace': {'model': 'text-babbage-001', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}, 'experiment_tag': 'exp', 'time_total_s': 42.417603969573975}\n",
"[flaml.tune.tune: 07-30 04:23:18] {805} INFO - trial 3 config: {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-davinci-003', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}\n",
"[flaml.tune.tune: 07-30 04:24:20] {197} INFO - result: {'index_selected': 2.35, 'succeed_assertions': 0.95, 'success': 0.65, 'gen_cost': 0.000460625, 'assertions': 'assert vowels_count(\"abcde\") == 2\\nassert vowels_count(\"ACEDY\") == 3', 'total_cost': 0.8658043000000002, 'cost': 0.8357800000000002, 'inference_cost': 0.04093000000000001, 'training_iteration': 0, 'config': {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-davinci-003', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}, 'config/prompt': 1, 'config/stop': 0, 'config/subspace': {'model': 'text-davinci-003', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}, 'experiment_tag': 'exp', 'time_total_s': 62.81497287750244}\n",
"[flaml.tune.tune: 07-30 04:24:20] {805} INFO - trial 4 config: {'prompt': 1, 'stop': 0, 'subspace': {'model': 'gpt-3.5-turbo', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}\n",
"[flaml.tune.tune: 07-30 04:25:39] {197} INFO - result: {'index_selected': 13.95, 'succeed_assertions': 0.55, 'success': 0.5, 'gen_cost': 0.000460625, 'assertions': 'assert vowels_count(\"abcde\") == 2\\nassert vowels_count(\"ACEDY\") == 3', 'total_cost': 0.9462703000000001, 'cost': 0.08046600000000001, 'inference_cost': 0.00399515, 'training_iteration': 0, 'config': {'prompt': 1, 'stop': 0, 'subspace': {'model': 'gpt-3.5-turbo', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}, 'config/prompt': 1, 'config/stop': 0, 'config/subspace': {'model': 'gpt-3.5-turbo', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}, 'experiment_tag': 'exp', 'time_total_s': 79.03474521636963}\n",
"[flaml.tune.tune: 07-30 04:25:39] {805} INFO - trial 5 config: {'prompt': 1, 'stop': 0, 'subspace': {'model': 'gpt-4', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}\n",
"[flaml.tune.tune: 07-30 04:25:50] {197} INFO - result: {'success': 0, 'total_cost': 1.0053703, 'cost': 0.0591, 'training_iteration': 0, 'config': {'prompt': 1, 'stop': 0, 'subspace': {'model': 'gpt-4', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}, 'config/prompt': 1, 'config/stop': 0, 'config/subspace': {'model': 'gpt-4', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}, 'experiment_tag': 'exp', 'time_total_s': 10.245523691177368}\n",
"[flaml.tune.tune: 07-30 04:25:50] {828} WARNING - fail to sample a trial for 100 times in a row, stopping.\n"
]
}
],
"source": [
"config, analysis = autogen.Completion.tune(\n",
" data=tune_data, # the data for tuning\n",
" metric=\"success\", # the metric to optimize\n",
" mode=\"max\", # the optimization mode\n",
" eval_func=eval_with_generated_assertions, # the evaluation function to return the success metrics\n",
" # log_file_name=\"logs/humaneval.log\", # the log file name\n",
" inference_budget=0.05, # the inference budget (dollar per instance)\n",
" optimization_budget=1, # the optimization budget (dollar in total)\n",
" # num_samples can further limit the number of trials for different hyperparameter configurations;\n",
" # -1 means decided by the optimization budget only\n",
" num_samples=-1,\n",
" prompt=[\n",
" \"{definition}\",\n",
" \"# Python 3{definition}\",\n",
" \"Complete the following Python function:{definition}\",\n",
" ], # the prompt templates to choose from\n",
" stop=[[\"\\nclass\", \"\\ndef\", \"\\nif\", \"\\nprint\"], None], # the stop sequences\n",
" config_list=endpoint_list, # optional: a list of endpoints to use\n",
" allow_format_str_template=True, # whether to allow format string template\n",
")\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Output tuning results\n",
"\n",
"After the tuning, we can print out the config and the result found by autogen:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:26:38.352710Z",
"iopub.status.busy": "2023-02-24T23:26:38.352378Z",
"iopub.status.idle": "2023-02-24T23:26:38.356939Z",
"shell.execute_reply": "2023-02-24T23:26:38.356217Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"optimized config {'prompt': '# Python 3{definition}', 'stop': ['\\nclass', '\\ndef', '\\nif', '\\nprint'], 'model': 'text-davinci-003', 'max_tokens': 148, 'n': 27, 'top_p': 0.755486898036596}\n",
"best result on tuning data {'index_selected': 2.35, 'succeed_assertions': 0.95, 'success': 0.65, 'gen_cost': 0.000460625, 'assertions': 'assert vowels_count(\"abcde\") == 2\\nassert vowels_count(\"ACEDY\") == 3', 'total_cost': 0.8658043000000002, 'cost': 0.8357800000000002, 'inference_cost': 0.04093000000000001, 'training_iteration': 0, 'config': {'prompt': 1, 'stop': 0, 'subspace': {'model': 'text-davinci-003', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}}, 'config/prompt': 1, 'config/stop': 0, 'config/subspace': {'model': 'text-davinci-003', 'max_tokens': 148, 'temperature_or_top_p': {'top_p': 0.755486898036596}, 'n': 27}, 'experiment_tag': 'exp', 'time_total_s': 62.81497287750244}\n"
]
}
],
"source": [
"print(\"optimized config\", config)\n",
"print(\"best result on tuning data\", analysis.best_result)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"### Make a request with the tuned config\n",
"\n",
"We can apply the tuned config on the request for an example task:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:26:38.359902Z",
"iopub.status.busy": "2023-02-24T23:26:38.359506Z",
"iopub.status.idle": "2023-02-24T23:26:39.343921Z",
"shell.execute_reply": "2023-02-24T23:26:39.343051Z"
},
"slideshow": {
"slide_type": "subslide"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"id\": \"cmpl-7hsFhPX6faeWYaT4y3C7IkQAgNbZR\",\n",
" \"warning\": \"This model version is deprecated. Migrate before January 4, 2024 to avoid disruption of service. Learn more https://platform.openai.com/docs/deprecations\",\n",
" \"object\": \"text_completion\",\n",
" \"created\": 1690691005,\n",
" \"model\": \"text-davinci-003\",\n",
" \"choices\": [\n",
" {\n",
" \"text\": \" results = []\\n for i in range(len(game)):\\n if game[i] == guess[i]:\\n results.append(0)\\n else:\\n results.append(abs(game[i]-guess[i]))\\n return results\",\n",
" \"index\": 0,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 1,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i]-guess[i]))\\n return result\",\n",
" \"index\": 2,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" res = []\\n for i in range(len(game)):\\n res.append(abs(game[i]-guess[i]))\\n return res\",\n",
" \"index\": 3,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 4,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 5,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 6,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" results = []\\n for i in range(len(game)):\\n results.append(abs(game[i] - guess[i]))\\n return results\",\n",
" \"index\": 7,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" res = []\\n for i in range(len(game)):\\n res.append(abs(game[i]-guess[i]))\\n return res\",\n",
" \"index\": 8,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i]-guess[i]))\\n return result\",\n",
" \"index\": 9,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 10,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 11,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n if game[i] == guess[i]:\\n result.append(0)\\n else:\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 12,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" # set up empty list to store differences\\n diff = []\\n # iterate through the game list and guess list\\n for i in range(len(game)):\\n # check if the guess is equal to the game\\n if game[i] == guess[i]:\\n # if so, append 0 to the diff list\\n diff.append(0)\\n # otherwise, calculate the difference between the guess and the game\\n else:\\n diff.append(abs(game[i]-guess[i]))\\n # return the diff list\\n return diff\",\n",
" \"index\": 13,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i]-guess[i]))\\n return result\",\n",
" \"index\": 14,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n if game[i] == guess[i]:\\n result.append(0)\\n else:\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 15,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 16,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 17,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 18,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 19,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 20,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 21,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 22,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" # your code here\\n result = []\\n for i in range(len(game)):\\n if game[i] == guess[i]:\\n result.append(0)\\n else:\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 23,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" results = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n results.append(diff)\\n return results\",\n",
" \"index\": 24,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n diff = abs(game[i] - guess[i])\\n result.append(diff)\\n return result\",\n",
" \"index\": 25,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" },\n",
" {\n",
" \"text\": \" result = []\\n for i in range(len(game)):\\n result.append(abs(game[i] - guess[i]))\\n return result\",\n",
" \"index\": 26,\n",
" \"logprobs\": null,\n",
" \"finish_reason\": \"stop\"\n",
" }\n",
" ],\n",
" \"usage\": {\n",
" \"prompt_tokens\": 243,\n",
" \"completion_tokens\": 1264,\n",
" \"total_tokens\": 1507\n",
" },\n",
" \"cost\": 0.03014,\n",
" \"config_id\": 0,\n",
" \"pass_filter\": true\n",
"}\n",
"{'index_selected': 0, 'succeed_assertions': True, 'success': True, 'gen_cost': 0.000702, 'assertions': 'assert compare([1,2,3,4,5,1],[1,2,3,4,2,-2]) == [0,0,0,0,3,3]\\nassert compare([0,5,0,0,0,4],[4,1,1,0,0,-2]) == [4,4,1,0,0,6]'}\n"
]
}
],
"source": [
"response = autogen.Completion.create(context=tune_data[1], config_list=endpoint_list, **config)\n",
"print(response)\n",
"print(eval_with_generated_assertions(autogen.Completion.extract_text(response), **tune_data[1]))\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Evaluate the success rate on the test data\n",
"\n",
"You can use `autogen.Completion.test` to evaluate the performance of an entire dataset with the tuned config. The following code will take a while to evaluate all the 144 test data instances. The cost is about $6 if you uncomment it and run it."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-24T23:26:39.347295Z",
"iopub.status.busy": "2023-02-24T23:26:39.346994Z",
"iopub.status.idle": "2023-02-24T23:29:27.160335Z",
"shell.execute_reply": "2023-02-24T23:29:27.159519Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"performance on test data with the tuned config: {'index_selected': 5.222222222222222, 'succeed_assertions': 0.8402777777777778, 'success': 0.7569444444444444, 'gen_cost': 0.00044632638888888885, 'cost': 5.704979999999999, 'inference_cost': 0.03961791666666666}\n"
]
}
],
"source": [
"# result = autogen.Completion.test(test_data, config_list=endpoint_list, **config)\n",
"# print(\"performance on test data with the tuned config:\", result)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The result will vary with the inference budget and optimization budget.\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
},
"vscode": {
"interpreter": {
"hash": "949777d72b0d2535278d3dc13498b2535136f6dfe0678499012e853ee9abcab1"
}
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {
"24dd93300e0442788ee6cc1310e5bf14": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"background": null,
"description_width": "",
"font_size": null,
"text_color": null
}
},
"35cd066a31b242bb87b2c106ee72e5f2": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_8e7ee7687a99410d88a98a74ecfcea99",
"IPY_MODEL_421e02a11a974b40b3ddb75382b3b640",
"IPY_MODEL_77db9797e78b49438d21c5c8da34b4cb"
],
"layout": "IPY_MODEL_47d3046236a54b0e8f9ae455a82c7e0b",
"tabbable": null,
"tooltip": null
}
},
"3d5d106a38954af2bb3bde5777702f4e": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"background": null,
"description_width": "",
"font_size": null,
"text_color": null
}
},
"3e1ebb31412443b0bca86a301cbdac11": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"421e02a11a974b40b3ddb75382b3b640": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_e6398d4027c9459a97965b9d91ae484f",
"max": 1,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_3e1ebb31412443b0bca86a301cbdac11",
"tabbable": null,
"tooltip": null,
"value": 1
}
},
"47d3046236a54b0e8f9ae455a82c7e0b": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"754800f7feb04acea977696e4787d1ff": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"77db9797e78b49438d21c5c8da34b4cb": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HTMLView",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_7b6c4e1c11e249409a1edcd63be450d8",
"placeholder": " ",
"style": "IPY_MODEL_3d5d106a38954af2bb3bde5777702f4e",
"tabbable": null,
"tooltip": null,
"value": " 1/1 [00:00<00:00, 44.40it/s]"
}
},
"7b6c4e1c11e249409a1edcd63be450d8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"8e7ee7687a99410d88a98a74ecfcea99": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "2.0.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "2.0.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "2.0.0",
"_view_name": "HTMLView",
"description": "",
"description_allow_html": false,
"layout": "IPY_MODEL_754800f7feb04acea977696e4787d1ff",
"placeholder": " ",
"style": "IPY_MODEL_24dd93300e0442788ee6cc1310e5bf14",
"tabbable": null,
"tooltip": null,
"value": "100%"
}
},
"e6398d4027c9459a97965b9d91ae484f": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "2.0.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "2.0.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "2.0.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border_bottom": null,
"border_left": null,
"border_right": null,
"border_top": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}