MLHyperparameterTuning/01_Training_Script.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Training Script\n",
"In this notebook, we create the training script whose hyperparameters will be tuned. Each of the notebook's code cells is appended in turn to the training script, so it is essential that you run the cells _in order_ for the script to be assembled correctly. \n",
"\n",
"*Note:* If you edit this notebook's cells, be sure to preserve the blank lines at the start and end of the cells, as they prevent the contents of consecutive cells from being improperly concatenated.\n",
"\n",
"The script sections are\n",
"- [import libraries](#import),\n",
"- [define utility functions and classes](#utility),\n",
"- [define the script input parameters](#parameters),\n",
"- [load and prepare the training data](#data),\n",
"- [define the training pipeline](#pipeline),\n",
"- [train the model](#train),\n",
"- [score the test data](#score), and\n",
"- [compute the test data performance](#performance).\n",
"\n",
"[The final cell](#run) runs the script using the training data created by [the first notebook](00_Data_Prep.ipynb).\n",
"\n",
"## Load libraries <a id='import'></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile TrainTestClassifier.py\n",
"\n",
"from __future__ import print_function\n",
"import os\n",
"import warnings\n",
"import logging\n",
"import argparse\n",
"import numpy as np\n",
"import pandas as pd\n",
"import lightgbm as lgb\n",
"from sklearn.feature_extraction import text\n",
"from sklearn.pipeline import Pipeline, FeatureUnion, make_pipeline\n",
"from sklearn.externals import joblib\n",
"from sklearn.base import BaseEstimator, TransformerMixin\n"
]
},
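{
"cell_type": "markdown",
"metadata": {},
"source": [
"The script is assembled with IPython's `%%writefile` cell magic: the cell above creates `TrainTestClassifier.py`, and each later code cell starts with `%%writefile --append`, so its contents are added to the end of the same file. As a rough sketch of that mechanism in plain Python (illustrative only, with a made-up file name; not part of the script):\n",
"```python\n",
"# What the cell magics in this notebook effectively do:\n",
"#   %%writefile TrainTestClassifier.py           -> create/overwrite the file\n",
"#   %%writefile --append TrainTestClassifier.py  -> append to the file\n",
"with open('toy_script.py', 'w') as f:   # like the first code cell: overwrite\n",
"    f.write(\"print('part 1')\\n\")\n",
"with open('toy_script.py', 'a') as f:   # like the later code cells: append\n",
"    f.write(\"print('part 2')\\n\")\n",
"```"
]
},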
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define utility functions and classes <a id='utility'></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
"class ItemSelector(BaseEstimator, TransformerMixin):\n",
" \"\"\"For data grouped by feature, select subset of data at provided\n",
" key(s).\n",
"\n",
" The data are expected to be stored in a 2D data structure, where\n",
" the first index is over features and the second is over samples,\n",
" i.e.\n",
"\n",
" >> len(data[keys]) == n_samples\n",
"\n",
" Please note that this is the opposite convention to scikit-learn\n",
" feature matrixes (where the first index corresponds to sample).\n",
"\n",
" ItemSelector only requires that the collection implement getitem\n",
" (data[keys]). Examples include: a dict of lists, 2D numpy array,\n",
" Pandas DataFrame, numpy record array, etc.\n",
"\n",
" >> data = {'a': [1, 5, 2, 5, 2, 8],\n",
" 'b': [9, 4, 1, 4, 1, 3]}\n",
" >> ds = ItemSelector(key='a')\n",
" >> data['a'] == ds.transform(data)\n",
"\n",
" ItemSelector is not designed to handle data grouped by sample\n",
" (e.g. a list of dicts). If your data are structured this way,\n",
" consider a transformer along the lines of\n",
" `sklearn.feature_extraction.DictVectorizer`.\n",
"\n",
" Parameters\n",
" ----------\n",
" keys : hashable or list of hashable, required\n",
" The key(s) corresponding to the desired value(s) in a mappable.\n",
"\n",
" \"\"\"\n",
"\n",
" def __init__(self, keys):\n",
" self.keys = keys\n",
"\n",
" def fit(self, x, *args, **kwargs):\n",
" if type(self.keys) is list:\n",
" assert all([key in x for key in self.keys]), 'Not all keys in data'\n",
" else:\n",
" assert self.keys in x, 'key not in data'\n",
" return self\n",
"\n",
" def transform(self, data_dict, *args, **kwargs):\n",
" return data_dict[self.keys]\n",
"\n",
" \n",
"def score_rank(scores):\n",
" \"\"\"Compute the ranks of the scores.\"\"\"\n",
" return pd.Series(scores).rank(ascending=False)\n",
"\n",
"\n",
"def label_index(label, label_order):\n",
" \"\"\"Compute the index of label in label_order.\"\"\"\n",
" loc = np.where(label == label_order)[0]\n",
" if loc.shape[0] == 0:\n",
" return None\n",
" return loc[0]\n",
"\n",
"\n",
"def label_rank(label, scores, label_order):\n",
" \"\"\"Compute the rank of label using the scores.\"\"\"\n",
" loc = label_index(label, label_order)\n",
" if loc is None:\n",
" return len(scores) + 1\n",
" return score_rank(scores)[loc]\n",
"\n",
"\n",
"warnings.filterwarnings(action='ignore', category=UserWarning, module='lightgbm')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define the input parameters <a id='parameters'></a>\n",
"One of the most important parameters is `estimators`, the number of boosted trees in the model; it lets you trade off accuracy against training time and model size. The table below should give you an idea of the relationship between the number of estimators and these metrics. The default value is 100.\n",
"\n",
"| Estimators | Run time (s) | Size (MB) | Accuracy@1 | Accuracy@2 | Accuracy@3 |\n",
"|------------|--------------|-----------|------------|------------|------------|\n",
"| 100 | 40 | 2 | 25.02% | 38.72% | 47.83% |\n",
"| 1000 | 177 | 4 | 46.79% | 60.80% | 69.11% |\n",
"| 2000 | 359 | 7 | 51.38% | 65.93% | 73.09% |\n",
"| 4000 | 628 | 12 | 53.39% | 67.40% | 74.74% |\n",
"| 8000 | 904 | 22 | 54,62% | 67.77% | 75.35% |\n",
2018-08-15 21:16:06 +03:00
"\n",
"Other parameters that may be useful to tune include the following:\n",
"* `ngrams`: the maximum n-gram size for features, an integer ranging from 1 (default 1),\n",
"* `min_child_samples`: the minimum number of samples in a leaf, an integer ranging from 1 (default 20),\n",
"* `match`: the maximum number of training examples per duplicate question, an integer ranging from 2 (default 10), and\n",
"* `unweighted`: whether to use sample weights to compensate for unbalanced data, a boolean (default weighted).\n",
"\n",
"The performance of the estimator is evaluated on held-aside test data, and the statistic reported is how far down the sorted list of candidate answers the correct answer is found. The `rank` parameter controls the maximum distance down the list for which the statistic is reported."
]
},
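{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of that rank statistic (using the same logic as the `label_rank` utility defined above, with made-up AnswerIds and scores):\n",
"```python\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"def label_rank(label, scores, label_order):\n",
"    # Rank of the correct label when candidates are sorted by score (1 = best).\n",
"    loc = np.where(label == label_order)[0]\n",
"    if loc.shape[0] == 0:\n",
"        return len(scores) + 1\n",
"    return pd.Series(scores).rank(ascending=False)[loc[0]]\n",
"\n",
"# Three candidate answers in AnswerId order [101, 102, 103]; the model scores\n",
"# them (0.1, 0.7, 0.2), so the correct answer 103 is its 2nd-best guess.\n",
"print(label_rank(103, (0.1, 0.7, 0.2), pd.Series([101, 102, 103])))  # 2.0\n",
"# A rank of 2 counts towards Accuracy@2 and Accuracy@3, but not Accuracy@1.\n",
"```"
]
},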
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
"if __name__ == '__main__':\n",
" \n",
" parser = argparse.ArgumentParser(description='Fit and evaluate a model'\n",
" ' based on train-test datasets.')\n",
" parser.add_argument('--data', help='the training dataset name',\n",
" default='balanced_pairs_train.tsv')\n",
" parser.add_argument('--test', help='the test dataset name',\n",
" default='balanced_pairs_test.tsv')\n",
" parser.add_argument('--estimators',\n",
" help='the number of learner estimators',\n",
" type=int, default=100)\n",
" parser.add_argument('--min_child_samples',\n",
" help='the minimum number of samples in a child(leaf)',\n",
" type=int, default=20)\n",
" parser.add_argument('--ngrams',\n",
" help='the maximum size of word ngrams',\n",
" type=int, default=1)\n",
" parser.add_argument('--match',\n",
" help='the maximum number of duplicate matches',\n",
" type=int, default=20)\n",
" parser.add_argument('--unweighted',\n",
" help='do not use instance weights',\n",
" action='store_true')\n",
" parser.add_argument('--rank',\n",
" help='the maximum rank of correct answers',\n",
" type=int, default=3)\n",
" parser.add_argument('--inputs', help='the inputs directory',\n",
" default='.')\n",
" parser.add_argument('--outputs', help='the outputs directory',\n",
" default='.')\n",
" parser.add_argument('--save', help='save the model',\n",
" action='store_true')\n",
" parser.add_argument('--log', help='the log file',\n",
" default='TrainTestClassifier.log')\n",
" parser.add_argument('--model', help='the model file',\n",
" default='model.pkl')\n",
" parser.add_argument('--instances', help='the instances file',\n",
" default='instances.csv')\n",
" parser.add_argument('--labels', help='the labels file',\n",
" default='labels.csv')\n",
" parser.add_argument('--verbose',\n",
" help='the verbosity of the estimator',\n",
" type=int, default=-1)\n",
" args = parser.parse_args()\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load and prepare the training data <a id='data'></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
" # Set up logging; make sure the log file's directory exists before configuring it.\n",
" outputs_path = args.outputs\n",
" os.makedirs(outputs_path, exist_ok=True)\n",
" log_path = os.path.join(outputs_path, args.log)\n",
" logging.basicConfig(filename=log_path, level=logging.DEBUG)\n",
"\n",
" print('Prepare the training data.')\n",
" logging.info('Prepare the training data.')\n",
" \n",
" # Paths to the input data.\n",
" inputs_path = args.inputs\n",
" data_path = os.path.join(inputs_path, args.data)\n",
" test_path = os.path.join(inputs_path, args.test)\n",
"\n",
" # Paths for the output data.\n",
" outputs_path = args.outputs\n",
" model_path = os.path.join(outputs_path, args.model)\n",
" instances_path = os.path.join(outputs_path, args.instances)\n",
" labels_path = os.path.join(outputs_path, args.labels)\n",
"\n",
" # Create the outputs folder.\n",
" os.makedirs(outputs_path, exist_ok=True)\n",
"\n",
" # Load the data.\n",
" print('Reading {}'.format(data_path))\n",
" logging.info('Reading {}'.format(data_path))\n",
" train = pd.read_csv(data_path, sep='\\t', encoding='latin1')\n",
"\n",
" # Limit the number of training duplicate matches.\n",
" train = train[train.n < args.match]\n",
"\n",
" # Define the input data columns.\n",
" feature_columns = ['Text_x', 'Text_y']\n",
" label_column = 'Label'\n",
" group_column = 'Id_x'\n",
" answerid_column = 'AnswerId_y'\n",
" name_columns = ['Id_x', 'Id_y']\n",
" weight_column = 'Weight'\n",
"\n",
" # Report on the dataset.\n",
" print('train: {:,} rows with {:.2%} matches'.format(\n",
" train.shape[0], train[label_column].mean()))\n",
" logging.info('train: {:,} rows with {:.2%} matches'.format(\n",
" train.shape[0], train[label_column].mean()))\n",
" \n",
" # Compute instance weights.\n",
" if args.unweighted:\n",
" print('No sample weights.')\n",
" logging.info('No sample weights.')\n",
" labels = train[label_column].unique()\n",
" weight = pd.Series([1.0] * labels.shape[0], labels)\n",
" else:\n",
" print('Using sample weights.')\n",
" logging.info('Using sample weights.')\n",
" label_counts = train[label_column].value_counts()\n",
" weight = train.shape[0]/(label_counts.shape[0]*label_counts)\n",
" print(weight)\n",
" logging.info(weight)\n",
" train[weight_column] = train[label_column].apply(lambda x: weight[x])\n",
"\n",
" # Select and format the training data.\n",
" train_X = train[feature_columns]\n",
" train_y = train[label_column]\n",
" sample_weight = train[weight_column]\n",
" groups = train[group_column]\n",
" names = train[name_columns]\n",
" "
]
},
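{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick illustration of the weighting scheme above, with toy label counts rather than the real data: each class's weight is the total number of rows divided by the number of classes times that class's count, so rows of the rarer class count for more.\n",
"```python\n",
"import pandas as pd\n",
"\n",
"label_counts = pd.Series({0: 8, 1: 2})   # toy counts: 8 non-matching pairs, 2 matching pairs\n",
"weight = 10 / (2 * label_counts)         # total rows / (number of classes * count)\n",
"print(weight)                            # class 0 -> 0.625, class 1 -> 2.5\n",
"```"
]
},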
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define the featurization and estimator <a id='pipeline'></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
" print('Define the model pipeline.')\n",
" logging.info('Define the model pipeline.')\n",
"\n",
" # Select the training hyperparameters.\n",
" n_estimators = args.estimators\n",
" min_child_samples = args.min_child_samples\n",
" if args.ngrams > 0:\n",
" ngram_range = (1, args.ngrams)\n",
" else:\n",
" ngram_range = None\n",
"\n",
" # Verify that the hyperparameter values are valid.\n",
" assert n_estimators > 0\n",
" assert min_child_samples > 1\n",
" assert ngram_range is not None\n",
" assert type(ngram_range) is tuple and len(ngram_range) == 2\n",
" assert ngram_range[0] > 0 and ngram_range[0] <= ngram_range[1]\n",
"\n",
" # Define the featurization pipeline.\n",
" featurization = [\n",
" (column,\n",
" make_pipeline(ItemSelector(column),\n",
" text.TfidfVectorizer(ngram_range=ngram_range)))\n",
" for column in feature_columns]\n",
" features = FeatureUnion(featurization)\n",
"\n",
" # Define the estimator.\n",
" estimator = lgb.LGBMClassifier(n_estimators=n_estimators,\n",
" min_child_samples=min_child_samples,\n",
" verbose=args.verbose)\n",
"\n",
" # Put them together into the model pipeline.\n",
" model = Pipeline([\n",
" ('features', features),\n",
" ('model', estimator)\n",
" ])\n",
" \n",
" # Report the featurization.\n",
" print('Estimators={:,}'.format(n_estimators))\n",
" print('Ngram range={}'.format(ngram_range))\n",
" print('Min child samples={}'.format(min_child_samples))\n",
" logging.info('Estimators={:,}'.format(n_estimators))\n",
" logging.info('Ngram range={}'.format(ngram_range))\n",
" logging.info('Min child samples={}'.format(min_child_samples))\n",
" "
]
},
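{
"cell_type": "markdown",
"metadata": {},
"source": [
"The featurization above fits a separate TF-IDF vectorizer to each of the two text columns (each selected with an `ItemSelector`) and concatenates the resulting matrices with `FeatureUnion` before they reach the LightGBM classifier. A self-contained sketch of the same idea on toy data, substituting scikit-learn's `FunctionTransformer` for the script's `ItemSelector`:\n",
"```python\n",
"import pandas as pd\n",
"from sklearn.feature_extraction import text\n",
"from sklearn.pipeline import FeatureUnion, make_pipeline\n",
"from sklearn.preprocessing import FunctionTransformer\n",
"\n",
"df = pd.DataFrame({'Text_x': ['how do i sort a list', 'read a csv file'],\n",
"                   'Text_y': ['sorting lists in python', 'pandas read_csv usage']})\n",
"features = FeatureUnion([\n",
"    (col, make_pipeline(FunctionTransformer(lambda X, c=col: X[c], validate=False),\n",
"                        text.TfidfVectorizer()))\n",
"    for col in ['Text_x', 'Text_y']])\n",
"# One sparse row per question pair; columns are both columns' vocabularies side by side.\n",
"print(features.fit_transform(df).shape)\n",
"```"
]
},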
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Train the model <a id='train'></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
" print('Fitting the model.')\n",
" logging.info('Fitting the model.')\n",
"\n",
" # Fit the model.\n",
" model.fit(train_X, train_y, model__sample_weight=sample_weight)\n",
" \n",
" # Collect the ordered label for computing scores.\n",
" labels = sorted(train[answerid_column].unique())\n",
" label_order = pd.DataFrame({'label': labels})\n",
"\n",
" # Write the model to file.\n",
" if args.save:\n",
" print('Saving the model to {}'.format(model_path))\n",
" logging.info('Saving the model to {}'.format(model_path))\n",
" joblib.dump(model, model_path)\n",
" print('{}: {:.2f} MB'.format(\n",
" model_path, os.path.getsize(model_path)/(2**20)))\n",
" logging.info('{}: {:.2f} MB'.format(\n",
" model_path, os.path.getsize(model_path)/(2**20)))\n",
" print('Saving the labels to {}'.format(labels_path))\n",
" logging.info('Saving the labels to {}'.format(labels_path))\n",
" label_order.to_csv(labels_path, sep='\\t', index=False)\n",
" "
]
},
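{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that `model__sample_weight` in the cell above uses scikit-learn's `<step name>__<parameter>` convention for `Pipeline.fit` parameters: the `model__` prefix routes the weights to the `fit` method of the pipeline's `'model'` step (the LightGBM classifier) rather than to the featurization. A small sketch of the same routing with a toy pipeline:\n",
"```python\n",
"from sklearn.feature_extraction.text import TfidfVectorizer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.pipeline import Pipeline\n",
"\n",
"toy = Pipeline([('features', TfidfVectorizer()),\n",
"                ('model', LogisticRegression())])\n",
"X = ['red apple', 'green apple', 'blue sky', 'grey sky']\n",
"y = [0, 0, 1, 1]\n",
"# The 'model__' prefix sends sample_weight to LogisticRegression.fit.\n",
"toy.fit(X, y, model__sample_weight=[1.0, 1.0, 2.0, 2.0])\n",
"```"
]
},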
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Score the test data using the model <a id='score'></a>\n",
"This produces a dataframe with one row per duplicate question; each row contains the question's Id, its correct AnswerId, its text, and a tuple of match probabilities, one for each candidate original question in AnswerId order."
]
},
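{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of the reshaping done in the next cell (with made-up Ids and probabilities), the per-pair probabilities are grouped by the duplicate question's Id and collected into one tuple per duplicate question:\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# Two duplicate questions (Id_x 1 and 2), each scored against two candidates.\n",
"test = pd.DataFrame({'Id_x': [1, 1, 2, 2],\n",
"                     'AnswerId_y': [10, 20, 10, 20],\n",
"                     'probabilities': [0.9, 0.1, 0.3, 0.7]})\n",
"probabilities = (test.probabilities\n",
"                 .groupby(test['Id_x'], sort=False)\n",
"                 .apply(lambda x: tuple(x.values)))\n",
"print(probabilities)   # Id_x 1 -> (0.9, 0.1), Id_x 2 -> (0.3, 0.7)\n",
"```"
]
},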
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
" print('Scoring the test data.')\n",
" logging.info('Scoring the test data.')\n",
"\n",
" # Read the test data.\n",
" print('Reading {}'.format(test_path))\n",
" logging.info('Reading {}'.format(test_path))\n",
" test = pd.read_csv(test_path, sep='\\t', encoding='latin1')\n",
" print('test: {:,} rows with {:.2%} matches'.format(\n",
" test.shape[0], test[label_column].mean()))\n",
" logging.info('test: {:,} rows with {:.2%} matches'.format(\n",
" test.shape[0], test[label_column].mean()))\n",
"\n",
" # Collect the model predictions.\n",
" test_X = test[feature_columns]\n",
" test['probabilities'] = model.predict_proba(test_X)[:, 1]\n",
"\n",
" # Order the testing data by dupe Id and question AnswerId.\n",
" test.sort_values([group_column, answerid_column], inplace=True)\n",
"\n",
" # Extract the ordered probabilities.\n",
" probabilities = (\n",
" test.probabilities\n",
" .groupby(test[group_column], sort=False)\n",
" .apply(lambda x: tuple(x.values)))\n",
"\n",
" # Get the individual records.\n",
" output_columns_x = ['Id_x', 'AnswerId_x', 'Text_x']\n",
" test_score = (test[output_columns_x]\n",
" .drop_duplicates()\n",
" .set_index(group_column))\n",
" test_score['probabilities'] = probabilities\n",
" test_score.reset_index(inplace=True)\n",
" test_score.columns = ['Id', 'AnswerId', 'Text', 'probabilities']\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Report the model's performance statistics on the test data <a id='performance'></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%writefile --append TrainTestClassifier.py\n",
"\n",
" print(\"Evaluating the model's performance.\")\n",
" logging.info(\"Evaluating the model's performance.\")\n",
" \n",
" # Collect the ordered AnswerId for computing the scores.\n",
" labels = sorted(train[answerid_column].unique())\n",
" label_order = pd.DataFrame({'label': labels})\n",
"\n",
" # Rank the correct answers.\n",
" test_score['Ranks'] = test_score.apply(lambda x:\n",
" label_rank(x.AnswerId,\n",
" x.probabilities,\n",
" label_order.label),\n",
" axis=1)\n",
"\n",
" # Compute the number of correctly ranked answers\n",
" for i in range(1, args.rank+1):\n",
" print('Accuracy @{} = {:.2%}'.format(\n",
" i, (test_score['Ranks'] <= i).mean()))\n",
" logging.info('Accuracy @{} = {:.2%}'.format(\n",
" i, (test_score['Ranks'] <= i).mean()))\n",
" mean_rank = test_score['Ranks'].mean()\n",
" print('Mean Rank {:.4f}'.format(mean_rank))\n",
" logging.info('Mean Rank {:.4f}'.format(mean_rank))\n",
"\n",
" # Write the scored instances.\n",
" if args.save:\n",
" print('Saving the scored instances to {}'.format(instances_path))\n",
" logging.info('Saving the scored instances to {}'.format(instances_path)) \n",
" test_score.to_csv(instances_path, sep='\\t', index=False,\n",
" encoding='latin1')\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run the script to see that it works <a id='run'></a>\n",
"This should take around five minutes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%run -t TrainTestClassifier.py --estimators 1000 --match 5 --ngrams 2 --min_child_samples 10"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In [the next notebook](02_Docker_Image.ipynb), we create the docker image used by Batch AI to run the training script."
2018-08-15 21:16:06 +03:00
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python [conda env:MLBatchAIHyperparameterTuning]",
"language": "python",
"name": "conda-env-MLBatchAIHyperparameterTuning-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}