* Initial refactor

- All python scripts under utils_cv are clean
- All files under classification (notebooks and tools-scripts)
  and tests need fixes

* refactor gpu util

* datapath

* test refactor

* widgets

* fix tests; fix 00, 10, 21 notebooks

* Update 22 notebook. Add nbconverted files

* python version

* Root readme.md

* faq

* change test filenames, otherwise they fail

* conftest update

* Fix result widget import

* Fix 01 notebook widget import

* Refactor widget.py and misc.py

* azure pipelines update

* azure pipelines update

* update tests to skip deployment notebook

* Removed links to services + added a small comment on existing AKS compute resources
This commit is contained in:
Jun Ki Min 2019-04-25 15:35:25 -04:00 committed by GitHub
Parent 1539654541
Commit 446dd77e61
109 changed files: 3453 additions and 3304 deletions

View file

@ -34,13 +34,12 @@ steps:
- bash: |
source deactivate cvbp
conda remove -q -n cvbp --all -y
conda env create -f image_classification/environment.yml
conda env create -f classification/environment.yml
conda env list
source activate cvbp
displayName: 'Build Configuration'
- bash: |
cd image_classification
source activate cvbp
python -m ipykernel install --user --name cvbp --display-name "cvbp"
pytest tests/unit --junitxml=junit/test-unitttest.xml

View file

@ -23,13 +23,12 @@ steps:
- bash: |
source deactivate cvbp
conda remove -q -n cvbp --all -y
conda env create -f image_classification/environment.yml
conda env create -f classification/environment.yml
conda env list
source activate cvbp
displayName: 'Build Configuration'
- bash: |
cd image_classification
source activate cvbp
python -m ipykernel install --user --name cvbp --display-name "cvbp"
pytest tests/unit --junitxml=junit/test-unitttest.xml

16
.gitignore vendored
View file

@ -90,6 +90,9 @@ ENV/
env.bak/
venv.bak/
# IDE
.idea/
# Spyder project settings
.spyderproject
.spyproject
@ -117,3 +120,16 @@ image_classification/data/*
# don't save .csv files
*.csv
# don't save data dir
data
# don't save pickles
*.pkl
# aml notebooks outputs
*/notebooks/aml_config*
*/notebooks/azureml-models
*/notebooks/myenv.yml
*/notebooks/outputs
*/notebooks/score.py

View file

@ -36,19 +36,19 @@ Most applications in Computer Vision fall into one of these 4 categories:
- **Image classification**: Given an input image, predict what objects are present. This is typically the easiest CV problem to solve; however, it requires objects to be reasonably large in the image.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="https://cvbp.blob.core.windows.net/public/images/document_images/intro_ic_vis.jpg" height="150" alt="Image classification visualization"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="./media/intro_ic_vis.jpg" height="150" alt="Image classification visualization"/>
- **Object Detection**: Given an input image, predict what objects are present and where the objects are (using rectangular coordinates). Object detection approaches work even if the object is small. However, model training takes longer than for image classification, and manually annotating images is more time-consuming, as both labels and rectangular coordinates must be provided.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="https://cvbp.blob.core.windows.net/public/images/document_images/intro_od_vis.jpg" height="150" alt="Object detect visualization"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="./media/intro_od_vis.jpg" height="150" alt="Object detect visualization"/>
- **Image Similarity** Given an input image, find all similar images in a reference dataset. Here, rather than predicting a label or a rectangle, the task is to sort a reference dataset by their similarity to the query image.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="https://cvbp.blob.core.windows.net/public/images/document_images/intro_is_vis.jpg" height="150" alt="Image similarity visualization"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="./media/intro_is_vis.jpg" height="150" alt="Image similarity visualization"/>
- **Image Segmentation** Given an input image, assign a label to all pixels, e.g. background, bottle, hand, sky, etc. In practice, this problem is less common in industry, in large part due to the (time-consuming to annotate) ground truth segmentation required during training.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="https://cvbp.blob.core.windows.net/public/images/document_images/intro_iseg_vis.jpg" height="150" alt="Image segmentation visualization"/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img align="center" src="./media/intro_iseg_vis.jpg" height="150" alt="Image segmentation visualization"/>
## Contributing

[Binary file diffs omitted: four media images with unchanged sizes (314 KiB, 173 KiB, 81 KiB, 636 KiB) and three files whose diffs are not shown.]

View file

@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [
{
@ -62,7 +62,7 @@
],
"source": [
"import sys\n",
"sys.path.append(\"../\")\n",
"sys.path.append(\"../../\")\n",
"import io\n",
"import os\n",
"import time\n",
@ -73,12 +73,10 @@
"from ipywebrtc import CameraStream, ImageRecorder\n",
"from ipywidgets import HBox, Label, Layout, Widget\n",
"\n",
"from utils_ic.common import data_path\n",
"from utils_ic.constants import IMAGENET_IM_SIZE\n",
"from utils_ic.datasets import imagenet_labels\n",
"from utils_ic.gpu_utils import which_processor\n",
"from utils_ic.imagenet_models import model_to_learner\n",
"\n",
"from utils_cv.common.data import data_path\n",
"from utils_cv.common.gpu import which_processor\n",
"from utils_cv.classification.data import imagenet_labels\n",
"from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner\n",
"\n",
"print(f\"Fast.ai: {fastai.__version__}\")\n",
"which_processor()"
@ -186,7 +184,7 @@
"output_type": "stream",
"text": [
"Predicted label: coffee_mug (conf = 0.68)\n",
"Took 3.1665852069854736 sec\n"
"Took 1.5879731178283691 sec\n"
]
}
],
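
For orientation, the import hunks above move the webcam notebook from the old `utils_ic` helpers to the refactored `utils_cv` package. The sketch below collects the new imports into one block; the `model_to_learner` and `imagenet_labels` calls at the end are assumptions about how the notebook uses them (their invocations are not part of the hunks shown here).

```python
# Refactored imports for the webcam notebook (module paths taken from the diff above).
import sys
sys.path.append("../../")  # reach the repo root, where the utils_cv package lives

import fastai
from fastai.vision import models

from utils_cv.common.data import data_path
from utils_cv.common.gpu import which_processor
from utils_cv.classification.data import imagenet_labels
from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner

print(f"Fast.ai: {fastai.__version__}")
which_processor()  # reports whether fastai will run on GPU or CPU

# Assumed usage (not shown in the hunks): wrap an ImageNet-pretrained ResNet18
# in a fastai Learner and keep the ImageNet label list for display.
labels = imagenet_labels()
learn = model_to_learner(models.resnet18(pretrained=True), IMAGENET_IM_SIZE)
```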

File diff suppressed because one or more lines are too long

View file

@ -69,7 +69,7 @@
{
"data": {
"text/plain": [
"'1.0.48'"
"'1.0.47'"
]
},
"execution_count": 1,
@ -92,9 +92,7 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"%reload_ext autoreload\n",
@ -112,16 +110,15 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"sys.path.append(\"../\")\n",
"sys.path.append(\"../../\")\n",
"import os\n",
"from pathlib import Path\n",
"from utils_ic.datasets import downsize_imagelist, unzip_url, Urls\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url\n",
"from fastai.vision import *\n",
"from fastai.metrics import accuracy"
]
@ -155,7 +152,6 @@
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true,
"tags": [
"parameters"
]
@ -185,9 +181,7 @@
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"assert MODEL_TYPE in [\"high_accuracy\", \"fast_inference\", \"small_size\"]"
@ -203,9 +197,7 @@
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"if MODEL_TYPE == \"high_accuracy\":\n",
@ -228,7 +220,10 @@
"### Pre-processing <a name=\"preprocessing\"></a>\n",
"\n",
"JPEG decoding represents a bottleneck on systems with powerful GPUs and can slow training significantly, often by a factor of 2-3x, and sometimes by much more. We therefore recommend creating a down-sized copy of the dataset if training otherwise takes too long, or if running training multiple times e.g. to evaluate different parameters. After running the following function, update the `DATA_PATH` variable (to `out_dir`) so that this notebook uses the resized images. \n",
"\n",
"```python\n",
"from utils_cv.classification.data import downsize_imagelist\n",
"\n",
"downsize_imagelist(im_list = ImageList.from_folder(Path(DATA_PATH)),\n",
" out_dir = \"downsized_images\", \n",
" max_dim = IM_SIZE)\n",
@ -254,9 +249,7 @@
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"data = (ImageList.from_folder(Path(DATA_PATH)) \n",
@ -277,9 +270,7 @@
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"learn = cnn_learner(data, ARCHITECTURE, metrics=accuracy)"
@ -300,7 +291,7 @@
{
"data": {
"text/html": [
"Total time: 02:54 <p><table border=\"1\" class=\"dataframe\">\n",
"Total time: 00:06 <p><table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: left;\">\n",
" <th>epoch</th>\n",
@ -313,31 +304,31 @@
" <tbody>\n",
" <tr>\n",
" <td>0</td>\n",
" <td>1.673840</td>\n",
" <td>1.403416</td>\n",
" <td>0.230769</td>\n",
" <td>01:03</td>\n",
" <td>1.786654</td>\n",
" <td>1.476271</td>\n",
" <td>0.346154</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>1</td>\n",
" <td>1.744148</td>\n",
" <td>1.255747</td>\n",
" <td>0.538462</td>\n",
" <td>00:36</td>\n",
" <td>1.744271</td>\n",
" <td>1.430859</td>\n",
" <td>0.307692</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>2</td>\n",
" <td>1.568543</td>\n",
" <td>1.174403</td>\n",
" <td>0.538462</td>\n",
" <td>00:35</td>\n",
" <td>1.714430</td>\n",
" <td>1.391601</td>\n",
" <td>0.346154</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>3</td>\n",
" <td>1.493223</td>\n",
" <td>1.179191</td>\n",
" <td>0.538462</td>\n",
" <td>00:39</td>\n",
" <td>1.686874</td>\n",
" <td>1.413052</td>\n",
" <td>0.307692</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
@ -364,9 +355,7 @@
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"learn.unfreeze()"
@ -387,7 +376,7 @@
{
"data": {
"text/html": [
"Total time: 06:51 <p><table border=\"1\" class=\"dataframe\">\n",
"Total time: 00:19 <p><table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: left;\">\n",
" <th>epoch</th>\n",
@ -400,87 +389,87 @@
" <tbody>\n",
" <tr>\n",
" <td>0</td>\n",
" <td>1.339784</td>\n",
" <td>1.112754</td>\n",
" <td>0.500000</td>\n",
" <td>00:40</td>\n",
" <td>1.623785</td>\n",
" <td>1.409176</td>\n",
" <td>0.307692</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>1</td>\n",
" <td>1.279583</td>\n",
" <td>0.972493</td>\n",
" <td>0.653846</td>\n",
" <td>00:36</td>\n",
" <td>1.457880</td>\n",
" <td>1.230384</td>\n",
" <td>0.423077</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>2</td>\n",
" <td>1.207357</td>\n",
" <td>0.755614</td>\n",
" <td>1.346284</td>\n",
" <td>0.825346</td>\n",
" <td>0.769231</td>\n",
" <td>00:32</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>3</td>\n",
" <td>1.115637</td>\n",
" <td>0.502069</td>\n",
" <td>1.222301</td>\n",
" <td>0.543954</td>\n",
" <td>0.884615</td>\n",
" <td>00:30</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>4</td>\n",
" <td>0.933775</td>\n",
" <td>0.370555</td>\n",
" <td>0.923077</td>\n",
" <td>00:30</td>\n",
" <td>1.059379</td>\n",
" <td>0.393587</td>\n",
" <td>0.961538</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>5</td>\n",
" <td>0.824386</td>\n",
" <td>0.331390</td>\n",
" <td>0.920777</td>\n",
" <td>0.315344</td>\n",
" <td>0.961538</td>\n",
" <td>00:34</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>6</td>\n",
" <td>0.724463</td>\n",
" <td>0.270143</td>\n",
" <td>0.961538</td>\n",
" <td>00:39</td>\n",
" <td>0.807599</td>\n",
" <td>0.258829</td>\n",
" <td>1.000000</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>7</td>\n",
" <td>0.651404</td>\n",
" <td>0.249424</td>\n",
" <td>0.961538</td>\n",
" <td>00:35</td>\n",
" <td>0.712808</td>\n",
" <td>0.239849</td>\n",
" <td>1.000000</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>8</td>\n",
" <td>0.576793</td>\n",
" <td>0.247248</td>\n",
" <td>0.961538</td>\n",
" <td>00:32</td>\n",
" <td>0.634236</td>\n",
" <td>0.231437</td>\n",
" <td>1.000000</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>9</td>\n",
" <td>0.509830</td>\n",
" <td>0.242544</td>\n",
" <td>0.961538</td>\n",
" <td>00:30</td>\n",
" <td>0.570075</td>\n",
" <td>0.237903</td>\n",
" <td>1.000000</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>10</td>\n",
" <td>0.454136</td>\n",
" <td>0.246468</td>\n",
" <td>0.961538</td>\n",
" <td>00:30</td>\n",
" <td>0.511892</td>\n",
" <td>0.240423</td>\n",
" <td>1.000000</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" <tr>\n",
" <td>11</td>\n",
" <td>0.418457</td>\n",
" <td>0.240774</td>\n",
" <td>0.961538</td>\n",
" <td>00:37</td>\n",
" <td>0.470356</td>\n",
" <td>0.234572</td>\n",
" <td>1.000000</td>\n",
" <td>00:01</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
@ -534,7 +523,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy on validation set: 0.9615384340286255\n"
"Accuracy on validation set: 1.0\n"
]
}
],
@ -555,9 +544,7 @@
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"im = open_image(f\"{(Path(DATA_PATH)/learn.data.classes[0]).ls()[0]}\")"
@ -572,7 +559,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"61.9 ms ± 3.02 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
"12.6 ms ± 375 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
]
}
],
@ -593,9 +580,7 @@
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"learn.export(f\"{MODEL_TYPE}\")"
@ -996,7 +981,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python (cvbp)",
"display_name": "cvbp",
"language": "python",
"name": "cvbp"
},
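
A minimal sketch of the relocated data-download helpers touched in this notebook; the `unzip_url(Urls.fridge_objects_path, exist_ok=True)` call matches the one visible in the nbconverted script later in this commit.

```python
# Refactored data-download imports (module paths from the diff above).
import sys
sys.path.append("../../")

from utils_cv.classification.data import Urls   # dataset URLs now live here
from utils_cv.common.data import unzip_url      # generic download/unzip helper now lives here

# Download and unzip the FridgeObjects dataset, skipping the download if it already exists.
DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)
print(DATA_PATH)
```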

View file

@ -41,15 +41,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"sys.path.append(\"../\")\n",
"from utils_ic.anno_utils import AnnotationWidget\n",
"from utils_ic.datasets import unzip_url, Urls"
"sys.path.append(\"../../\")\n",
"\n",
"from utils_cv.classification.widget import AnnotationWidget\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url"
]
},
{
@ -72,7 +74,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Using images in directory: C:\\Users\\pabuehle\\Desktop\\ComputerVisionBestPractices\\image_classification\\data\\fridgeObjectsTiny\\can.\n"
"Using images in directory: C:\\Users\\jumin\\git\\cvbp\\data\\fridgeObjectsTiny\\can.\n"
]
}
],
@ -99,7 +101,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a9d0446b6504413b91a805cf57d14b0f",
"model_id": "b2febdd461004c6485bdbcf0bf323456",
"version_major": 2,
"version_minor": 0
},
@ -139,7 +141,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting example_annotation.csv\n"
"Writing example_annotation.csv\n"
]
}
],
@ -244,15 +246,15 @@
"x: ImageList\n",
"Image (3, 665, 499),Image (3, 665, 499),Image (3, 665, 499)\n",
"y: MultiCategoryList\n",
"can,can;carton,carton;milk_bottle\n",
"Path: C:\\Users\\pabuehle\\Desktop\\ComputerVisionBestPractices\\image_classification\\data\\fridgeObjectsTiny\\can;\n",
"can;carton,carton;milk_bottle,can\n",
"Path: C:\\Users\\jumin\\git\\cvbp\\data\\fridgeObjectsTiny\\can;\n",
"\n",
"Valid: LabelList (3 items)\n",
"x: ImageList\n",
"Image (3, 665, 499),Image (3, 665, 499),Image (3, 665, 499)\n",
"y: MultiCategoryList\n",
"can,carton,can\n",
"Path: C:\\Users\\pabuehle\\Desktop\\ComputerVisionBestPractices\\image_classification\\data\\fridgeObjectsTiny\\can;\n",
"Path: C:\\Users\\jumin\\git\\cvbp\\data\\fridgeObjectsTiny\\can;\n",
"\n",
"Test: None\n"
]
@ -260,7 +262,9 @@
],
"source": [
"import pandas as pd\n",
"from fastai.vision import ImageList,ImageDataBunch\n",
"\n",
"from fastai.vision import ImageList, ImageDataBunch\n",
"\n",
"\n",
"# Load annotation, discard excluded images, and convert to format fastai expects\n",
"data = []\n",
@ -279,11 +283,18 @@
" .label_from_df(cols='label', label_delim=','))\n",
"print(data)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (cvbp)",
"display_name": "cvbp",
"language": "python",
"name": "cvbp"
},
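
The annotation notebook now pulls `AnnotationWidget` from `utils_cv.classification.widget` and converts its CSV output into a fastai multi-label dataset via `.label_from_df(cols='label', label_delim=',')`, as the hunk above shows. The sketch below is purely illustrative: the DataFrame rows, column names and directory path are invented for the example and are not part of this commit.

```python
# Hypothetical sketch of loading multi-label annotations with fastai, following the
# label_from_df(cols='label', label_delim=',') call shown in the diff above.
# The DataFrame contents, column names and path below are illustrative assumptions.
import pandas as pd
from fastai.vision import ImageList

df = pd.DataFrame(
    [("can/im1.jpg", "can"), ("carton/im2.jpg", "carton,milk_bottle")],
    columns=["name", "label"],
)

data = (
    ImageList.from_df(df, path="data/fridgeObjectsTiny", cols="name")
    .split_by_rand_pct(valid_pct=0.5)
    .label_from_df(cols="label", label_delim=",")
)
print(data)
```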

File diff suppressed because one or more lines are too long

View file

@ -68,7 +68,7 @@
"For this notebook to run properly on our machine, the following should already be in place:\n",
"\n",
"* Local machine setup\n",
" * We need to set up the \"cvbp\" conda environment. [These instructions](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/README.md#getting-started) explain how to do that.\n",
" * We need to set up the \"cvbp\" conda environment. [These instructions](https://github.com/Microsoft/ComputerVision/blob/master/classification/README.md#getting-started) explain how to do that.\n",
"\n",
"\n",
"* Azure subscription setup\n",
@ -113,14 +113,13 @@
"from azureml.exceptions import ProjectSystemException, UserErrorException\n",
"\n",
"# Computer Vision repository\n",
"sys.path.extend([\".\", \"..\", \"../..\"])\n",
"sys.path.extend([\".\", \"../..\"])\n",
"# This \"sys.path.extend()\" statement allows us to move up the directory hierarchy \n",
"# and access the utils_ic and utils_cv packages\n",
"from utils_cv.generate_deployment_env import generate_yaml\n",
"from utils_ic.common import data_path, ic_root_path\n",
"from utils_ic.constants import IMAGENET_IM_SIZE\n",
"from utils_ic.image_conversion import ims2strlist\n",
"from utils_ic.imagenet_models import model_to_learner"
"from utils_cv.common.deployment import generate_yaml\n",
"from utils_cv.common.data import data_path, root_path \n",
"from utils_cv.common.image import ims2strlist\n",
"from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner"
]
},
{
@ -172,8 +171,8 @@
"# Let's define these variables here - These pieces of information can be found on the portal\n",
"subscription_id = os.getenv(\"SUBSCRIPTION_ID\", default=\"<our_subscription_id>\")\n",
"resource_group = os.getenv(\"RESOURCE_GROUP\", default=\"<our_resource_group>\")\n",
"workspace_name = os.getenv(\"WORKSPACE_NAME\", default=\"<our_workspace_name>\") # (e.g. \"myworkspace\")\n",
"workspace_region = os.getenv(\"WORKSPACE_REGION\", default=\"<our_workspace_region>\") # (e.g. \"westus2\") \n",
"workspace_name = os.getenv(\"WORKSPACE_NAME\", default=\"<our_workspace_name>\")\n",
"workspace_region = os.getenv(\"WORKSPACE_REGION\", default=\"<our_workspace_region>\")\n",
"\n",
"try:\n",
" # Let's load the workspace from the configuration file\n",
@ -248,7 +247,7 @@
"source": [
"## 5. Model retrieval and export <a id=\"model\"></a>\n",
"\n",
"For demonstration purposes, we will use here a ResNet18 model, pretrained on ImageNet. The following steps would be the same if we had trained a model locally (cf. [**01_training_introduction.ipynb**](https://github.com/Microsoft/ComputerVisionBestPractices/blob/staging/image_classification/notebooks/01_training_introduction.ipynb) notebook for details).\n",
"For demonstration purposes, we will use here a ResNet18 model, pretrained on ImageNet. The following steps would be the same if we had trained a model locally (cf. [**01_training_introduction.ipynb**](01_training_introduction.ipynb) notebook for details).\n",
"\n",
"Let's first retrieve the model."
]
@ -414,7 +413,7 @@
{
"data": {
"text/plain": [
"<azureml._restclient.models.batch_artifact_content_information_dto.BatchArtifactContentInformationDto at 0x12b965b39b0>"
"<azureml._restclient.models.batch_artifact_content_information_dto.BatchArtifactContentInformationDto at 0x2a334711860>"
]
},
"execution_count": 10,
@ -478,8 +477,8 @@
"text": [
"Model:\n",
" --> Name: im_classif_resnet18\n",
" --> ID: im_classif_resnet18:68\n",
" --> Path:azureml-models\\im_classif_resnet18\\68\\im_classif_resnet18.pkl\n"
" --> ID: im_classif_resnet18:76\n",
" --> Path:azureml-models\\im_classif_resnet18\\76\\im_classif_resnet18.pkl\n"
]
}
],
@ -568,11 +567,11 @@
{
"data": {
"text/html": [
"<table style=\"width:100%\"><tr><th>Experiment</th><th>Id</th><th>Type</th><th>Status</th><th>Details Page</th><th>Docs Page</th></tr><tr><td>image-classifier-webservice</td><td>ff4a55d5-1916-4131-ac92-8658789c4f8f</td><td></td><td>Completed</td><td><a href=\"https://mlworkspace.azure.ai/portal/subscriptions/b8c23406-f9b5-4ccb-8a65-a8cb5dcd6a5a/resourceGroups/alteste-rg/providers/Microsoft.MachineLearningServices/workspaces/ws2_tutorials2/experiments/image-classifier-webservice/runs/ff4a55d5-1916-4131-ac92-8658789c4f8f\" target=\"_blank\" rel=\"noopener\">Link to Azure Portal</a></td><td><a href=\"https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run.Run?view=azure-ml-py\" target=\"_blank\" rel=\"noopener\">Link to Documentation</a></td></tr></table>"
"<table style=\"width:100%\"><tr><th>Experiment</th><th>Id</th><th>Type</th><th>Status</th><th>Details Page</th><th>Docs Page</th></tr><tr><td>image-classifier-webservice</td><td>39ddcc11-40dc-455d-89a3-2311424df1a5</td><td></td><td>Completed</td><td><a href=\"https://mlworkspace.azure.ai/portal/subscriptions/b8c23406-f9b5-4ccb-8a65-a8cb5dcd6a5a/resourceGroups/alteste-rg/providers/Microsoft.MachineLearningServices/workspaces/ws2_tutorials2/experiments/image-classifier-webservice/runs/39ddcc11-40dc-455d-89a3-2311424df1a5\" target=\"_blank\" rel=\"noopener\">Link to Azure Portal</a></td><td><a href=\"https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.run.Run?view=azure-ml-py\" target=\"_blank\" rel=\"noopener\">Link to Documentation</a></td></tr></table>"
],
"text/plain": [
"Run(Experiment: image-classifier-webservice,\n",
"Id: ff4a55d5-1916-4131-ac92-8658789c4f8f,\n",
"Id: 39ddcc11-40dc-455d-89a3-2311424df1a5,\n",
"Type: None,\n",
"Status: Completed)"
]
@ -714,7 +713,7 @@
"source": [
"# Create a deployment-specific yaml file from image_classification/environment.yml\n",
"generate_yaml(\n",
" directory=ic_root_path(), \n",
" directory=os.path.join(root_path(), 'classification'), \n",
" ref_filename='environment.yml',\n",
" needed_libraries=['pytorch', 'spacy', 'fastai', 'dataclasses'],\n",
" conda_filename='myenv.yml'\n",
@ -784,9 +783,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Running................................................................................................\n",
"SucceededImage creation operation finished for image image-classif-resnet18-f48:28, operation \"Succeeded\"\n",
"Wall time: 9min 1s\n"
"Running...................................................................................................................\n",
"SucceededImage creation operation finished for image image-classif-resnet18-f48:30, operation \"Succeeded\"\n",
"Wall time: 10min 45s\n"
]
}
],
@ -833,7 +832,7 @@
"\n",
"To set them up properly, we need to indicate the number of CPU cores and the amount of memory we want to allocate to our web service. Optional tags and descriptions are also available for us to identify the instances in AzureML when looking at the `Compute` tab in the Azure Portal.\n",
"\n",
"<i><b>Note:</b> For production workloads, it is better to use [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/) (AKS) instead. We will demonstrate how to do this in the [next notebook](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/22_deployment_on_azure_kubernetes_service.ipynb).<i>"
"<i><b>Note:</b> For production workloads, it is better to use [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/) (AKS) instead. We will demonstrate how to do this in the [next notebook](22_deployment_on_azure_kubernetes_service.ipynb).<i>"
]
},
{
@ -911,7 +910,7 @@
"An alternative way of deploying the service is to deploy from the model directly. In that case, we would need to provide the docker image configuration object (image_config), and our list of models (just one of them here).\n",
"The advantage of `deploy_from_image` over <a href=\"https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py#deploy-from-model-workspace--name--models--image-config--deployment-config-none--deployment-target-none-\">deploy_from_model</a> is that the former allows us\n",
"to re-use the same Docker image in case the deployment of this service fails, or even for other\n",
"types of deployments, as we will see in the next notebook (to be pushlished)."
"types of deployments, as we will see in the next notebook."
]
},
{
@ -1128,7 +1127,7 @@
"\n",
"For production requirements, i.e. when &gt; 100 requests per second are expected, we recommend deploying models to Azure Kubernetes Service (AKS). It is a convenient infrastructure as it manages hosted Kubernetes environments, and makes it easy to deploy and manage containerized applications without container orchestration expertise. It also supports deployments with CPU clusters and deployments with GPU clusters, the latter of which are [more economical and efficient](https://azure.microsoft.com/en-us/blog/gpus-vs-cpus-for-deployment-of-deep-learning-models/) when serving complex models such as deep neural networks, and/or when traffic to the endpoint is high.\n",
"\n",
"We will see an example of this in the [next notebook](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/22_deployment_on_azure_kubernetes_service.ipynb)."
"We will see an example of this in the [next notebook](22_deployment_on_azure_kubernetes_service.ipynb)."
]
},
{
@ -1163,7 +1162,7 @@
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@ -1222,8 +1221,15 @@
"source": [
"## 9. Next steps <a id=\"next-steps\"></a>\n",
"\n",
"In the [next notebook](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/22_deployment_on_azure_kubernetes_service.ipynb), we will leverage the same Docker image, and deploy our model on AKS. In our [third tutorial](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/23_web_service_testing.ipynb), we will then learn how a Flask app, with an interactive user interface, can be used to call our web service."
"In the [next notebook](22_deployment_on_azure_kubernetes_service.ipynb), we will leverage the same Docker image, and deploy our model on AKS. In our [third tutorial](23_web_service_testing.ipynb), we will then learn how a Flask app, with an interactive user interface, can be used to call our web service."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

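For convenience, the changed `generate_yaml` call from the hunks above, collected into one block. The import paths and arguments are exactly those shown in the diff; `root_path()` is assumed to return the repository root, as the old `ic_root_path()` did for the image_classification folder.

```python
# Deployment environment generation after the refactor (call and imports from the diff above).
import os

from utils_cv.common.data import root_path
from utils_cv.common.deployment import generate_yaml

# Create a deployment-specific yaml file from classification/environment.yml
generate_yaml(
    directory=os.path.join(root_path(), "classification"),
    ref_filename="environment.yml",
    needed_libraries=["pytorch", "spacy", "fastai", "dataclasses"],
    conda_filename="myenv.yml",
)
```
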
View file

@ -30,7 +30,7 @@
"\n",
"## 1. Introduction <a id=\"intro\"/>\n",
"\n",
"In many real life scenarios, trained machine learning models need to be deployed to production. As we saw in the [first](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/notebooks/21_deployment_on_azure_container_instances.ipynb) deployment notebook, this can be done by deploying on Azure Container Instances. In this tutorial, we will get familiar with another way of implementing a model into a production environment, this time using [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads) (AKS).\n",
"In many real life scenarios, trained machine learning models need to be deployed to production. As we saw in the [first](21_deployment_on_azure_container_instances.ipynb) deployment notebook, this can be done by deploying on Azure Container Instances. In this tutorial, we will get familiar with another way of implementing a model into a production environment, this time using [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads) (AKS).\n",
"\n",
"AKS manages hosted Kubernetes environments. It makes it easy to deploy and manage containerized applications without container orchestration expertise. It also supports deployments with CPU clusters and deployments with GPU clusters. The latter have been shown to be [more economical and efficient](https://azure.microsoft.com/en-us/blog/gpus-vs-cpus-for-deployment-of-deep-learning-models/) when serving complex models such as deep neural networks, and/or when traffic to the web service is high (&gt; 100 requests/second).\n",
"\n",
@ -47,7 +47,7 @@
"source": [
"## 2. Pre-requisites <a id=\"pre-reqs\"/>\n",
"\n",
"This notebook relies on resources we created in [21_deployment_on_azure_container_instances.ipynb](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/notebooks/21_deployment_on_azure_container_instances.ipynb):\n",
"This notebook relies on resources we created in [21_deployment_on_azure_container_instances.ipynb](21_deployment_on_azure_container_instances.ipynb):\n",
"- Our local conda environment and Azure Machine Learning workspace\n",
"- The Docker image that contains the model and scoring script needed for the web service to work.\n",
"\n",
@ -65,7 +65,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@ -93,7 +93,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@ -110,7 +110,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@ -134,9 +134,22 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 7,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Docker images:\n",
" --> Name: image-classif-resnet18-f48\n",
" --> ID: image-classif-resnet18-f48:30\n",
" --> Tags: {'training set': 'ImageNet', 'architecture': 'CNN ResNet18', 'type': 'Pretrained'}\n",
" --> Creation time: 2019-04-25 18:18:33.724424+00:00\n",
"\n"
]
}
],
"source": [
"print(\"Docker images:\")\n",
"for docker_im in ws.images: \n",
@ -156,7 +169,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
@ -174,7 +187,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
@ -183,7 +196,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 10,
"metadata": {},
"outputs": [
{
@ -192,10 +205,10 @@
"text": [
"Existing model:\n",
" --> Name: im_classif_resnet18\n",
" --> Version: 68\n",
" --> ID: im_classif_resnet18:68 \n",
" --> Creation time: 2019-03-28 22:54:45.853251+00:00\n",
" --> URL: aml://asset/be4f2794e0204f1ca60d4aff671fc7dc\n"
" --> Version: 76\n",
" --> ID: im_classif_resnet18:76 \n",
" --> Creation time: 2019-04-25 18:17:27.688750+00:00\n",
" --> URL: aml://asset/ccf6f55b203a4fc69b0f0e18ec6f72a1\n"
]
}
],
@ -222,7 +235,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 11,
"metadata": {},
"outputs": [
{
@ -230,10 +243,10 @@
"output_type": "stream",
"text": [
"List of compute resources associated with our workspace:\n",
" --> imgclass-aks-gpu: <azureml.core.compute.aks.AksCompute object at 0x0000021D59F88F28>\n",
" --> imgclass-aks-cpu: <azureml.core.compute.aks.AksCompute object at 0x0000021D59F8F240>\n",
" --> cpucluster: <azureml.core.compute.amlcompute.AmlCompute object at 0x0000021D59F895F8>\n",
" --> gpuclusternc12: <azureml.core.compute.amlcompute.AmlCompute object at 0x0000021D59F89A20>\n"
" --> imgclass-aks-gpu: <azureml.core.compute.aks.AksCompute object at 0x000001EF61DEB278>\n",
" --> imgclass-aks-cpu: <azureml.core.compute.aks.AksCompute object at 0x000001EF61DEE0F0>\n",
" --> cpucluster: <azureml.core.compute.amlcompute.AmlCompute object at 0x000001EF61DEE7B8>\n",
" --> gpuclusternc12: <azureml.core.compute.amlcompute.AmlCompute object at 0x000001EF61DEEE10>\n"
]
}
],
@ -270,15 +283,14 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"We retrieved the imgclass-aks-cpu AKS compute target\n",
"Wall time: 940 ms\n"
"We retrieved the imgclass-aks-cpu AKS compute target\n"
]
}
],
@ -320,6 +332,12 @@
"```\n",
"Creating ...\n",
"SucceededProvisioning operation finished, operation \"Succeeded\"\n",
"```\n",
"\n",
"In the case when our cluster already exists, we get the following message:\n",
"\n",
"```\n",
"We retrieved the <aks_cluster_name> AKS compute target\n",
"```"
]
},
@ -355,7 +373,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 13,
"metadata": {},
"outputs": [
{
@ -384,7 +402,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
@ -405,9 +423,20 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 15,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Creating service\n",
"Running.........................\n",
"SucceededAKS service creation operation finished, operation \"Succeeded\"\n",
"The web service is Healthy\n"
]
}
],
"source": [
"if aks_target.provisioning_state== \"Succeeded\": \n",
" aks_service_name ='aks-cpu-image-classif-web-svc'\n",

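The AKS deployment cell whose output now appears above follows the pattern sketched below. Only the service name, image name, compute-target name and the `provisioning_state` check come from the diff; the remaining calls assume the azureml-core 1.0.x API used elsewhere in these notebooks and should be treated as a sketch, not the notebook's exact code.

```python
# Hedged sketch of the AKS web service deployment step (azureml-core 1.0.x assumed).
from azureml.core import Workspace
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice, Webservice

ws = Workspace.from_config()                                    # workspace saved by the ACI notebook (21)
docker_image = ws.images["image-classif-resnet18-f48"]          # image name as printed in the diff above
aks_target = AksCompute(workspace=ws, name="imgclass-aks-cpu")  # compute target listed in the diff above

if aks_target.provisioning_state == "Succeeded":
    aks_service_name = "aks-cpu-image-classif-web-svc"
    aks_config = AksWebservice.deploy_configuration()           # default CPU/memory settings

    aks_service = Webservice.deploy_from_image(
        workspace=ws,
        name=aks_service_name,
        image=docker_image,
        deployment_config=aks_config,
        deployment_target=aks_target,
    )
    aks_service.wait_for_deployment(show_output=True)
    print(f"The web service is {aks_service.state}")
```
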
[Binary file diffs omitted: 35 media image files, each with unchanged size (between 11 KiB and 228 KiB).]

View file

@ -32,7 +32,7 @@ get_ipython().run_line_magic('matplotlib', 'inline')
import sys
sys.path.append("../")
sys.path.append("../../")
import io
import os
import time
@ -43,12 +43,10 @@ from fastai.vision import models, open_image
from ipywebrtc import CameraStream, ImageRecorder
from ipywidgets import HBox, Label, Layout, Widget
from utils_ic.common import data_path
from utils_ic.constants import IMAGENET_IM_SIZE
from utils_ic.datasets import imagenet_labels
from utils_ic.gpu_utils import which_processor
from utils_ic.imagenet_models import model_to_learner
from utils_cv.common.data import data_path
from utils_cv.common.gpu import which_processor
from utils_cv.classification.data import imagenet_labels
from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner
print(f"Fast.ai: {fastai.__version__}")
which_processor()
@ -165,8 +163,8 @@ HBox([w_cam, w_imrecorder, w_label])
# Now, click the **capture button** of the image recorder widget to start classification. Labels show the most probable class along with the confidence predicted by the model for an image snapshot.
#
# <img src="https://cvbp.blob.core.windows.net/public/images/cvbp_webcam.png" width="400" />
# <center>
# <img src="https://cvbp.blob.core.windows.net/public/images/cvbp_webcam.png" style="width: 400px;"/>
# <i>Webcam image classification example</i>
# </center>

View file

@ -2,7 +2,7 @@
# coding: utf-8
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
#
# <i>Licensed under the MIT License.</i>
# # Introduction to Training Image Classification Models
@ -13,9 +13,9 @@
# Ensure edits to libraries are loaded and plotting is shown in the notebook.
get_ipython().run_line_magic("reload_ext", "autoreload")
get_ipython().run_line_magic("autoreload", "2")
get_ipython().run_line_magic("matplotlib", "inline")
get_ipython().run_line_magic('reload_ext', 'autoreload')
get_ipython().run_line_magic('autoreload', '2')
get_ipython().run_line_magic('matplotlib', 'inline')
# Import fastai. For now, we'll import all (`from fastai.vision import *`) so that we can easily use different utilities provided by the fastai library.
@ -24,8 +24,8 @@ get_ipython().run_line_magic("matplotlib", "inline")
import sys
sys.path.append("../../")
sys.path.append("../")
import numpy as np
from pathlib import Path
@ -35,10 +35,12 @@ from fastai.vision import *
from fastai.metrics import accuracy
# local modules
from utils_ic.fastai_utils import TrainMetricsRecorder
from utils_ic.gpu_utils import which_processor
from utils_ic.plot_utils import ResultsWidget, plot_pr_roc_curves
from utils_ic.datasets import Urls, unzip_url
from utils_cv.classification.model import TrainMetricsRecorder
from utils_cv.classification.plot import plot_pr_roc_curves
from utils_cv.classification.results_widget import ResultsWidget
from utils_cv.classification.data import Urls
from utils_cv.common.data import unzip_url
from utils_cv.common.gpu import which_processor
print(f"Fast.ai version = {fastai.__version__}")
which_processor()
@ -48,15 +50,15 @@ which_processor()
# Set some parameters. We'll use the `unzip_url` helper function to download and unzip our data.
# In[4]:
# In[13]:
DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)
EPOCHS = 5
DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)
EPOCHS = 5
LEARNING_RATE = 1e-4
IMAGE_SIZE = 299
BATCH_SIZE = 16
ARCHITECTURE = models.resnet50
IMAGE_SIZE = 299
BATCH_SIZE = 16
ARCHITECTURE = models.resnet50
# ---
@ -64,10 +66,10 @@ ARCHITECTURE = models.resnet50
# ## 1. Prepare Image Classification Dataset
# In this notebook, we'll use a toy dataset called *Fridge Objects*, which consists of 134 images of can, carton, milk bottle and water bottle photos taken with different backgrounds. With our helper function, the data set will be downloaded and unzipped to `image_classification/data`.
#
#
# Let's set that directory to our `path` variable, which we'll use throughout the notebook, and checkout what's inside:
# In[5]:
# In[14]:
path = Path(DATA_PATH)
@ -81,7 +83,7 @@ path.ls()
# - `/can`
# The most common data format for multiclass image classification is to have a folder titled the label with the images inside:
#
#
# ```
# /images
# +-- can (class 1)
@ -94,48 +96,46 @@ path.ls()
# | +-- ...
# +-- ...
# ```
#
#
# and our data is already structured in that format!
# ## 2. Load Images
# To use fastai, we want to create `ImageDataBunch` so that the library can easily use multiple images (mini-batches) during training time. We create an ImageDataBunch by using fastai's [data_block apis](https://docs.fast.ai/data_block.html).
#
# For training and validation, we randomly split the data by 8:2, where 80% of the data is for training and the rest for validation.
#
# For training and validation, we randomly split the data by 8:2, where 80% of the data is for training and the rest for validation.
# In[6]:
# In[15]:
data = (
ImageList.from_folder(path)
.split_by_rand_pct(valid_pct=0.2, seed=10)
.label_from_folder()
.transform(size=IMAGE_SIZE)
.databunch(bs=BATCH_SIZE)
.normalize(imagenet_stats)
)
data = (ImageList.from_folder(path)
.split_by_rand_pct(valid_pct=0.2, seed=10)
.label_from_folder()
.transform(size=IMAGE_SIZE)
.databunch(bs=BATCH_SIZE)
.normalize(imagenet_stats))
# Lets take a look at our data using the databunch we created.
# In[7]:
# In[16]:
data.show_batch(rows=3, figsize=(15, 11))
data.show_batch(rows=3, figsize=(15,11))
# Lets see all available classes:
# In[8]:
# In[17]:
print(f"number of classes: {data.c}")
print(f'number of classes: {data.c}')
print(data.classes)
# We can also see how many images we have in our training and validation set.
# In[9]:
# In[18]:
data.batch_stats
@ -146,25 +146,25 @@ data.batch_stats
# ## 3. Train a Model
# For the model, we use a convolutional neural network (CNN). Specifically, we'll use **ResNet50** architecture. You can find more details about ResNet from [here](https://arxiv.org/abs/1512.03385).
#
#
# When training a model, there are many hyperparameters to select, such as the learning rate, the model architecture, layers to tune, and many more. With fastai, we can use the `create_cnn` function that allows us to specify the model architecture and performance indicator (metric). At this point, we already benefit from transfer learning since we download the parameters used to train on [ImageNet](http://www.image-net.org/).
#
#
# Note, we use a custom callback `TrainMetricsRecorder` to track the accuracy on the training set during training, since fast.ai's default [recorder class](https://docs.fast.ai/basic_train.html#Recorder) only supports tracking accuracy on the validation set.
# In[10]:
# In[19]:
learn = cnn_learner(
data,
ARCHITECTURE,
metrics=[accuracy],
callback_fns=[partial(TrainMetricsRecorder, show_graph=True)],
callback_fns=[partial(TrainMetricsRecorder, show_graph=True)]
)
# Unfreeze our CNN since we're training all the layers.
# In[11]:
# In[20]:
learn.unfreeze()
@ -172,13 +172,13 @@ learn.unfreeze()
# We can call the `fit` function to train the dnn.
# In[12]:
# In[21]:
learn.fit(EPOCHS, LEARNING_RATE)
# In[13]:
# In[22]:
# You can plot loss by using the default callback Recorder.
@ -189,16 +189,16 @@ learn.recorder.plot_losses()
# To evaluate our model, lets take a look at the accuracy on the validation set.
# In[14]:
# In[23]:
_, metric = learn.validate(learn.data.valid_dl, metrics=[accuracy])
print(f"Accuracy on validation set: {100*float(metric):3.2f}")
print(f'Accuracy on validation set: {100*float(metric):3.2f}')
# Now, analyze the classification results by using `ClassificationInterpretation` module.
# In[15]:
# In[24]:
interp = ClassificationInterpretation.from_learner(learn)
@ -207,24 +207,24 @@ pred_scores = to_np(interp.probs)
# To see details of each sample and prediction results, we use our widget helper class `ResultsWidget`. The widget shows each test image along with its ground truth label and model's prediction scores. We can use this tool to see how our model predicts each image and debug the model if needed.
#
#
# <img src="https://cvbp.blob.core.windows.net/public/images/ic_widget.png" width="600"/>
# <center><i>Image Classification Result Widget</i></center>
# In[16]:
# In[25]:
w_results = ResultsWidget(
dataset=learn.data.valid_ds,
y_score=pred_scores,
y_label=[data.classes[x] for x in np.argmax(pred_scores, axis=1)],
y_label=[data.classes[x] for x in np.argmax(pred_scores, axis=1)]
)
display(w_results.show())
# We can plot precision-recall and ROC curves for each class as well. Please note that these plots are not too interesting here, since the dataset is easy and thus the accuracy is close to 100%.
# In[17]:
# In[26]:
# True labels of the validation set. We convert to numpy array for plotting.
@ -234,7 +234,7 @@ plot_pr_roc_curves(true_labels, pred_scores, data.classes)
# Let's take a close look how our model confused some of the samples (if any). The most common way to do that is to use a confusion matrix.
# In[18]:
# In[27]:
interp.plot_confusion_matrix()
@ -242,12 +242,10 @@ interp.plot_confusion_matrix()
# When evaluating our results, we want to see where the model messes up, and whether or not we can do better. So we're interested in seeing images where the model predicted the image incorrectly but with high confidence (images with the highest loss).
# In[19]:
# In[28]:
interp.plot_top_losses(9, figsize=(15, 11))
interp.plot_top_losses(9, figsize=(15,11))
# That's pretty much it! Now you can bring your own dataset and train your model on them easily.
# In[ ]:
# That's pretty much it! Now you can bring your own dataset and train your model on them easily.
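
The evaluation block of this script appears above only as fragments of the diff; below is a sketch of how the relocated plotting and widget helpers fit together. `learn` and `data` are the Learner and databunch built earlier in the script; converting the validation labels via `interp.y_true` is an assumption, since that line is not shown in the hunks.

```python
# Sketch of the evaluation helpers after the refactor (module paths from the diff above).
import numpy as np
from fastai.vision import *            # provides ClassificationInterpretation and to_np, as in the script
from IPython.display import display

from utils_cv.classification.plot import plot_pr_roc_curves
from utils_cv.classification.results_widget import ResultsWidget

interp = ClassificationInterpretation.from_learner(learn)
pred_scores = to_np(interp.probs)

# Per-image inspection widget (constructor arguments as shown in the diff above).
w_results = ResultsWidget(
    dataset=learn.data.valid_ds,
    y_score=pred_scores,
    y_label=[data.classes[x] for x in np.argmax(pred_scores, axis=1)],
)
display(w_results.show())

# Precision-recall and ROC curves per class.
true_labels = to_np(interp.y_true)     # assumed way of getting the validation ground truth
plot_pr_roc_curves(true_labels, pred_scores, data.classes)
```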

View file

@ -1,16 +1,20 @@
#!/usr/bin/env python
# coding: utf-8
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # Building Models for Accuracy VS Speed
#
#
# The goal of this notebook is to understand how to train a model with different parameters to achieve either a highly accurate but slow model, or a model with fast inference speed but with lower accuracy.
#
# As practitioners of computer vision, we want to be able to control what to optimize when building our models. Unless you are building a model for a Kaggle competition, it is unlikely that you can build your model with only its accuracy in mind.
#
# For example, in an IoT setting, where the inferencing device has limited computational capabilities, we need to design our models to have a small memory footprint. In contrast, medical situations often require the highest possible accuracy because the cost of mis-classification could impact the well-being of a patient. In this scenario, the accuracy of the model can not be compromised.
#
#
# As practitioners of computer vision, we want to be able to control what to optimize when building our models. Unless you are building a model for a Kaggle competition, it is unlikely that you can build your model with only its accuracy in mind.
#
# For example, in an IoT setting, where the inferencing device has limited computational capabilities, we need to design our models to have a small memory footprint. In contrast, medical situations often require the highest possible accuracy because the cost of mis-classification could impact the well-being of a patient. In this scenario, the accuracy of the model can not be compromised.
#
# We have conducted various experiments on multiple diverse datasets to find parameters which work well on a wide variety of settings, e.g. high accuracy or fast inference. In this notebook, we provide these parameters, so that your initial models can be trained without any parameter tuning. For most datasets, these parameters are close to optimal, so you won't need to change them much. In the second part of the notebook, we will give guidelines as to which parameters can be fine-tuned, how they impact the model, and which parameters typically do not have a big influence.
#
#
# It is recommended that you first train your model with the default parameters, evaluate the results, and then only as needed, try fine tuning parameters to achieve better results.
# ## Table of Contents:
@ -37,6 +41,7 @@
import fastai
fastai.__version__
@ -45,9 +50,9 @@ fastai.__version__
# In[2]:
get_ipython().run_line_magic('reload_ext', 'autoreload')
get_ipython().run_line_magic('autoreload', '2')
get_ipython().run_line_magic('matplotlib', 'inline')
get_ipython().run_line_magic("reload_ext", "autoreload")
get_ipython().run_line_magic("autoreload", "2")
get_ipython().run_line_magic("matplotlib", "inline")
# Import fastai. For now, we'll import all (import *) so that we can easily use different utilities provided by the fastai library.
@ -56,10 +61,12 @@ get_ipython().run_line_magic('matplotlib', 'inline')
import sys
sys.path.append("../")
sys.path.append("../../")
import os
from pathlib import Path
from utils_ic.datasets import downsize_imagelist, unzip_url, Urls
from utils_cv.classification.data import Urls
from utils_cv.common.data import unzip_url
from fastai.vision import *
from fastai.metrics import accuracy
@ -69,9 +76,9 @@ from fastai.metrics import accuracy
# ### Choosing between two types of models <a name="choosing"></a>
# For most scenarios, computer vision practitioners want to create a high accuracy model, a fast-inference model or a small size model. Set your `MODEL_TYPE` variable to one of the following: `"high_accuracy"`, `"fast_inference"`, or `"small_size"`.
#
#
# For this notebook, we'll be using the FridgeObjects dataset as we did in the [previous notebook](01_training_introduction.ipynb). You can replace the `DATA_PATH` variable with your own data by passing its path.
#
#
# When choosing your batch size, it's worth noting that even mid-level GPUs run out of memory when training deeper resnet models at larger image resolutions. If you get an _out of memory_ error, try reducing the batch size by a factor of 2, and try again.
# In[4]:
@ -87,7 +94,7 @@ DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)
EPOCHS_HEAD = 4
EPOCHS_BODY = 12
LEARNING_RATE = 1e-4
BATCH_SIZE = 16
BATCH_SIZE = 16
# Make sure that only one is set to True
@ -105,28 +112,31 @@ assert MODEL_TYPE in ["high_accuracy", "fast_inference", "small_size"]
if MODEL_TYPE == "high_accuracy":
ARCHITECTURE = models.resnet50
IM_SIZE = 500
IM_SIZE = 500
if MODEL_TYPE == "fast_inference":
ARCHITECTURE = models.resnet18
IM_SIZE = 300
IM_SIZE = 300
if MODEL_TYPE == "small_size":
ARCHITECTURE = models.squeezenet1_1
IM_SIZE = 300
IM_SIZE = 300
# ### Pre-processing <a name="preprocessing"></a>
#
# JPEG decoding represents a bottleneck on systems with powerful GPUs and can slow training significantly, often by a factor of 2-3x, and sometimes by much more. We therefore recommend creating a down-sized copy of the dataset if training otherwise takes too long, or if running training multiple times e.g. to evaluate different parameters. After running the following function, update the `DATA_PATH` variable (to `out_dir`) so that this notebook uses the resized images.
#
# JPEG decoding represents a bottleneck on systems with powerful GPUs and can slow training significantly, often by a factor of 2-3x, and sometimes by much more. We therefore recommend creating a down-sized copy of the dataset if training otherwise takes too long, or if running training multiple times e.g. to evaluate different parameters. After running the following function, update the `DATA_PATH` variable (to `out_dir`) so that this notebook uses the resized images.
#
# ```python
# from utils_cv.classification.data import downsize_imagelist
#
# downsize_imagelist(im_list = ImageList.from_folder(Path(DATA_PATH)),
# out_dir = "downsized_images",
# out_dir = "downsized_images",
# max_dim = IM_SIZE)
# ```
# ### Training <a name="training"></a>
#
#
# We'll now re-apply the same steps we did in the [training introduction](01_training_introduction.ipynb) notebook here.
# Load our data.
@ -134,12 +144,14 @@ if MODEL_TYPE == "small_size":
# In[7]:
data = (ImageList.from_folder(Path(DATA_PATH))
.split_by_rand_pct(valid_pct=0.2, seed=10)
.label_from_folder()
.transform(tfms=get_transforms(), size=IM_SIZE)
.databunch(bs=16)
.normalize(imagenet_stats))
data = (
ImageList.from_folder(Path(DATA_PATH))
.split_by_rand_pct(valid_pct=0.2, seed=10)
.label_from_folder()
.transform(tfms=get_transforms(), size=IM_SIZE)
.databunch(bs=16)
.normalize(imagenet_stats)
)
# Create our learner.
@ -180,22 +192,22 @@ learn.fit_one_cycle(EPOCHS_BODY, LEARNING_RATE)
# - accuracy
# - inference speed
# - parameter export size / memory footprint required
#
#
#
#
# Refer back to the [training introduction](01_training_introduction.ipynb) to learn about other ways to evaluate the model.
# #### Accuracy
# #### Accuracy
# To keep things simple, we just take a look at the final accuracy on the validation set.
# In[12]:
_, metric = learn.validate(learn.data.valid_dl, metrics=[accuracy])
print(f'Accuracy on validation set: {float(metric)}')
print(f"Accuracy on validation set: {float(metric)}")
# #### Inference speed
#
#
# Use the model to inference and time how long it takes.
# In[13]:
@ -207,11 +219,11 @@ im = open_image(f"{(Path(DATA_PATH)/learn.data.classes[0]).ls()[0]}")
# In[14]:
get_ipython().run_cell_magic('timeit', '', 'learn.predict(im)')
get_ipython().run_cell_magic("timeit", "", "learn.predict(im)")
# #### Memory footprint
#
#
# Export our model and inspect the size of the file.
# In[15]:
@ -223,44 +235,44 @@ learn.export(f"{MODEL_TYPE}")
# In[16]:
size_in_mb = os.path.getsize(Path(DATA_PATH)/MODEL_TYPE) / (1024*1024.)
size_in_mb = os.path.getsize(Path(DATA_PATH) / MODEL_TYPE) / (1024 * 1024.0)
print(f"'{MODEL_TYPE}' is {round(size_in_mb, 2)}MB.")
# ---
# ## Fine tuning parameters <a name="finetuning"></a>
#
#
# If you use the parameters provided in the repo along with the defaults that Fastai provides, you can get good results across a wide variety of datasets. However, as is true for most machine learning projects, getting the best possible results for a new dataset requires tuning the parameters that you use. The following section provides guidelines on how to optimize for accuracy, inference speed, or model size on a given dataset. We'll go through the parameters that will make the largest impact on your model as well as the parameters that may not be worth tweaking.
#
# Generally speaking, models for image classification come with a trade-off between training time and model accuracy. The four parameters that most affect this trade-off are the DNN architecture, image resolution, learning rate, and number of epochs. DNN architecture and image resolution will additionally affect the model's inference time and memory footprint. As a rule of thumb, deeper networks with high image resolution will achieve higher accuracy at the cost of large model sizes and low training and inference speeds. Shallow networks with low image resolution will result in models with fast inference speed, fast training speeds and low model sizes at the cost of the model's accuracy.
#
# Generally speaking, models for image classification come with a trade-off between training time and model accuracy. The four parameters that most affect this trade-off are the DNN architecture, image resolution, learning rate, and number of epochs. DNN architecture and image resolution will additionally affect the model's inference time and memory footprint. As a rule of thumb, deeper networks with high image resolution will achieve higher accuracy at the cost of large model sizes and low training and inference speeds. Shallow networks with low image resolution will result in models with fast inference speed, fast training speeds and low model sizes at the cost of the model's accuracy.
# ### DNN Architectures <a name="dnn"></a>
#
# When choosing an architecture, we want to make sure it fits our requirements for accuracy, memory footprint, inference speed and training speed. Some DNNs have hundreds of layers and end up having quite a large memory footprint with millions of parameters to tune, while others are compact and small enough to fit onto memory-limited edge devices.
#
# Let's take a __squeezenet1_1__ model, a __resnet18__ model and a __resnet50__ model and compare them based on our experiments across a diverse set of 6 different datasets. (More about the datasets in the appendix below.)
#
#
# ![architecture_comparisons](media/architecture_comparisons.png)
#
# As you can see from the graphs above, there is a clear trade-off when deciding between the models.
#
#
# As you can see from the graphs above, there is a clear trade-off when deciding between the models.
#
# In terms of accuracy, __resnet50__ out-performs the rest, but it also suffers from having the highest memory footprint, and the longest training and inference times. On the other end of the spectrum, __squeezenet1_1__ performs the worst in terms fo accuracy, but has by far the smallest memory footprint.
#
#
# Generally speaking, given enough data, the deeper the DNN and the higher the image resolution, the higher the accuracy you'll be able to achieve with your model.
#
#
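#
# For instance, switching architectures is a one-line change in fastai v1. The snippet below is a sketch only: it assumes the `data` DataBunch, the `EPOCHS_BODY` and `LEARNING_RATE` constants, and the imports defined earlier in this notebook.
#
# ```python
# from fastai.vision import *  # provides cnn_learner, models, accuracy
#
# learn = cnn_learner(data, models.squeezenet1_1, metrics=accuracy)  # or models.resnet18 / models.resnet50
# learn.fit_one_cycle(EPOCHS_BODY, LEARNING_RATE)
# ```
#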
# ---
#
#
# <details><summary>See the code to generate the graphs</summary>
# <p>
#
#
# #### Code snippet to generate graphs in this cell
#
#
# ```python
# import pandas as pd
# from utils_ic.parameter_sweeper import add_value_labels
# %matplotlib inline
#
#
# df = pd.DataFrame({
# "accuracy": [.9472, .9190, .8251],
# "training_duration": [385.3, 280.5, 272.5],
@ -268,136 +280,136 @@ print(f"'{MODEL_TYPE}' is {round(size_in_mb, 2)}MB.")
# "memory": [99, 45, 4.9],
# "model": ['resnet50', 'resnet18', 'squeezenet1_1'],
# }).set_index("model")
#
#
# ax1, ax2, ax3, ax4 = df.plot.bar(
# rot=90, subplots=True, legend=False, figsize=(8,10)
# )
#
#
# for ax in [ax1, ax2, ax3, ax4]:
# for i in [0, 1, 2]:
# if i==0: ax.get_children()[i].set_color('r')
# if i==1: ax.get_children()[i].set_color('g')
# if i==2: ax.get_children()[i].set_color('b')
#
# ax1.set_title("Accuracy (%)")
# ax2.set_title("Training Duration (seconds)")
# ax3.set_title("Inference Time (seconds)")
# ax4.set_title("Memory Footprint (mb)")
#
#
# ax1.set_ylabel("%")
# ax2.set_ylabel("seconds")
# ax3.set_ylabel("seconds")
# ax4.set_ylabel("mb")
#
#
# ax1.set_ylim(top=df["accuracy"].max() * 1.3)
# ax2.set_ylim(top=df["training_duration"].max() * 1.3)
# ax3.set_ylim(top=df["inference_duration"].max() * 1.3)
# ax4.set_ylim(top=df["memory"].max() * 1.3)
#
#
# add_value_labels(ax1, percentage=True)
# add_value_labels(ax2)
# add_value_labels(ax3)
# add_value_labels(ax4)
# ```
#
#
# </p>
# </details>
#
#
# ### Key Parameters <a name="key-parameters"></a>
# This section examines some of the key parameters when training a deep learning model for image classification. The table below shows default parameters we recommend using.
#
#
# | Parameter | Default Value |
# | --- | --- |
# | Learning Rate | 1e-4 |
# | Epochs | 15 |
# | Batch Size | 16 |
# | Image Size | 300 X 300 |
#
# __Learning rate__
#
# The learning rate, or step size, is used when optimizing your model with gradient descent and tends to be one of the most important parameters to set when training your model. If your learning rate is set too low, training will progress very slowly, since we're only making tiny updates to the weights in your network. However, if your learning rate is too high, it can cause undesirable divergent behavior in your loss function. Generally speaking, a learning rate of 1e-4 was shown to work pretty well for most datasets. If you want to reduce training time (by training for fewer epochs), you can try setting the learning rate to 5e-3, but if you notice a spike in the training or validation loss, you may want to try reducing your learning rate.
#
#
# You can learn more about learning rate in the [appendix below](#appendix-learning-rate).
#
#
# __Epochs__
#
#
# When it comes to choosing the number of epochs, a common question is: _won't too many epochs cause overfitting?_ It turns out that the accuracy on the test set typically does not get worse, even when training for too many epochs. Unless you are working with small datasets, using around 15 epochs tends to work well in most cases.
#
#
#
#
# __Batch Size__
#
#
# Batch size is the number of training samples you use in order to make one update to the model parameters. A batch size of 16 or 32 works well for most cases. The higher the batch size, the faster training will be, but at the expense of increased DNN memory consumption. Depending on your dataset and the GPU you have, you can start with a batch size of 32, and move down to 16 if your GPU doesn't have enough memory. After a certain point, increasing the batch size yields only marginal improvements to training speed, hence we found 16 (or 32) to be a good trade-off between training speed and memory consumption. If you reduce the batch size, you may also have to reduce the learning rate.
#
# __Image size__
#
#
# The default image size is __300 X 300__ pixels. Using a higher image resolution, for example __500 X 500__ or above, can improve the accuracy of the model, but at the cost of longer training and inference times.
#
#
# You can learn more about the impact of image resolution in the [appendix below](#appendix-imsize).
#
#
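#
# As a minimal sketch (assuming the fastai v1 API used elsewhere in this notebook and a `DATA_PATH` folder with one subfolder per class), these defaults wire together as follows:
#
# ```python
# from pathlib import Path
# from fastai.vision import *
#
# EPOCHS, LEARNING_RATE, IM_SIZE, BATCH_SIZE = 15, 1e-4, 300, 16
#
# data = (
#     ImageList.from_folder(Path(DATA_PATH))
#     .split_by_rand_pct(valid_pct=0.2)
#     .label_from_folder()
#     .transform(tfms=get_transforms(), size=IM_SIZE)
#     .databunch(bs=BATCH_SIZE)
#     .normalize(imagenet_stats)
# )
# learn = cnn_learner(data, models.resnet18, metrics=accuracy)
# learn.fit_one_cycle(EPOCHS, LEARNING_RATE)
# ```
#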
# ### Other Parameters <a name="other-parameters"></a>
#
# In this section, we examine some of the other common hyperparameters when dealing with DNNs. The key take-away is that the exact values of these parameters do not have a big impact on the model's performance, training/inference speed, or memory footprint.
#
# | Parameter | Good Default Value |
# | --- | --- |
# | Dropout | 0.5 or (0.5 on the final layer and 0.25 on all previous layers) |
# | Weight Decay | 0.01 |
# | Momentum | 0.9 or (min=0.85 and max=0.95 when using cyclical momentum) |
#
#
# __Dropout__
#
#
# Dropout is a way to discard activations at random when training your model. It is a way to keep the model from over-fitting on the training data. In Fastai, dropout is by default set to 0.5 on the final layer and 0.25 on all previous layers. Unless there is clear evidence of over-fitting, dropout tends to work well at this default, so there is no need to change it much.
#
#
# __Weight decay (L2 regularization)__
#
# Weight decay is a regularization term applied when minimizing the network's loss. We can think of it as a penalty applied to the weights after an update, which helps prevent the weights from growing too big. In Fastai, the default weight decay is 0.01, which is what we should leave it at.
#
# __Momentum__
#
# Momentum is a way to reach convergence faster when training our model. It is a way to incorporate a weighted average of the most recent updates to the current update. Fastai implements cyclical momentum when calling `fit_one_cycle()`, so the momentum will fluctuate over the course of the training cycle, hence we need a min and max value for momentum.
#
#
# When using `fit_one_cycle()`, default values of max=0.95 and min=0.85 have been shown to work well. If using `fit()`, the default value of 0.9 has been shown to work well. These defaults provided by Fastai represent a good trade-off between training speed and the ability of the model to converge to a good solution.
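#
# As a sketch (assuming the `data` DataBunch from earlier), these values can be set explicitly in fastai v1; `ps` is the dropout probability, `wd` the weight decay, and `moms` the (max, min) momentum pair:
#
# ```python
# from fastai.vision import *
#
# learn = cnn_learner(data, models.resnet18, ps=0.5, wd=0.01, metrics=accuracy)
# learn.fit_one_cycle(15, max_lr=1e-4, moms=(0.95, 0.85))
# ```
#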
# ### Testing Parameters <a name="testing-parameters"></a>
# If you want to fine-tune and test different parameters, you can use the ParameterSweeper module to find the best parameters. See the [exploring hyperparameters notebook](./11_exploring_hyperparameters.ipynb) for more information.
# ---
# # Appendix <a name="appendix"></a>
# ### Learning Rate <a name="appendix-learning-rate"></a>
#
# One way to mitigate a low learning rate is to make sure that you're training for many epochs. But this can take a long time.
#
#
# So, to efficiently build a model, we need to make sure that our learning rate is in the correct range so that we can train for as few epochs as possible. To find a good default learning rate, we've tested various learning rates on 6 different datasets, training the full network for 3 or 15 epochs.
#
#
# ![lr_comparisons](media/lr_comparisons.png)
#
#
# <details><summary><em>Understanding the diagram</em></summary>
# <p>
#
# > In the figure on the left, which shows the results of the different learning rates on different datasets at 15 epochs, we can see that a learning rate of 1e-4 does the best overall. But this may not be the case for every dataset. If you look carefully, there is a pretty significant variance between the datasets, and it may be possible that a learning rate of 1e-3 works better than a learning rate of 1e-4 for some datasets. In the figure on the right, both 1e-4 and 1e-3 seem to work well. At 15 epochs, the results of 1e-4 are only slightly better than those of 1e-3. However, at 3 epochs, a learning rate of 1e-3 outperforms the learning rate of 1e-4. This makes sense: since we're limiting the training to only 3 epochs, the model that can update its weights more quickly will perform better. As a result, we may lean towards using higher learning rates (such as 1e-3) if we want to minimize the training time, and lower learning rates (such as 1e-4) if training time is not constrained.
#
#
# </p>
# </details>
#
# In both figures, we can see that learning rates of 1e-3 and 1e-4 tend to work the best across the different datasets and the two settings for epochs. We also observe that training for only 3 epochs gives inferior results compared to 15 epochs. Generally speaking, a learning rate of 5e-3 was shown to work pretty well for most datasets. However, for some datasets, a learning rate of 5e-3 will cause the training to diverge. In those cases, try a lower learning rate, such as 1e-4.
#
#
# Fastai has implemented [one cycle policy with cyclical momentum](https://arxiv.org/abs/1803.09820) which requires a maximum learning rate since the learning rate will shift up and down over its training duration. Instead of calling `fit()`, we simply call `fit_one_cycle()`.
#
#
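#
# One practical way to sanity-check the maximum learning rate on a new dataset is fastai's learning rate finder. This is a sketch only, assuming a `learn` object created as earlier in this notebook:
#
# ```python
# learn.lr_find()
# learn.recorder.plot()  # pick a value on the downward slope of the loss curve
# learn.fit_one_cycle(15, max_lr=1e-4)
# ```
#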
# ---
#
#
# <details><summary>See the code to generate the graphs</summary>
# <p>
#
#
# #### Code snippet to generate graphs in this cell
#
#
# ```python
# import pandas as pd
# import matplotlib.pyplot as plt
# %matplotlib inline
#
#
# df_dataset_comp = pd.DataFrame({
# "fashionTexture": [0.8749, 0.8481, 0.2491, 0.670318, 0.1643],
# "flickrLogos32Subset": [0.9069, 0.9064, 0.2179, 0.7175, 0.1073],
@ -407,19 +419,19 @@ print(f"'{MODEL_TYPE}' is {round(size_in_mb, 2)}MB.")
# "recycle_v3": [0.9527, 0.9581, 0.766, 0.8591, 0.2876],
# "learning_rate": [0.000100, 0.001000, 0.010000, 0.000010, 0.000001]
# }).set_index("learning_rate")
#
#
# df_epoch_comp = pd.DataFrame({
# "3_epochs": [0.823808, 0.846394, 0.393808, 0.455115, 0.229120],
# "15_epochs": [0.920367, 0.918067, 0.471138, 0.764786, 0.301474],
# "learning_rate": [0.000100, 0.001000, 0.010000, 0.000010, 0.000001]
# }).set_index("learning_rate")
#
#
# plt.figure(1)
# ax1 = plt.subplot(121)
# ax2 = plt.subplot(122)
#
#
# vals = ax2.get_yticks()
#
#
# df_dataset_comp.sort_index().plot(kind='bar', rot=0, figsize=(15, 6), ax=ax1)
# vals = ax1.get_yticks()
# ax1.set_yticklabels(['{:,.2%}'.format(x) for x in vals])
@ -427,52 +439,52 @@ print(f"'{MODEL_TYPE}' is {round(size_in_mb, 2)}MB.")
# ax1.set_ylabel("Accuracy (%)")
# ax1.set_title("Accuracy of Learning Rates by Datasets @ 15 Epochs")
# ax1.legend(loc=2)
#
#
# df_epoch_comp.sort_index().plot(kind='bar', rot=0, figsize=(15, 6), ax=ax2)
# ax2.set_yticklabels(['{:,.2%}'.format(x) for x in vals])
# ax2.set_ylim(0,1)
# ax2.set_title("Accuracy of Learning Rates by Epochs")
# ax2.legend(loc=2)
# ```
#
#
# </p>
# </details>
# ### Image Resolution <a name="appendix-imsize"></a>
#
#
# A model's input image resolution tends to affect its accuracy. Usually, convolutional neural networks are able to take advantage of higher resolution images. This is especially true if the object of interest is small in the image.
#
# But how does it impact some of the other aspects of the model?
#
# It turns out that the image size doesn't affect the model's memory footprint, but it has a huge effect on GPU memory. Image size also has a direct impact on training and inference speeds. An increase in image size will result in slower inference speeds.
#
#
# ![imsize_comparisons](media/imsize_comparisons.png)
#
#
# From the results, we can see that an increase in image resolution from __300 X 300__ to __500 X 500__ will increase the performance marginally at the cost of a longer training duration and slower inference speed.
#
#
# ---
#
#
# <details><summary>See the code to generate the graphs</summary>
# <p>
#
#
# #### Code snippet to generate graphs in this cell
#
#
# ```python
# import pandas as pd
# from utils_ic.parameter_sweeper import add_value_labels
# %matplotlib inline
#
#
# df = pd.DataFrame({
# "accuracy": [.9472, .9394, .9190, .9164, .8366, .8251],
# "training_duration": [385.3, 218.8, 280.5, 184.9, 272.5, 182.3],
# "inference_duration": [34.2, 23.2, 27.8, 17.8, 27.6, 17.3],
# "model": ['resnet50 X 499', 'resnet50 X 299', 'resnet18 X 499', 'resnet18 X 299', 'squeezenet1_1 X 499', 'squeezenet1_1 X 299'],
# }).set_index("model"); df
#
#
# ax1, ax2, ax3 = df.plot.bar(
# rot=90, subplots=True, legend=False, figsize=(12, 12)
# )
#
#
# for i in range(len(df)):
# if i < len(df)/3:
# ax1.get_children()[i].set_color('r')
@ -481,69 +493,69 @@ print(f"'{MODEL_TYPE}' is {round(size_in_mb, 2)}MB.")
# if i >= len(df)/3 and i < 2*len(df)/3:
# ax1.get_children()[i].set_color('g')
# ax2.get_children()[i].set_color('g')
#         ax3.get_children()[i].set_color('g')
#     if i >= 2*len(df)/3:
#         ax1.get_children()[i].set_color('b')
#         ax2.get_children()[i].set_color('b')
#         ax3.get_children()[i].set_color('b')
#
# ax1.set_title("Accuracy (%)")
# ax2.set_title("Training Duration (seconds)")
# ax3.set_title("Inference Speed (seconds)")
#
# ax1.set_ylabel("%")
# ax2.set_ylabel("seconds")
# ax3.set_ylabel("seconds")
#
#
# ax1.set_ylim(top=df["accuracy"].max() * 1.2)
# ax2.set_ylim(top=df["training_duration"].max() * 1.2)
# ax3.set_ylim(top=df["inference_duration"].max() * 1.2)
#
#
# add_value_labels(ax1, percentage=True)
# add_value_labels(ax2)
# add_value_labels(ax3)
# ```
#
#
# </p>
# </details>
# ### How we found good default parameters <a name="appendix-good-parameters"></a>
#
# To explore the characteristics of a model, we - the computer vision repo team - have conducted various experiments to explore the impact of different hyperparameters on a model's _accuracy_, _training duration_, _inference speed_, and _memory footprint_. In this notebook, we used the results of our experiments to give us concrete evidence when it comes to understanding which parameters work and which don't.
#
# #### Datasets <a name="datasets"></a>
#
# For our experiments, we relied on a set of six different classification datasets. When selecting these datasets, we wanted to have a variety of image types with different amounts of data and number of classes.
#
# | Dataset Name | Number of Images | Number of Classes |
# | --- | --- | --- |
# | food101Subset | 5000 | 5 |
# | flickrLogos32Subset | 2740 | 33 |
# | fashionTexture | 1716 | 11 |
# | recycle_v3 | 564 | 11 |
# | lettuce | 380 | 2 |
# | fridgeObjects | 134 | 4 |
#
# #### Model Characteristics <a name="model-characteristics"></a>
#
#
# In our experiments, we look at these characteristics to evaluate the impact of various parameters. Here is how we calculated each of the following metrics:
#
#
# - __Accuracy__
#
#     Accuracy is our evaluation metric for the model. It represents the average accuracy over 5 runs for our six different datasets.
#
# - __Training Duration__
#
#
# The training duration is how long it takes to train the model. It represents the average duration over 5 runs for our six different datasets.
#
#
# - __Inference Speed__
#
#     The inference speed is the time it takes the model to run 1000 predictions.
#
# - __Memory Footprint__
#
#
#     The memory footprint is the size of the model parameters saved as a pickled file. This can be obtained by running `learn.export(...)` and examining the size of the exported file.
#
#

View file

@ -1,6 +1,10 @@
#!/usr/bin/env python
# coding: utf-8
# <i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
#
# <i>Licensed under the MIT License.</i>
# # Testing different Hyperparameters and Benchmarking
# In this notebook, we'll cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets.
@ -51,9 +55,10 @@ get_ipython().run_line_magic("matplotlib", "inline")
import sys
sys.path.append("../")
from utils_ic.datasets import unzip_url, Urls
from utils_ic.parameter_sweeper import *
sys.path.append("../../")
from utils_cv.classification.data import Urls
from utils_cv.common.data import unzip_url
from utils_cv.classification.parameter_sweeper import *
# To use the Parameter Sweeper tool for single label classification, we need to make sure that the data is stored such that images are sorted into one subfolder per class. In this notebook, we'll use the Fridge Objects dataset, which is already stored in the correct format. We also want to use the Fridge Objects Watermarked dataset, so that we can check whether the watermarked images perform just as well as the non-watermarked ones.
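#
# As a quick sketch (assuming the imports above; the `Urls` attribute names follow the datasets module elsewhere in this commit), the two datasets can be downloaded and handed to the sweeper as a list of paths:
#
# ```python
# DATA = [
#     unzip_url(Urls.fridge_objects_path, exist_ok=True),
#     unzip_url(Urls.fridge_objects_watermark_path, exist_ok=True),
# ]
# REPS = 3
# ```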
@ -117,7 +122,7 @@ sweeper.update_parameters(
#
# The `run` function returns a multi-index dataframe which we can work with right away.
# In[9]:
# In[8]:
df = sweeper.run(datasets=DATA, reps=REPS)
@ -165,7 +170,7 @@ df
# To see the results, show the df using the `clean_sweeper_df` helper function. This will display all the hyperparameters in a nice, readable way.
# In[10]:
# In[9]:
df = clean_sweeper_df(df)
@ -173,7 +178,7 @@ df = clean_sweeper_df(df)
# Since we've run our benchmarking over 3 repetitions, we may want to just look at the averages across the different __run numbers__.
# In[11]:
# In[10]:
df.mean(level=(1, 2)).T
@ -181,7 +186,7 @@ df.mean(level=(1, 2)).T
# Print the average accuracy over the different runs for each dataset independently.
# In[12]:
# In[11]:
ax = (
@ -194,7 +199,7 @@ ax.set_yticklabels(["{:,.2%}".format(x) for x in ax.get_yticks()])
# Additionally, we may simply want to see which set of hyperparameters performs the best across the different __datasets__. We can do that by averaging the results of the different datasets.
# In[13]:
# In[12]:
df.mean(level=(1)).T
@ -202,7 +207,7 @@ df.mean(level=(1)).T
# To make it easier to see which permutation did the best, we can plot the results using the `plot_sweeper_df` helper function. This plot will help us easily see which parameters offer the highest accuracies.
# In[14]:
# In[13]:
plot_sweeper_df(df.mean(level=(1)), sort_by="accuracy")

View file

@ -50,7 +50,7 @@
# For this notebook to run properly on our machine, the following should already be in place:
#
# * Local machine setup
# * We need to set up the "cvbp" conda environment. [These instructions](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/README.md#getting-started) explain how to do that.
# * We need to set up the "cvbp" conda environment. [These instructions](https://github.com/Microsoft/ComputerVision/blob/master/classification/README.md#getting-started) explain how to do that.
#
#
# * Azure subscription setup
@ -86,14 +86,13 @@ from azureml.core.webservice import AciWebservice, Webservice
from azureml.exceptions import ProjectSystemException, UserErrorException
# Computer Vision repository
sys.path.extend([".", "..", "../.."])
sys.path.extend([".", "../.."])
# This "sys.path.extend()" statement allows us to move up the directory hierarchy
# and access the utils_ic and utils_cv packages
from utils_cv.generate_deployment_env import generate_yaml
from utils_ic.common import data_path, ic_root_path
from utils_ic.constants import IMAGENET_IM_SIZE
from utils_ic.image_conversion import ims2strlist
from utils_ic.imagenet_models import model_to_learner
from utils_cv.common.deployment import generate_yaml
from utils_cv.common.data import data_path, root_path
from utils_cv.common.image import ims2strlist
from utils_cv.classification.model import IMAGENET_IM_SIZE, model_to_learner
# ## 4. Azure workspace <a id="workspace"></a>
@ -124,12 +123,10 @@ print(f"Azure ML SDK Version: {azureml.core.VERSION}")
# Let's define these variables here - These pieces of information can be found on the portal
subscription_id = os.getenv("SUBSCRIPTION_ID", default="<our_subscription_id>")
resource_group = os.getenv("RESOURCE_GROUP", default="<our_resource_group>")
workspace_name = os.getenv(
"WORKSPACE_NAME", default="<our_workspace_name>"
) # (e.g. "myworkspace")
workspace_name = os.getenv("WORKSPACE_NAME", default="<our_workspace_name>")
workspace_region = os.getenv(
"WORKSPACE_REGION", default="<our_workspace_region>"
) # (e.g. "westus2")
)
try:
# Let's load the workspace from the configuration file
@ -172,7 +169,7 @@ print(
# ## 5. Model retrieval and export <a id="model"></a>
#
# For demonstration purposes, we will use here a ResNet18 model, pretrained on ImageNet. The following steps would be the same if we had trained a model locally (cf. [**01_training_introduction.ipynb**](https://github.com/Microsoft/ComputerVisionBestPractices/blob/staging/image_classification/notebooks/01_training_introduction.ipynb) notebook for details).
# For demonstration purposes, we will use here a ResNet18 model, pretrained on ImageNet. The following steps would be the same if we had trained a model locally (cf. [**01_training_introduction.ipynb**](01_training_introduction.ipynb) notebook for details).
#
# Let's first retrieve the model.
@ -377,7 +374,7 @@ get_ipython().run_cell_magic(
# Create a deployment-specific yaml file from image_classification/environment.yml
generate_yaml(
directory=ic_root_path(),
directory=os.path.join(root_path(), "classification"),
ref_filename="environment.yml",
needed_libraries=["pytorch", "spacy", "fastai", "dataclasses"],
conda_filename="myenv.yml",
@ -450,7 +447,7 @@ print(ws.images["image-classif-resnet18-f48"].image_build_log_uri)
#
# To set them up properly, we need to indicate the number of CPU cores and the amount of memory we want to allocate to our web service. Optional tags and descriptions are also available for us to identify the instances in AzureML when looking at the `Compute` tab in the Azure Portal.
#
# <i><b>Note:</b> For production workloads, it is better to use [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/) (AKS) instead. We will demonstrate how to do this in the [next notebook](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/22_deployment_on_azure_kubernetes_service.ipynb).<i>
# <i><b>Note:</b> For production workloads, it is better to use [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/) (AKS) instead. We will demonstrate how to do this in the [next notebook](22_deployment_on_azure_kubernetes_service.ipynb).<i>
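#
# As a rough sketch (the resource values and tags below are illustrative), such a deployment configuration can look like:
#
# ```python
# from azureml.core.webservice import AciWebservice
#
# aci_config = AciWebservice.deploy_configuration(
#     cpu_cores=1,
#     memory_gb=2,
#     tags={"data": "images", "method": "classification"},
#     description="Image classification web service",
# )
# ```
#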
# In[24]:
@ -499,7 +496,7 @@ service = Webservice.deploy_from_image(
# An alternative way of deploying the service is to deploy from the model directly. In that case, we would need to provide the docker image configuration object (image_config), and our list of models (just one of them here).
# The advantage of `deploy_from_image` over <a href="https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.webservice(class)?view=azure-ml-py#deploy-from-model-workspace--name--models--image-config--deployment-config-none--deployment-target-none-">deploy_from_model</a> is that the former allows us
# to re-use the same Docker image in case the deployment of this service fails, or even for other
# types of deployments, as we will see in the next notebook (to be pushlished).
# types of deployments, as we will see in the next notebook.
# In[26]:
@ -616,7 +613,7 @@ print(f"Prediction: {resp.text}")
#
# For production requirements, i.e. when > 100 requests per second are expected, we recommend deploying models to Azure Kubernetes Service (AKS). It is a convenient infrastructure as it manages hosted Kubernetes environments, and makes it easy to deploy and manage containerized applications without container orchestration expertise. It also supports deployments with CPU clusters and deployments with GPU clusters, the latter of which are [more economical and efficient](https://azure.microsoft.com/en-us/blog/gpus-vs-cpus-for-deployment-of-deep-learning-models/) when serving complex models such as deep neural networks, and/or when traffic to the endpoint is high.
#
# We will see an example of this in the [next notebook](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/22_deployment_on_azure_kubernetes_service.ipynb).
# We will see an example of this in the [next notebook](22_deployment_on_azure_kubernetes_service.ipynb).
# ## 8. Clean up <a id="clean"></a>
#
@ -638,7 +635,7 @@ print(f"Prediction: {resp.text}")
#
# Now that we have verified that our web service works well on ACI, we can delete it. This helps reduce [costs](https://azure.microsoft.com/en-us/pricing/details/container-instances/), since the container group we were paying for no longer exists, and allows us to keep our workspace clean.
# In[32]:
# In[ ]:
service.delete()
@ -671,4 +668,6 @@ service.delete()
# ## 9. Next steps <a id="next-steps"></a>
#
# In the [next notebook](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/22_deployment_on_azure_kubernetes_service.ipynb), we will leverage the same Docker image, and deploy our model on AKS. In our [third tutorial](https://github.com/Microsoft/ComputerVision/blob/service_deploy/image_classification/notebooks/deployment/23_web_service_testing.ipynb), we will then learn how a Flask app, with an interactive user interface, can be used to call our web service.
# In the [next notebook](22_deployment_on_azure_kubernetes_service.ipynb), we will leverage the same Docker image, and deploy our model on AKS. In our [third tutorial](23_web_service_testing.ipynb), we will then learn how a Flask app, with an interactive user interface, can be used to call our web service.
# In[ ]:

View file

@ -27,7 +27,7 @@
#
# ## 1. Introduction <a id="intro"/>
#
# In many real life scenarios, trained machine learning models need to be deployed to production. As we saw in the [first](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/notebooks/21_deployment_on_azure_container_instances.ipynb) deployment notebook, this can be done by deploying on Azure Container Instances. In this tutorial, we will get familiar with another way of implementing a model into a production environment, this time using [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads) (AKS).
# In many real life scenarios, trained machine learning models need to be deployed to production. As we saw in the [first](21_deployment_on_azure_container_instances.ipynb) deployment notebook, this can be done by deploying on Azure Container Instances. In this tutorial, we will get familiar with another way of implementing a model into a production environment, this time using [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads) (AKS).
#
# AKS manages hosted Kubernetes environments. It makes it easy to deploy and manage containerized applications without container orchestration expertise. It also supports deployments with CPU clusters and deployments with GPU clusters. The latter have been shown to be [more economical and efficient](https://azure.microsoft.com/en-us/blog/gpus-vs-cpus-for-deployment-of-deep-learning-models/) when serving complex models such as deep neural networks, and/or when traffic to the web service is high (> 100 requests/second).
#
@ -39,7 +39,7 @@
# ## 2. Pre-requisites <a id="pre-reqs"/>
#
# This notebook relies on resources we created in [21_deployment_on_azure_container_instances.ipynb](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/notebooks/21_deployment_on_azure_container_instances.ipynb):
# This notebook relies on resources we created in [21_deployment_on_azure_container_instances.ipynb](21_deployment_on_azure_container_instances.ipynb):
# - Our local conda environment and Azure Machine Learning workspace
# - The Docker image that contains the model and scoring script needed for the web service to work.
#
@ -49,7 +49,7 @@
#
# Now that our prior resources are available, let's first import a few libraries we will need for the deployment on AKS.
# In[1]:
# In[4]:
# For automatic reloading of modified libraries
@ -69,7 +69,7 @@ from azureml.core.webservice import AksWebservice, Webservice
#
# <i><b>Note:</b> The Docker image we will use below is attached to the workspace we used in the prior notebook. It is then important to use the same workspace here. If, for any reason, we need to use a separate workspace here, then the steps followed to create a Docker image containing our image classifier model in the prior notebook, should be reproduced here.</i>
# In[2]:
# In[5]:
ws = Workspace.from_config()
@ -78,7 +78,7 @@ ws = Workspace.from_config()
# Let's check that the workspace is properly loaded
# In[3]:
# In[6]:
# Print the workspace attributes
@ -97,7 +97,7 @@ print(
#
# As for the deployment on Azure Container Instances, we will use Docker containers. The Docker image we created in the prior notebook is very much suitable for our deployment on Azure Kubernetes Service, as it contains the libraries we need and the model we registered. Let's make sure this Docker image is still available (if not, we can just run the cells of section "6. Model deployment on Azure" of the [prior notebook](https://github.com/Microsoft/ComputerVision/blob/staging/image_classification/notebooks/21_deployment_on_azure_container_instances.ipynb)).
# In[4]:
# In[7]:
print("Docker images:")
@ -109,7 +109,7 @@ for docker_im in ws.images:
# As we did not delete it in the prior notebook, our Docker image is still present in our workspace. Let's retrieve it.
# In[5]:
# In[8]:
docker_image = ws.images["image-classif-resnet18-f48"]
@ -119,13 +119,13 @@ docker_image = ws.images["image-classif-resnet18-f48"]
#
# <i><b>Note:</b> We will not use the `registered_model` object anywhere here. We are running the next 2 cells just for verification purposes.</i>
# In[6]:
# In[9]:
registered_model = docker_image.models[0]
# In[7]:
# In[10]:
print(
@ -141,7 +141,7 @@ print(
#
# Let's first check what types of compute resources we have, if any
# In[8]:
# In[11]:
print("List of compute resources associated with our workspace:")
@ -169,7 +169,7 @@ for cp in ws.compute_targets:
#
# Here, we will use a cluster of CPUs. The creation of such a resource typically takes several minutes to complete.
# In[11]:
# In[12]:
# Declare the name of the cluster
@ -208,6 +208,12 @@ else:
# Creating ...
# SucceededProvisioning operation finished, operation "Succeeded"
# ```
#
# In the case where our cluster already exists, we get the following message:
#
# ```
# We retrieved the <aks_cluster_name> AKS compute target
# ```
# #### 5.B.b Alternative: Attachment of an existing AKS cluster
#
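#
# As a rough sketch (the resource group and cluster names below are placeholders), attaching an existing cluster can look like:
#
# ```python
# from azureml.core.compute import AksCompute, ComputeTarget
#
# attach_config = AksCompute.attach_configuration(
#     resource_group="<our_resource_group>", cluster_name="<our_existing_aks_cluster>"
# )
# aks_target = ComputeTarget.attach(ws, "aks-existing", attach_config)
# aks_target.wait_for_completion(show_output=True)
# ```
#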
@ -229,7 +235,7 @@ else:
#
# <img src="media/aks_compute_target_cpu.jpg" width="900">
# In[12]:
# In[13]:
# Check provisioning status
@ -244,7 +250,7 @@ print(
#
# Once our web app is up and running, it is very important to monitor it, and measure the amount of traffic it gets, how long it takes to respond, the type of exceptions that get raised, etc. We will do so through [Application Insights](https://docs.microsoft.com/en-us/azure/azure-monitor/app/app-insights-overview), which is an application performance management service. To enable it on our soon-to-be-deployed web service, we first need to update our AKS configuration file:
# In[13]:
# In[14]:
# Set the AKS web service configuration and add monitoring to it
@ -257,7 +263,7 @@ aks_config = AksWebservice.deploy_configuration(enable_app_insights=True)
#
# <i><b>Note:</b> This deployment takes a few minutes to complete.</i>
# In[14]:
# In[15]:
if aks_target.provisioning_state == "Succeeded":

View file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

View file

View file

@ -1,22 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import os
from utils_ic.image_conversion import im2base64, ims2strlist
def test_ims2strlist(tiny_ic_data_path):
""" Tests extraction of image content and conversion into string"""
im_list = [
os.path.join(tiny_ic_data_path, "can", "1.jpg"),
os.path.join(tiny_ic_data_path, "carton", "34.jpg"),
]
im_string_list = ims2strlist(im_list)
assert isinstance(im_string_list, list)
def test_im2base64(tiny_ic_data_path):
""" Tests extraction of image content and conversion into bytes"""
im_name = os.path.join(tiny_ic_data_path, "can", "1.jpg")
im_content = im2base64(im_name)
assert isinstance(im_content, bytes)

View file

@ -1,17 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from fastai.vision import models
from utils_ic.imagenet_models import model_to_learner
def test_load_learner():
# Test if the function loads an ImageNet model (ResNet) trainer
learn = model_to_learner(models.resnet34(pretrained=True))
assert len(learn.data.classes) == 1000 # Check Image net classes
assert isinstance(learn.model, models.ResNet)
# Test with SqueezeNet
learn = model_to_learner(models.squeezenet1_0())
assert len(learn.data.classes) == 1000
assert isinstance(learn.model, models.SqueezeNet)

View file

@ -1,265 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import os
from ipywidgets import widgets, Layout, IntSlider
import pandas as pd
from utils_ic.common import im_width, im_height, get_files_in_directory
class AnnotationWidget(object):
IM_WIDTH = 500 # pixels
def __init__(
self,
labels: list,
im_dir: str,
anno_path: str,
im_filenames: list = None,
):
"""Widget class to annotate images.
Args:
labels: List of label names, e.g. ["bird", "car", "plane"].
im_dir: Directory containing the images to be annotated.
anno_path: Path where to write annotations to, and (if it exists) load annotations from.
im_filenames: List of image filenames. If set to None, then will auto-detect all images in the provided image directory.
"""
self.labels = labels
self.im_dir = im_dir
self.anno_path = anno_path
self.im_filenames = im_filenames
# Init
self.vis_image_index = 0
self.label_to_id = {s: i for i, s in enumerate(self.labels)}
if not im_filenames:
self.im_filenames = [os.path.basename(s) for s in get_files_in_directory(
im_dir,
suffixes=(
".jpg",
".jpeg",
".tif",
".tiff",
".gif",
".giff",
".png",
".bmp",
),
)]
assert len(self.im_filenames) > 0, f"Not a single image specified or found in directory {im_dir}."
# Initialize empty annotations and load previous annotations if file exist
self.annos = pd.DataFrame()
for im_filename in self.im_filenames:
if im_filename not in self.annos:
self.annos[im_filename] = pd.Series(
{"exclude": False, "labels": []}
)
if os.path.exists(self.anno_path):
print(f"Loading existing annotation from {self.anno_path}.")
with open(self.anno_path,'r') as f:
for line in f.readlines()[1:]:
vec = line.strip().split("\t")
im_filename = vec[0]
self.annos[im_filename].exclude = vec[1]=="True"
if len(vec)>2:
self.annos[im_filename].labels = vec[2].split(',')
# Create UI and "start" widget
self._create_ui()
def show(self):
return self.ui
def update_ui(self):
im_filename = self.im_filenames[self.vis_image_index]
im_path = os.path.join(self.im_dir, im_filename)
# Update the image and info
self.w_img.value = open(im_path, "rb").read()
self.w_filename.value = im_filename
self.w_path.value = self.im_dir
# Fix the width of the image widget and adjust the height
self.w_img.layout.height = (
f"{int(self.IM_WIDTH * (im_height(im_path)/im_width(im_path)))}px"
)
# Update annotations
self.exclude_widget.value = self.annos[im_filename].exclude
for w in self.label_widgets:
w.value = False
for label in self.annos[im_filename].labels:
label_id = self.label_to_id[label]
self.label_widgets[label_id].value = True
def _create_ui(self):
"""Create and initialize widgets"""
# ------------
# Callbacks + logic
# ------------
def skip_image():
"""Return true if image should be skipped, and false otherwise."""
# See if UI-checkbox to skip images is checked
if not self.w_skip_annotated.value:
return False
# Stop skipping if image index is out of bounds
if (
self.vis_image_index <= 0
or self.vis_image_index >= len(self.im_filenames) - 1
):
return False
# Skip if image has annotation
im_filename = self.im_filenames[self.vis_image_index]
labels = self.annos[im_filename].labels
exclude = self.annos[im_filename].exclude
if exclude or len(labels) > 0:
return True
return False
def button_pressed(obj):
"""Next / previous image button callback."""
# Find next/previous image. Variable step is -1 or +1 depending on which button was pressed.
step = int(obj.value)
self.vis_image_index += step
while skip_image():
self.vis_image_index += step
self.vis_image_index = min(
max(self.vis_image_index, 0), len(self.im_filenames) - 1
)
self.w_image_slider.value = self.vis_image_index
self.update_ui()
def slider_changed(obj):
"""Image slider callback.
Need to wrap in try statement to avoid errors when slider value is not a number.
"""
try:
self.vis_image_index = int(obj["new"]["value"])
self.update_ui()
except Exception:
pass
def anno_changed(obj):
"""Label checkbox callback.
Update annotation file and write to disk
"""
# Test if call is coming from the user having clicked on a checkbox to change its state,
# rather than a change of state when e.g. the checkbox value was updated programmatically. This is a bit
# of a hack, but necessary since widgets.Checkbox() does not support an on_click() callback or similar.
if "new" in obj and isinstance(obj["new"], dict) and len(obj["new"]) == 0:
# If single-label annotation then unset all checkboxes except the one which the user just clicked
if not self.w_multi_class.value:
for w in self.label_widgets:
if w.description != obj["owner"].description:
w.value = False
# Update annotation object
im_filename = self.im_filenames[self.vis_image_index]
self.annos[im_filename].labels = [
w.description for w in self.label_widgets if w.value
]
self.annos[im_filename].exclude = self.exclude_widget.value
# Write to disk as tab-separated file.
with open(self.anno_path,'w') as f:
f.write("{}\t{}\t{}\n".format("IM_FILENAME", "EXCLUDE", "LABELS"))
for k,v in self.annos.items():
if v.labels != [] or v.exclude:
f.write("{}\t{}\t{}\n".format(k, v.exclude, ",".join(v.labels)))
# ------------
# UI - image + controls (left side)
# ------------
w_next_image_button = widgets.Button(description="Next")
w_next_image_button.value = "1"
w_next_image_button.layout = Layout(width="80px")
w_next_image_button.on_click(button_pressed)
w_previous_image_button = widgets.Button(description="Previous")
w_previous_image_button.value = "-1"
w_previous_image_button.layout = Layout(width="80px")
w_previous_image_button.on_click(button_pressed)
self.w_filename = widgets.Text(
value="", description="Name:", layout=Layout(width="200px")
)
self.w_path = widgets.Text(
value="", description="Path:", layout=Layout(width="200px")
)
self.w_image_slider = IntSlider(
min=0,
max=len(self.im_filenames) - 1,
step=1,
value=self.vis_image_index,
continuous_update=False,
)
self.w_image_slider.observe(slider_changed)
self.w_img = widgets.Image()
self.w_img.layout.width = f"{self.IM_WIDTH}px"
w_header = widgets.HBox(
children=[
w_previous_image_button,
w_next_image_button,
self.w_image_slider,
self.w_filename,
self.w_path,
]
)
# ------------
# UI - info (right side)
# ------------
# Options widgets
self.w_skip_annotated = widgets.Checkbox(
value=False, description="Skip annotated images."
)
self.w_multi_class = widgets.Checkbox(
value=False, description="Allow multi-class labeling"
)
# Label checkboxes widgets
self.exclude_widget = widgets.Checkbox(
value=False, description="EXCLUDE IMAGE"
)
self.exclude_widget.observe(anno_changed)
self.label_widgets = [
widgets.Checkbox(value=False, description=label)
for label in self.labels
]
for label_widget in self.label_widgets:
label_widget.observe(anno_changed)
# Combine UIs into tab widget
w_info = widgets.VBox(
children=[
widgets.HTML(value="Options:"),
self.w_skip_annotated,
self.w_multi_class,
widgets.HTML(value="Annotations:"),
self.exclude_widget,
*self.label_widgets,
]
)
w_info.layout.padding = "20px"
self.ui = widgets.Tab(
children=[
widgets.VBox(
children=[
w_header,
widgets.HBox(children=[self.w_img, w_info]),
]
)
]
)
self.ui.set_title(0, "Annotator")
# Fill UI with content
self.update_ui()

View file

@ -1,73 +0,0 @@
import os
import numpy as np
from pathlib import Path
from PIL import Image
from typing import Union, Tuple, List
def ic_root_path() -> Path:
"""Get the image classification root path"""
return os.path.realpath(os.path.join(os.path.dirname(__file__), os.pardir))
def data_path() -> Path:
"""Get the data directory path"""
return os.path.realpath(
os.path.join(os.path.dirname(__file__), os.pardir, "data")
)
def im_width(input: Union[str, np.array]) -> int:
"""Returns the width of an image.
Args:
input: Image path or image as numpy array.
Return:
Image width.
"""
return im_width_height(input)[0]
def im_height(input: Union[str, np.array]) -> int:
"""Returns the height of an image.
Args:
input: Image path or image as numpy array.
Return:
Image height.
"""
return im_width_height(input)[1]
def im_width_height(input: Union[str, np.array]) -> Tuple[int, int]:
"""Returns the width and height of an image.
Args:
input: Image path or image as numpy array.
Return:
Tuple of ints (width,height).
"""
if isinstance(input, str) or isinstance(input, Path):
width, height = Image.open(
input
).size # this is fast since it does not load the full image
else:
width, height = (input.shape[1], input.shape[0])
return width, height
def get_files_in_directory(
directory: str, suffixes: List[str] = None
) -> List[str]:
"""Returns all filenames in a directory which optionally match one of multiple suffixes.
Args:
directory: directory to scan for files.
suffixes: only keep the filenames which ends with one of the suffixes (e.g. suffixes = [".jpg", ".png", ".gif"]).
Return:
List of filenames
"""
if not os.path.exists(directory):
raise Exception(f"Directory '{directory}' does not exist.")
filenames = [str(p) for p in Path(directory).iterdir() if p.is_file()]
if suffixes and suffixes != "":
filenames = [
s for s in filenames if s.lower().endswith(tuple(suffixes))
]
return filenames

View file

@ -1,2 +0,0 @@
# Default ImageNet models image size
IMAGENET_IM_SIZE = 224

View file

@ -1,159 +0,0 @@
import os
import requests
from .common import data_path
from pathlib import Path
from typing import List, Union
from urllib.parse import urljoin, urlparse
from fastai.vision import ImageList
from PIL import Image
from tqdm import tqdm
from zipfile import ZipFile
Url = str
class Urls:
# for now hardcoding base url into Urls class
base = "https://cvbp.blob.core.windows.net/public/datasets/image_classification/"
# Same link Keras is using
imagenet_labels_json = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
# datasets
fridge_objects_path = urljoin(base, "fridgeObjects.zip")
fridge_objects_watermark_path = urljoin(base, "fridgeObjectsWatermark.zip")
fridge_objects_tiny_path = urljoin(base, "fridgeObjectsTiny.zip")
fridge_objects_watermark_tiny_path = urljoin(
base, "fridgeObjectsWatermarkTiny.zip"
)
multilabel_fridge_objects_path = urljoin(
base, "multilabelFridgeObjects.zip"
)
food_101_subset_path = urljoin(base, "food101Subset.zip")
fashion_texture_path = urljoin(base, "fashionTexture.zip")
flickr_logos_32_subset_path = urljoin(base, "flickrLogos32Subset.zip")
lettuce_path = urljoin(base, "lettuce.zip")
recycle_path = urljoin(base, "recycle_v3.zip")
@classmethod
def all(cls) -> List[Url]:
return [v for k, v in cls.__dict__.items() if k.endswith("_path")]
def imagenet_labels() -> list:
"""List of ImageNet labels with the original index.
Returns:
list: ImageNet labels
"""
labels = requests.get(Urls.imagenet_labels_json).json()
return [labels[str(k)][1] for k in range(len(labels))]
def _get_file_name(url: str) -> str:
""" Get a file name based on url. """
return urlparse(url).path.split("/")[-1]
def unzip_url(
url: str,
fpath: Union[Path, str] = data_path(),
dest: Union[Path, str] = data_path(),
exist_ok: bool = False,
) -> Path:
""" Download file from URL to {fpath} and unzip to {dest}.
{fpath} and {dest} must be directories
Args:
url (str): url to download from
fpath (Union[Path, str]): The location to save the url zip file to
dest (Union[Path, str]): The destination to unzip {fpath}
exist_ok (bool): if exist_ok, then skip if exists, otherwise throw error
Raises:
FileExistsError: if file exists
Returns:
Path of {dest}
"""
def _raise_file_exists_error(path: Union[Path, str]) -> None:
if not exist_ok:
raise FileExistsError(path, "Use param {{exist_ok}} to ignore.")
os.makedirs(dest, exist_ok = True)
os.makedirs(fpath, exist_ok = True)
fname = _get_file_name(url)
fname_without_extension = fname.split(".")[0]
zip_file = Path(os.path.join(fpath, fname))
unzipped_dir = Path(os.path.join(fpath, fname_without_extension))
# download zipfile if zipfile not exists
if zip_file.is_file():
_raise_file_exists_error(zip_file)
else:
r = requests.get(url)
f = open(zip_file, "wb")
f.write(r.content)
f.close()
# unzip downloaded zipfile if dir not exists
if unzipped_dir.is_dir():
_raise_file_exists_error(unzipped_dir)
else:
z = ZipFile(zip_file, "r")
z.extractall(fpath)
z.close()
return os.path.realpath(os.path.join(fpath, fname_without_extension))
def unzip_urls(
urls: List[Url], dest: Union[Path, str] = data_path()
) -> List[Path]:
""" Download and unzip all datasets in Urls to dest """
# make dir if not exist
if not Path(dest).is_dir():
os.makedirs(dest)
# download all data urls
paths = list()
for url in urls:
paths.append(unzip_url(url, dest, exist_ok=True))
return paths
def downsize_imagelist(im_list: ImageList, out_dir: Union[Path, str], dim: int = 500):
""" Aspect-ratio preserving down-sizing of each image in the ImageList {im_list} so that min(width,height) is at most {dim} pixels.
Writes each image to the directory {out_dir} while preserving the original subdirectory structure.
Args:
im_list: Fastai ImageList object.
out_dir: Output root location.
dim: maximum image dimension (width/height) after resize
"""
assert len(im_list.items)>0, "Input ImageList does not contain any images."
# Find parent directory which all images have in common
im_paths = [str(s) for s in im_list.items]
src_root_dir = os.path.commonprefix(im_paths)
# Loop over all images
for src_path in tqdm(im_list.items):
# Load and optionally down-size image
im = Image.open(src_path).convert('RGB')
scale = float(dim) / min(im.size)
if scale < 1.0:
new_size = [int(round(f*scale)) for f in im.size]
im = im.resize(new_size, resample=Image.LANCZOS)
# Write image
src_rel_path = os.path.relpath(src_path, src_root_dir)
dst_path = os.path.join(out_dir, src_rel_path)
assert os.path.normpath(src_rel_path) != os.path.normpath(dst_path), f"Image source and destination path should not be the same: {src_rel_path}"
os.makedirs(os.path.dirname(dst_path), exist_ok = True)
im.save(dst_path)

View file

@ -1,45 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# python regular libraries
from pathlib import Path
from typing import Union
from base64 import b64encode
def im2base64(im_path: Union[Path, str]) -> bytes:
"""
Args:
im_path (string): Path to the image
Returns: im_bytes
"""
with open(im_path, "rb") as image:
# Extract image bytes
im_content = image.read()
# Convert bytes into a string
im_bytes = b64encode(im_content)
return im_bytes
def ims2strlist(im_path_list: list) -> list:
"""
Args:
im_path_list (list of strings): List of image paths
Returns: im_string_list: List containing based64-encoded images
decoded into strings
"""
im_string_list = []
for im_path in im_path_list:
im_string_list.append(im2base64(im_path).decode("utf-8"))
return im_string_list

View file

@ -1,27 +0,0 @@
from fastai.vision import *
from utils_ic.datasets import imagenet_labels
from utils_ic.constants import IMAGENET_IM_SIZE
def model_to_learner(
model: nn.Module, im_size: int = IMAGENET_IM_SIZE
) -> Learner:
"""Create Learner based on pyTorch ImageNet model.
Args:
model (nn.Module): Base ImageNet model. E.g. models.resnet18()
im_size (int): Image size the model will expect to have.
Returns:
Learner: a model trainer for prediction
"""
# Currently, the fast.ai API requires passing a DataBunch to create a model trainer (learner).
# To use the learner for prediction tasks without retraining, we have to pass an empty DataBunch.
# single_from_classes is deprecated, but this is the easiest workaround for now.
# Create ImageNet data spec as an empty DataBunch.
empty_data = ImageDataBunch.single_from_classes(
"", classes=imagenet_labels(), size=im_size
).normalize(imagenet_stats)
return Learner(empty_data, model)

View file

@ -1,445 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
"""
Helper module for drawing widgets and plots
"""
import bqplot
import bqplot.pyplot as bqpyplot
import fastai.data_block
from ipywidgets import widgets, Layout, IntSlider
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import (
precision_recall_curve,
average_precision_score,
roc_curve,
auc,
)
from sklearn.preprocessing import label_binarize
def plot_pr_roc_curves(
y_true: np.ndarray,
y_score: np.ndarray,
classes: iter,
show: bool = True,
figsize: tuple = (12, 6),
):
"""Plot precision-recall and ROC curves .
Currently, plots precision-recall and ROC curves.
Args:
y_true (np.ndarray): True class indices.
y_score (np.ndarray): Estimated probabilities.
classes (iterable): Class labels.
show (bool): Show plot. Use False if want to manually show the plot later.
figsize (tuple): Figure size (w, h).
"""
plt.subplots(2, 2, figsize=figsize)
plt.subplot(1, 2, 1)
plot_precision_recall_curve(y_true, y_score, classes, False)
plt.subplot(1, 2, 2)
plot_roc_curve(y_true, y_score, classes, False)
if show:
plt.show()
def plot_roc_curve(
y_true: np.ndarray, y_score: np.ndarray, classes: iter, show: bool = True
):
"""Plot receiver operating characteristic (ROC) curves and ROC areas.
If the given class labels are multi-label, it binarizes the classes and plots each ROC along with an averaged ROC.
For the averaged ROC, micro-average is used.
See details from: https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
Args:
y_true (np.ndarray): True class indices.
y_score (np.ndarray): Estimated probabilities.
classes (iterable): Class labels.
show (bool): Show plot. Use False if want to manually show the plot later.
"""
assert (
len(classes) == y_score.shape[1]
if len(y_score.shape) == 2
else len(classes) == 2
)
# Set random colors seed for reproducibility.
np.random.seed(123)
# Reference line
plt.plot([0, 1], [0, 1], color="gray", lw=1, linestyle="--")
# Plot ROC curve
if len(classes) == 2:
# If y_score is soft-max output from a binary-class problem, we use the second node's output only.
if len(y_score.shape) == 2:
y_score = y_score[:, 1]
_plot_roc_curve(y_true, y_score)
else:
y_true = label_binarize(y_true, classes=list(range(len(classes))))
_plot_multi_roc_curve(y_true, y_score, classes)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curves")
plt.legend(loc="lower left")
if show:
plt.show()
def _plot_multi_roc_curve(y_true, y_score, classes):
# Plot ROC for each class
if len(classes) > 2:
for i in range(len(classes)):
_plot_roc_curve(y_true[:, i], y_score[:, i], classes[i])
# Compute micro-average ROC curve and ROC area
_plot_roc_curve(y_true.ravel(), y_score.ravel(), "avg")
def _plot_roc_curve(y_true, y_score, label=None):
fpr, tpr, _ = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)
if label == "avg":
lw = 2
prefix = "Averaged ROC"
else:
lw = 1
prefix = "ROC" if label is None else f"ROC for {label}"
plt.plot(
fpr,
tpr,
color=_generate_color(),
label=f"{prefix} (area = {roc_auc:0.2f})",
lw=lw,
)
def plot_precision_recall_curve(
y_true: np.ndarray, y_score: np.ndarray, classes: iter, show: bool = True
):
"""Plot precision-recall (PR) curves.
For multiclass problems, the class labels are binarized and a per-class PR curve is plotted along with a micro-averaged PR curve.
See details at: https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
Args:
y_true (np.ndarray): True class indices.
y_score (np.ndarray): Estimated probabilities.
classes (iterable): Class labels.
show (bool): Show the plot. Use False to show the plot manually later.
"""
assert (
len(classes) == y_score.shape[1]
if len(y_score.shape) == 2
else len(classes) == 2
)
# Set random colors seed for reproducibility.
np.random.seed(123)
# Plot PR curve
if len(classes) == 2:
# If y_score is soft-max output from a binary-class problem, we use the second node's output only.
if len(y_score.shape) == 2:
y_score = y_score[:, 1]
_plot_precision_recall_curve(
y_true, y_score, average_precision_score(y_true, y_score)
)
else:
y_true = label_binarize(y_true, classes=list(range(len(classes))))
_plot_multi_precision_recall_curve(y_true, y_score, classes)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall Curves")
plt.legend(loc="lower left")
if show:
plt.show()
def _plot_multi_precision_recall_curve(y_true, y_score, classes):
# Plot PR for each class
if len(classes) > 2:
for i in range(len(classes)):
_plot_precision_recall_curve(
y_true[:, i],
y_score[:, i],
average_precision_score(y_true[:, i], y_score[:, i]),
classes[i],
)
# Plot averaged PR. A micro-average is used
_plot_precision_recall_curve(
y_true.ravel(),
y_score.ravel(),
average_precision_score(y_true, y_score, average="micro"),
"avg",
)
def _plot_precision_recall_curve(y_true, y_score, ap, label=None):
precision, recall, _ = precision_recall_curve(y_true, y_score)
if label == "avg":
lw = 2
prefix = "Averaged precision-recall"
else:
lw = 1
prefix = (
"Precision-recall"
if label is None
else f"Precision-recall for {label}"
)
plt.plot(
recall,
precision,
color=_generate_color(),
label=f"{prefix} (area = {ap:0.2f})",
lw=lw,
)
def _generate_color():
return np.random.rand(3)
def _list_sort(list1d, reverse=False, comparison_fn=lambda x: x):
"""Sorts list1f and returns (sorted list, list of indices)"""
indices = list(range(len(list1d)))
tmp = sorted(zip(list1d, indices), key=comparison_fn, reverse=reverse)
return list(map(list, list(zip(*tmp))))
class ResultsWidget(object):
IM_WIDTH = 500 # pixels
def __init__(
self,
dataset: fastai.data_block.LabelList,
y_score: np.ndarray,
y_label: iter,
):
"""Helper class to draw and update Image classification results widgets.
Args:
dataset (LabelList): Data used for prediction, containing ImageList x and CategoryList y.
y_score (np.ndarray): Predicted scores.
y_label (iterable): Predicted labels. Note, not a true label.
"""
assert len(y_score) == len(y_label) == len(dataset)
self.dataset = dataset
self.pred_scores = y_score
self.pred_labels = y_label
# Init
self.vis_image_index = 0
self.labels = dataset.classes
self.label_to_id = {s: i for i, s in enumerate(self.labels)}
self._create_ui()
def show(self):
return self.ui
def update(self):
scores = self.pred_scores[self.vis_image_index]
im = self.dataset.x[self.vis_image_index] # fastai Image object
_, sort_order = _list_sort(scores, reverse=True)
pred_labels_str = ""
for i in sort_order:
pred_labels_str += f"{self.labels[i]} ({scores[i]:3.2f})\n"
self.w_pred_labels.value = str(pred_labels_str)
self.w_image_header.value = f"Image index: {self.vis_image_index}"
self.w_img.value = im._repr_png_()
# Fix the width of the image widget and adjust the height
self.w_img.layout.height = (
f"{int(self.IM_WIDTH * (im.size[0]/im.size[1]))}px"
)
self.w_gt_label.value = str(self.dataset.y[self.vis_image_index])
self.w_filename.value = str(
self.dataset.items[self.vis_image_index].name
)
self.w_path.value = str(
self.dataset.items[self.vis_image_index].parent
)
bqpyplot.clear()
bqpyplot.bar(
self.labels,
scores,
align="center",
alpha=1.0,
color=np.abs(scores),
scales={"color": bqplot.ColorScale(scheme="Blues", min=0)},
)
def _create_ui(self):
"""Create and initialize widgets"""
# ------------
# Callbacks + logic
# ------------
def image_passes_filters(image_index):
"""Return if image should be shown."""
actual_label = str(self.dataset.y[image_index])
bo_pred_correct = actual_label == self.pred_labels[image_index]
if (bo_pred_correct and self.w_filter_correct.value) or (
not bo_pred_correct and self.w_filter_wrong.value
):
return True
return False
def button_pressed(obj):
"""Next / previous image button callback."""
step = int(obj.value)
self.vis_image_index += step
self.vis_image_index = min(
max(0, self.vis_image_index), int(len(self.pred_labels)) - 1
)
while not image_passes_filters(self.vis_image_index):
self.vis_image_index += step
if (
self.vis_image_index <= 0
or self.vis_image_index >= int(len(self.pred_labels)) - 1
):
break
self.vis_image_index = min(
max(0, self.vis_image_index), int(len(self.pred_labels)) - 1
)
self.w_image_slider.value = self.vis_image_index
self.update()
def slider_changed(obj):
"""Image slider callback.
Need to wrap in try statement to avoid errors when slider value is not a number.
"""
try:
self.vis_image_index = int(obj["new"]["value"])
self.update()
except Exception:
pass
# ------------
# UI - image + controls (left side)
# ------------
w_next_image_button = widgets.Button(description="Next")
w_next_image_button.value = "1"
w_next_image_button.layout = Layout(width="80px")
w_next_image_button.on_click(button_pressed)
w_previous_image_button = widgets.Button(description="Previous")
w_previous_image_button.value = "-1"
w_previous_image_button.layout = Layout(width="80px")
w_previous_image_button.on_click(button_pressed)
self.w_filename = widgets.Text(
value="", description="Name:", layout=Layout(width="200px")
)
self.w_path = widgets.Text(
value="", description="Path:", layout=Layout(width="200px")
)
self.w_image_slider = IntSlider(
min=0,
max=len(self.pred_labels) - 1,
step=1,
value=self.vis_image_index,
continuous_update=False,
)
self.w_image_slider.observe(slider_changed)
self.w_image_header = widgets.Text("", layout=Layout(width="130px"))
self.w_img = widgets.Image()
self.w_img.layout.width = f"{self.IM_WIDTH}px"
w_header = widgets.HBox(
children=[
w_previous_image_button,
w_next_image_button,
self.w_image_slider,
self.w_filename,
self.w_path,
]
)
# ------------
# UI - info (right side)
# ------------
w_filter_header = widgets.HTML(
value="Filters (use Image +1/-1 buttons for navigation):"
)
self.w_filter_correct = widgets.Checkbox(
value=True, description="Correct classifications"
)
self.w_filter_wrong = widgets.Checkbox(
value=True, description="Incorrect classifications"
)
w_gt_header = widgets.HTML(value="Ground truth:")
self.w_gt_label = widgets.Text(value="")
self.w_gt_label.layout.width = "360px"
w_pred_header = widgets.HTML(value="Predictions:")
self.w_pred_labels = widgets.Textarea(value="")
self.w_pred_labels.layout.height = "200px"
self.w_pred_labels.layout.width = "360px"
w_scores_header = widgets.HTML(value="Classification scores:")
self.w_scores = bqpyplot.figure()
self.w_scores.layout.height = "250px"
self.w_scores.layout.width = "370px"
self.w_scores.fig_margin = {
"top": 5,
"bottom": 80,
"left": 30,
"right": 5,
}
# Combine UIs into tab widget
w_info = widgets.VBox(
children=[
w_filter_header,
self.w_filter_correct,
self.w_filter_wrong,
w_gt_header,
self.w_gt_label,
w_pred_header,
self.w_pred_labels,
w_scores_header,
self.w_scores,
]
)
w_info.layout.padding = "20px"
self.ui = widgets.Tab(
children=[
widgets.VBox(
children=[
w_header,
widgets.HBox(children=[self.w_img, w_info]),
]
)
]
)
self.ui.set_title(0, "Results viewer")
# Fill UI with content
self.update()
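
For reference, a sketch of how this widget can be wired up in a notebook, assuming a trained fastai Learner named learn (the utils_cv.classification.widget module path is an assumption based on this refactor):

import numpy as np
from IPython.display import display
from fastai.basic_data import DatasetType
from utils_cv.classification.widget import ResultsWidget  # assumed new module path

# Scores for the validation set and the corresponding predicted label strings.
pred_scores, _ = learn.get_preds(ds_type=DatasetType.Valid)
pred_scores = pred_scores.numpy()
pred_labels = [learn.data.classes[i] for i in np.argmax(pred_scores, axis=1)]

w_results = ResultsWidget(
    dataset=learn.data.valid_ds,
    y_score=pred_scores,
    y_label=pred_labels,
)
display(w_results.show())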

Binary file added: media/intro_ic_vis.jpg (43 KiB, image not shown)
Binary file added: media/intro_is_vis.jpg (77 KiB, image not shown)
Binary file added: media/intro_iseg_vis.jpg (42 KiB, image not shown)
Binary file added: media/intro_od_vis.jpg (66 KiB, image not shown)


@ -12,19 +12,20 @@ import pytest
from pathlib import Path
from typing import List
from tempfile import TemporaryDirectory
from utils_ic.datasets import unzip_url, Urls
from utils_cv.common.data import unzip_url
from utils_cv.classification.data import Urls
def path_notebooks():
def path_classification_notebooks():
""" Returns the path of the notebooks folder. """
return os.path.abspath(
os.path.join(os.path.dirname(__file__), os.path.pardir, "notebooks")
os.path.join(os.path.dirname(__file__), os.path.pardir, "classification", "notebooks")
)
@pytest.fixture(scope="module")
def notebooks():
folder_notebooks = path_notebooks()
def classification_notebooks():
folder_notebooks = path_classification_notebooks()
# Path for the notebooks
paths = {
@ -41,10 +42,14 @@ def notebooks():
"11_exploring_hyperparameters": os.path.join(
folder_notebooks, "11_exploring_hyperparameters.ipynb"
),
"deploy_on_ACI": os.path.join(
"21_deployment_on_azure_container_instances": os.path.join(
folder_notebooks,
"21_deployment_on_azure_container_instances.ipynb",
),
"22_deployment_on_azure_kubernetes_service": os.path.join(
folder_notebooks,
"22_deployment_on_azure_kubernetes_service.ipynb",
),
}
return paths
@ -72,7 +77,7 @@ def tmp_session(tmp_path_factory):
@pytest.fixture(scope="session")
def tiny_ic_multidata_path(tmp_session) -> List[Path]:
def tiny_ic_multidata_path(tmp_session) -> List[str]:
""" Returns the path to multiple dataset. """
return [
unzip_url(
@ -83,6 +88,6 @@ def tiny_ic_multidata_path(tmp_session) -> List[Path]:
@pytest.fixture(scope="session")
def tiny_ic_data_path(tmp_session) -> Path:
def tiny_ic_data_path(tmp_session) -> str:
""" Returns the path to the tiny fridge objects dataset. """
return unzip_url(Urls.fridge_objects_tiny_path, tmp_session, exist_ok=True)


@ -0,0 +1,32 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
from PIL import Image
from fastai.vision.data import ImageList
from utils_cv.classification.data import imagenet_labels, downsize_imagelist
def test_imagenet_labels():
# Compare first five labels for quick check
IMAGENET_LABELS_FIRST_FIVE = (
"tench",
"goldfish",
"great_white_shark",
"tiger_shark",
"hammerhead",
)
labels = imagenet_labels()
for i in range(5):
assert labels[i] == IMAGENET_LABELS_FIRST_FIVE[i]
def test_downsize_imagelist(tiny_ic_data_path, tmp):
im_list = ImageList.from_folder(tiny_ic_data_path)
max_dim = 50
downsize_imagelist(im_list, tmp, max_dim)
im_list2 = ImageList.from_folder(tmp)
assert len(im_list) == len(im_list2)
for im_path in im_list2.items:
assert min(Image.open(im_path).size) <= max_dim


@ -5,7 +5,22 @@ import pytest
from fastai.metrics import accuracy, error_rate
from fastai.vision import cnn_learner, models
from fastai.vision import ImageList, imagenet_stats
from utils_ic.fastai_utils import set_random_seed, TrainMetricsRecorder
from utils_cv.classification.model import (
TrainMetricsRecorder,
model_to_learner,
)
def test_model_to_learner():
# Test if the function loads an ImageNet model (ResNet) trainer
learn = model_to_learner(models.resnet34(pretrained=True))
assert len(learn.data.classes) == 1000  # Check ImageNet classes
assert isinstance(learn.model, models.ResNet)
# Test with SqueezeNet
learn = model_to_learner(models.squeezenet1_0())
assert len(learn.data.classes) == 1000
assert isinstance(learn.model, models.SqueezeNet)
@pytest.fixture
@ -21,10 +36,6 @@ def tiny_ic_data(tiny_ic_data_path):
)
def test_set_random_seed():
set_random_seed(1)
def test_train_metrics_recorder(tiny_ic_data):
model = models.resnet18
lr = 1e-4


@ -7,6 +7,7 @@
import os
import glob
import papermill as pm
import pytest
import shutil
# Unless manually modified, python3 should be
@ -16,8 +17,9 @@ KERNEL_NAME = "cvbp"
OUTPUT_NOTEBOOK = "output.ipynb"
def test_webcam_notebook_run(notebooks):
notebook_path = notebooks["00_webcam"]
@pytest.mark.notebooks
def test_00_notebook_run(classification_notebooks):
notebook_path = classification_notebooks["00_webcam"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -26,8 +28,9 @@ def test_webcam_notebook_run(notebooks):
)
def test_01_notebook_run(notebooks, tiny_ic_data_path):
notebook_path = notebooks["01_training_introduction"]
@pytest.mark.notebooks
def test_01_notebook_run(classification_notebooks, tiny_ic_data_path):
notebook_path = classification_notebooks["01_training_introduction"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -38,8 +41,9 @@ def test_01_notebook_run(notebooks, tiny_ic_data_path):
)
def test_02_notebook_run(notebooks, tiny_ic_data_path):
notebook_path = notebooks["02_training_accuracy_vs_speed"]
@pytest.mark.notebooks
def test_02_notebook_run(classification_notebooks, tiny_ic_data_path):
notebook_path = classification_notebooks["02_training_accuracy_vs_speed"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -54,8 +58,9 @@ def test_02_notebook_run(notebooks, tiny_ic_data_path):
)
def test_10_notebook_run(notebooks, tiny_ic_data_path):
notebook_path = notebooks["10_image_annotation"]
@pytest.mark.notebooks
def test_10_notebook_run(classification_notebooks, tiny_ic_data_path):
notebook_path = classification_notebooks["10_image_annotation"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -67,8 +72,9 @@ def test_10_notebook_run(notebooks, tiny_ic_data_path):
)
def test_11_notebook_run(notebooks, tiny_ic_data_path):
notebook_path = notebooks["11_exploring_hyperparameters"]
@pytest.mark.notebooks
def test_11_notebook_run(classification_notebooks, tiny_ic_data_path):
notebook_path = classification_notebooks["11_exploring_hyperparameters"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -84,8 +90,14 @@ def test_11_notebook_run(notebooks, tiny_ic_data_path):
)
def skip_test_deploy_1_notebook_run(notebooks, tiny_ic_data_path):
notebook_path = notebooks["deploy_on_ACI"]
@pytest.mark.notebooks
def skip_test_21_notebook_run(classification_notebooks, tiny_ic_data_path):
""" NOTE - this function is intentionally prefixed with 'skip' so that
pytests bypasses this function
"""
notebook_path = classification_notebooks[
"21_deployment_on_azure_container_instances"
]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -116,7 +128,7 @@ def skip_test_deploy_1_notebook_run(notebooks, tiny_ic_data_path):
except OSError:
pass
# TODO should use temp folder for safe cleanup. Notebook should accept the folder paths via papermill param.
shutil.rmtree(os.path.join(os.getcwd(), "azureml-models"))
shutil.rmtree(os.path.join(os.getcwd(), "models"))
shutil.rmtree(os.path.join(os.getcwd(), "outputs"))
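
Since the notebook tests are now tagged with @pytest.mark.notebooks, the custom marker should be registered so it can be used to select or deselect these slow tests (e.g. pytest -m "not notebooks"). A minimal sketch of such a registration in conftest.py, assuming the project does not already declare the marker in its pytest configuration:

def pytest_configure(config):
    # Register the custom 'notebooks' marker to avoid unknown-mark warnings
    # and to allow `pytest -m notebooks` / `pytest -m "not notebooks"` selection.
    config.addinivalue_line(
        "markers", "notebooks: tests that execute the example notebooks end-to-end"
    )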


@ -2,8 +2,7 @@
# Licensed under the MIT License.
import pytest
import pandas as pd
from utils_ic.parameter_sweeper import *
from utils_cv.classification.parameter_sweeper import *
def _test_sweeper_run(df: pd.DataFrame, df_length: int):


@ -3,7 +3,7 @@
import pytest
import numpy as np
from utils_ic.plot_utils import (
from utils_cv.classification.plot import (
plot_roc_curve,
plot_precision_recall_curve,
plot_pr_roc_curves,
@ -44,7 +44,9 @@ def multiclass_result():
return np.array(MULTI_Y_TRUE), np.array(MULTI_Y_SCORE), MULTI_CLASSES
def test_plot_roc_curve(binaryclass_result_1, binaryclass_result_2, multiclass_result):
def test_plot_roc_curve(
binaryclass_result_1, binaryclass_result_2, multiclass_result
):
# Binary-class plot
y_true, y_score, classes = binaryclass_result_1
plot_roc_curve(y_true, y_score, classes, False)
@ -55,7 +57,9 @@ def test_plot_roc_curve(binaryclass_result_1, binaryclass_result_2, multiclass_r
plot_roc_curve(y_true, y_score, classes, False)
def test_plot_precision_recall_curve(binaryclass_result_1, binaryclass_result_2, multiclass_result):
def test_plot_precision_recall_curve(
binaryclass_result_1, binaryclass_result_2, multiclass_result
):
# Binary-class plot
y_true, y_score, classes = binaryclass_result_1
plot_precision_recall_curve(y_true, y_score, classes, False)
@ -66,7 +70,9 @@ def test_plot_precision_recall_curve(binaryclass_result_1, binaryclass_result_2,
plot_precision_recall_curve(y_true, y_score, classes, False)
def test_plot_pr_roc_curves(binaryclass_result_1, binaryclass_result_2, multiclass_result):
def test_plot_pr_roc_curves(
binaryclass_result_1, binaryclass_result_2, multiclass_result
):
# Binary-class plot
y_true, y_score, classes = binaryclass_result_1
plot_pr_roc_curves(y_true, y_score, classes, False, (1, 1))


@ -3,14 +3,33 @@
import os
import pytest
from pathlib import Path
from PIL import Image
import shutil
from pathlib import Path
from typing import Union
from unittest import mock
from utils_cv.classification.data import Urls
from utils_cv.common.data import (
data_path,
get_files_in_directory,
unzip_url,
root_path,
)
from fastai.vision import ImageList
from utils_ic.datasets import downsize_imagelist, imagenet_labels, unzip_url, Urls
def test_root_path():
s = root_path()
assert isinstance(s, Path) and s != ""
def test_data_path():
s = data_path()
assert isinstance(s, Path) and s != ""
def test_get_files_in_directory(tiny_ic_data_path):
im_dir = os.path.join(tiny_ic_data_path, "can")
assert len(get_files_in_directory(im_dir)) == 22
assert len(get_files_in_directory(im_dir, suffixes=[".jpg"])) == 22
assert len(get_files_in_directory(im_dir, suffixes=[".nonsense"])) == 0
def _test_url_data(url: str, path: Union[Path, str], dir_name: str):
@ -61,28 +80,3 @@ def test_unzip_url_not_exist_ok(tmp_path):
shutil.rmtree(tmp_path / "fridgeObjects")
os.remove(tmp_path / "fridgeObjects.zip")
unzip_url(Urls.fridge_objects_path, tmp_path, exist_ok=False)
def test_imagenet_labels():
# Compare first five labels for quick check
IMAGENET_LABELS_FIRST_FIVE = (
"tench",
"goldfish",
"great_white_shark",
"tiger_shark",
"hammerhead",
)
labels = imagenet_labels()
for i in range(5):
assert labels[i] == IMAGENET_LABELS_FIRST_FIVE[i]
def test_downsize_imagelist(tiny_ic_data_path, tmp):
im_list = ImageList.from_folder(tiny_ic_data_path)
max_dim = 50
downsize_imagelist(im_list, tmp, max_dim)
im_list2 = ImageList.from_folder(tmp)
assert len(im_list) == len(im_list2)
for im_path in im_list2.items:
assert min(Image.open(im_path).size) <= max_dim


@ -3,17 +3,17 @@
import os
import sys
sys.path.extend([".", "..", "../..", "../../.."])
from utils_cv.generate_deployment_env import generate_yaml
from utils_ic.common import ic_root_path
from utils_cv.common.data import root_path
from utils_cv.common.deployment import generate_yaml
def test_generate_yaml():
"""Tests creation of deployment-specific yaml file
from existing image_classification/environment.yml"""
generate_yaml(
directory=ic_root_path(),
directory=os.path.join(root_path(), "classification"),
ref_filename="environment.yml",
needed_libraries=["fastai", "pytorch"],
conda_filename="mytestyml.yml",


@ -2,7 +2,7 @@
# Licensed under the MIT License.
import torch.cuda as cuda
from utils_ic.gpu_utils import gpu_info, which_processor
from utils_cv.common.gpu import gpu_info, which_processor
def test_gpu_info():


@ -4,24 +4,17 @@
import os
import numpy as np
from pathlib import Path
from PIL import Image
import pytest
from utils_ic.common import data_path, get_files_in_directory, ic_root_path, im_height, im_width, im_width_height
def test_ic_root_path():
s = ic_root_path()
assert isinstance(s, str) and s != ""
def test_data_path():
s = data_path()
assert isinstance(s, str) and s != ""
from utils_cv.common.image import (
im_width,
im_height,
im_width_height,
im2base64,
ims2strlist,
)
def test_im_width(tiny_ic_data_path):
im_path = Path(tiny_ic_data_path)/"can"/"1.jpg"
im_path = Path(tiny_ic_data_path) / "can" / "1.jpg"
assert (
im_width(im_path) == 499
), "Expected image width of 499, but got {}".format(im_width(im_path))
@ -32,7 +25,7 @@ def test_im_width(tiny_ic_data_path):
def test_im_height(tiny_ic_data_path):
im_path = Path(tiny_ic_data_path)/"can"/"1.jpg"
im_path = Path(tiny_ic_data_path) / "can" / "1.jpg"
assert (
im_height(im_path) == 665
), "Expected image height of 665, but got ".format(im_width(60))
@ -43,7 +36,7 @@ def test_im_height(tiny_ic_data_path):
def test_im_width_height(tiny_ic_data_path):
im_path = Path(tiny_ic_data_path)/"can"/"1.jpg"
im_path = Path(tiny_ic_data_path) / "can" / "1.jpg"
w, h = im_width_height(im_path)
assert w == 499 and h == 665
im = np.zeros((100, 50))
@ -51,8 +44,18 @@ def test_im_width_height(tiny_ic_data_path):
assert w == 50 and h == 100
def test_get_files_in_directory(tiny_ic_data_path):
im_dir = os.path.join(tiny_ic_data_path, "can")
assert len(get_files_in_directory(im_dir)) == 22
assert len(get_files_in_directory(im_dir, suffixes=[".jpg"])) == 22
assert len(get_files_in_directory(im_dir, suffixes=[".nonsense"])) == 0
def test_ims2strlist(tiny_ic_data_path):
""" Tests extraction of image content and conversion into string"""
im_list = [
os.path.join(tiny_ic_data_path, "can", "1.jpg"),
os.path.join(tiny_ic_data_path, "carton", "34.jpg"),
]
im_string_list = ims2strlist(im_list)
assert isinstance(im_string_list, list)
def test_im2base64(tiny_ic_data_path):
""" Tests extraction of image content and conversion into bytes"""
im_name = os.path.join(tiny_ic_data_path, "can", "1.jpg")
im_content = im2base64(im_name)
assert isinstance(im_content, bytes)
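
On the receiving side (for example inside a web-service scoring script), the base64 payload produced by im2base64/ims2strlist can be turned back into a PIL image. A minimal sketch of that decoding step, independent of this repository's helpers:

import base64
from io import BytesIO
from PIL import Image

def base64_to_pil(im_b64) -> Image.Image:
    """Decode a base64-encoded image (bytes or str) back into a PIL Image."""
    return Image.open(BytesIO(base64.b64decode(im_b64)))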


@ -0,0 +1,5 @@
from utils_cv.common.misc import set_random_seed
def test_set_random_seed():
set_random_seed(1)


@ -0,0 +1,90 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import os
import requests
from pathlib import Path
from typing import List, Union
from urllib.parse import urljoin
from fastai.vision import ImageList
from PIL import Image
from tqdm import tqdm
class Urls:
# for now hardcoding base url into Urls class
base = "https://cvbp.blob.core.windows.net/public/datasets/image_classification/"
# Same link Keras is using
imagenet_labels_json = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
# datasets
fridge_objects_path = urljoin(base, "fridgeObjects.zip")
fridge_objects_watermark_path = urljoin(base, "fridgeObjectsWatermark.zip")
fridge_objects_tiny_path = urljoin(base, "fridgeObjectsTiny.zip")
fridge_objects_watermark_tiny_path = urljoin(
base, "fridgeObjectsWatermarkTiny.zip"
)
multilabel_fridge_objects_path = urljoin(
base, "multilabelFridgeObjects.zip"
)
food_101_subset_path = urljoin(base, "food101Subset.zip")
fashion_texture_path = urljoin(base, "fashionTexture.zip")
flickr_logos_32_subset_path = urljoin(base, "flickrLogos32Subset.zip")
lettuce_path = urljoin(base, "lettuce.zip")
recycle_path = urljoin(base, "recycle_v3.zip")
@classmethod
def all(cls) -> List[str]:
return [v for k, v in cls.__dict__.items() if k.endswith("_path")]
def imagenet_labels() -> list:
"""List of ImageNet labels with the original index.
Returns:
list: ImageNet labels
"""
labels = requests.get(Urls.imagenet_labels_json).json()
return [labels[str(k)][1] for k in range(len(labels))]
def downsize_imagelist(
im_list: ImageList, out_dir: Union[Path, str], dim: int = 500
):
"""Aspect-ratio preserving down-sizing of each image in the ImageList {im_list}
so that min(width,height) is at most {dim} pixels.
Writes each image to the directory {out_dir} while preserving the original
subdirectory structure.
Args:
im_list: Fastai ImageList object.
out_dir: Output root location.
dim: Maximum size of the smaller image dimension (width or height) after resizing.
"""
assert (
len(im_list.items) > 0
), "Input ImageList does not contain any images."
# Find parent directory which all images have in common
im_paths = [str(s) for s in im_list.items]
src_root_dir = os.path.commonprefix(im_paths)
# Loop over all images
for src_path in tqdm(im_list.items):
# Load and optionally down-size image
im = Image.open(src_path).convert('RGB')
scale = float(dim) / min(im.size)
if scale < 1.0:
new_size = [int(round(f * scale)) for f in im.size]
im = im.resize(new_size, resample=Image.LANCZOS)
# Write image
src_rel_path = os.path.relpath(src_path, src_root_dir)
dst_path = os.path.join(out_dir, src_rel_path)
assert os.path.normpath(src_rel_path) != os.path.normpath(
dst_path
), "Image source and destination path should not be the same: {src_rel_path}"
os.makedirs(os.path.dirname(dst_path), exist_ok=True)
im.save(dst_path)
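
A usage sketch for the helpers above (the folder paths are hypothetical): downsize every image found under a local dataset folder so that its smaller side is at most 300 pixels, mirroring the sub-folder structure into a new output directory.

from fastai.vision import ImageList
from utils_cv.classification.data import downsize_imagelist, imagenet_labels

labels = imagenet_labels()  # 1000 ImageNet class names, e.g. 'tench', 'goldfish', ...
im_list = ImageList.from_folder("data/fridgeObjects")  # hypothetical local dataset
downsize_imagelist(im_list, out_dir="data/fridgeObjects_small", dim=300)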


@ -1,34 +1,51 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
import random
from time import time
from typing import Any, List
from fastai.basic_train import LearnerCallback
from fastai.core import PBar
from fastai.torch_core import TensorOrNumList
from fastai.vision import (
Learner, nn,
ImageDataBunch, imagenet_stats,
)
from fastprogress.fastprogress import format_time
from IPython.display import display
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import numpy as np
import torch
from torch import Tensor
from utils_cv.classification.data import imagenet_labels
def set_random_seed(s):
"""Set random seed for fastai
https://docs.fast.ai/dev/test.html#getting-reproducible-results
# Default ImageNet models image size
IMAGENET_IM_SIZE = 224
def model_to_learner(
model: nn.Module, im_size: int = IMAGENET_IM_SIZE
) -> Learner:
"""Create Learner based on pyTorch ImageNet model.
Args:
model (nn.Module): Base ImageNet model. E.g. models.resnet18()
im_size (int): Image size the model will expect to have.
Returns:
Learner: a model trainer for prediction
"""
np.random.seed(s)
torch.manual_seed(s)
random.seed(s)
if torch.cuda.is_available():
torch.cuda.manual_seed(s)
torch.cuda.manual_seed_all(s)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Currently, the fast.ai API requires a DataBunch to create a model trainer (Learner).
# To use the Learner for prediction without retraining, we pass an empty DataBunch.
# single_from_classes is deprecated, but it is the simplest workaround here.
# Create ImageNet data spec as an empty DataBunch.
empty_data = ImageDataBunch.single_from_classes(
"", classes=imagenet_labels(), size=im_size
).normalize(imagenet_stats)
return Learner(empty_data, model)
class TrainMetricsRecorder(LearnerCallback):
@ -249,4 +266,3 @@ class TrainMetricsRecorder(LearnerCallback):
def last_valid_metrics(self):
"""Validation set metrics from the last epoch"""
return self.valid_metrics[-1]
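
A sketch of how model_to_learner is typically used for quick ImageNet inference on a single image (the image path is hypothetical):

from fastai.vision import models, open_image
from utils_cv.classification.model import model_to_learner

# Wrap a pretrained ImageNet ResNet into a Learner for prediction only;
# no training data is needed because an empty DataBunch is created internally.
learn = model_to_learner(models.resnet18(pretrained=True), im_size=224)

im = open_image("data/can/1.jpg")  # hypothetical image path
category, class_idx, probabilities = learn.predict(im)
print(category, probabilities.max())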

Some files in this commit were not shown because too many files have changed.