jiata 2019-09-30 16:58:53 +00:00
Parent 6080b71a8a bfda220966
Commit 7848f91797
176 changed files: 7427 additions and 920 deletions

View file

@ -10,6 +10,7 @@
# E402 module level import not at top of file
# E731 do not assign a lambda expression, use a def
# F821 undefined name 'get_ipython' --> from generated python files using nbconvert
# W605 invalid escape sequence '\W'
ignore = E203, E266, E501, W503, F405, E402, E731, F821
ignore = E203, E266, E501, W503, F405, E402, E731, F821, W605
max-line-length = 79

View file

@ -1,15 +1,28 @@
# Computer Vision
In recent years, we have seen extraordinary growth in Computer Vision, with applications in face recognition, image understanding, search, drones, mapping, and semi-autonomous and autonomous vehicles. At the core of many of these applications are visual recognition tasks such as image classification, object detection and image similarity. Researchers have been applying newer deep learning methods to achieve state-of-the-art (SOTA) results on these challenging visual recognition tasks.
This repository provides examples and best practice guidelines for building computer vision systems. The focus of the repository is on state-of-the-art methods that are popular among researchers and practitioners working on problems involving image recognition, object detection and image similarity.
These examples are provided as Jupyter notebooks and common utility functions. All examples use PyTorch as the deep learning library.
This repository provides examples and best practice guidelines for building computer vision systems. All examples are given as Jupyter notebooks, and use PyTorch as the deep learning library.
## Overview
The goal of this repository is to accelerate the development of computer vision applications. Rather than creating implementations from scratch, the focus is on providing examples and links to existing state-of-the-art libraries. In addition, having worked in this space for many years, we aim to answer common questions, point out frequently observed pitfalls, and show how to use the cloud for training and deployment.
The current main priority is to support image classification. Additionally, we provide a basic (but often sufficiently accurate) example for image similarity. See the [projects](https://github.com/Microsoft/ComputerVision/projects) and [milestones](https://github.com/Microsoft/ComputerVision/milestones) pages in this repository for more details.
We hope that these examples and utilities can reduce the “time to market” by orders of magnitude, simplifying the experience from defining the business problem to developing a solution. In addition, the example notebooks serve as guidelines and showcase best practices and usage of the tools in a wide variety of languages.
## Scenarios
The following is a summary of commonly used Computer Vision scenarios that are covered in this repository. For each of these scenarios, we give you the tools to effectively build your own model. This ranges from simpler tasks, such as fine-tuning your own model on your own data, to more complex tasks such as hard-negative mining and even model deployment. See all supported scenarios [here](scenarios).
| Scenario | Description |
| -------- | ----------- |
| [Classification](scenarios/classification) | Image Classification is a supervised machine learning technique that allows you to learn and predict the category of a given image. |
| [Similarity](scenarios/similarity) | Image Similarity is a way to compute a similarity score given a pair of images. Given an image, it allows you to identify the most similar image in a given dataset. |
| [Detection](scenarios/detection) | Object Detection is a supervised machine learning technique that allows you to detect the bounding box of an object within an image. |
## Getting Started
To get started:
1. (Optional) Create an Azure Data Science Virtual Machine with e.g. a V100 GPU ([instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/provision-deep-learning-dsvm), [price table](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/)).
@ -19,6 +32,7 @@ To get started:
git clone https://github.com/Microsoft/ComputerVision
```
1. Install the conda environment; you'll find the `environment.yml` file in the root directory. To build the conda environment:
> If you are using Windows, remove `- pycocotools>=2.0` from the `environment.yml`
```
conda env create -f environment.yml
```
@ -31,14 +45,14 @@ To get started:
```
jupyter labextension install jupyter-webrtc
```
> If you are using Windows, run the following at this point:
> - `pip install Cython`
> - `pip install git+https://github.com/philferriere/cocoapi.git#egg=pycocotools^&subdirectory=PythonAPI`
1. Start the Jupyter notebook server
```
jupyter notebook
```
1. At this point, you should be able to run the notebooks in this repo. Explore our notebooks on the following computer vision domains. Make sure to change the kernel to "Python (cv)".
- [/classification](classification#notebooks)
- [/similarity](similarity#notebooks)
1. At this point, you should be able to run the [notebooks](#scenarios) in this repo.
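> The notebooks expect the "Python (cv)" kernel. If it is not listed in Jupyter, a minimal sketch for registering it from the conda environment (the environment name `cv` follows this repo's setup):
```
conda activate cv
python -m ipykernel install --user --name cv --display-name "Python (cv)"
```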
As an alternative to the steps above, and if one wants to install only
the 'utils_cv' library (without creating a new conda environment),
@ -51,7 +65,6 @@ pip install git+https://github.com/microsoft/ComputerVision.git@master#egg=utils
or by downloading the repo and then running `pip install .` in the
root directory.
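A minimal sketch of that route (the last line is only an optional check that the `utils_cv` package imports):
```
git clone https://github.com/Microsoft/ComputerVision
cd ComputerVision
pip install .
python -c "import utils_cv; print(utils_cv.__file__)"
```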
## Introduction
Note that for certain computer vision problems, you may not need to build your own models. Instead, pre-built or easily customizable solutions exist which do not require any custom coding or machine learning expertise. We strongly recommend evaluating if these can sufficiently solve your problem. If these solutions are not applicable, or the accuracy of these solutions is not sufficient, then resorting to more complex and time-consuming custom approaches may be necessary.

View file

@ -1,50 +0,0 @@
# Image classification
This directory provides examples and best practices for building image classification systems. Our goal is to enable users to easily and quickly train high-accuracy classifiers on their own datasets. We provide example notebooks with pre-set default parameters that are shown to work well on a variety of data sets. We also include extensive documentation of common pitfalls and best practices. Additionally, we show how Azure, Microsoft's cloud computing platform, can be used to speed up training on large data sets or deploy models as web services.
We recommend using PyTorch as a Deep Learning platform for its ease of use, simplicity when debugging, and popularity in the data science community. For Computer Vision functionality, we also rely heavily on [fast.ai](https://github.com/fastai/fastai), a PyTorch data science library which comes with rich deep learning features and extensive documentation. We highly recommend watching the [2019 fast.ai lecture series](https://course.fast.ai/videos/?lesson=1) video to understand the underlying technology. Fast.ai's [documentation](https://docs.fast.ai/) is also a valuable resource.
## Frequently asked questions
Answers to Frequently Asked Questions such as "How many images do I need to train a model?" or "How to annotate images?" can be found in the [FAQ.md](FAQ.md) file.
## Notebooks
We provide several notebooks to show how image classification algorithms are designed, evaluated and operationalized. Notebooks starting with `0` are intended to be run sequentially, as there are dependencies between them. These notebooks contain introductory "required" material. Notebooks starting with `1` can be considered optional and contain more complex and specialized topics.
While all notebooks can be executed in Windows, we have found that fast.ai is much faster on the Linux operating system. Additionally, using GPU dramatically improves training speeds. We suggest using an Azure Data Science Virtual Machine with V100 GPU ([instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/provision-deep-learning-dsvm), [price table](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/)).
We have also found that some browsers do not render Jupyter widgets correctly. If you have issues, try using an alternative browser, such as Edge or Chrome.
| Notebook name | Description |
| --- | --- |
| [00_webcam.ipynb](notebooks/00_webcam.ipynb)| Demonstrates inference on an image from your computer's webcam using a pre-trained model. |
| [01_training_introduction.ipynb](notebooks/01_training_introduction.ipynb)| Introduces some of the basic concepts around model training and evaluation.|
| [02_multilabel_classification.ipynb](notebooks/02_multilabel_classification.ipynb)| Introduces multi-label classification and the key differences between training multi-label and single-label classification models.|
| [03_training_accuracy_vs_speed.ipynb](notebooks/03_training_accuracy_vs_speed.ipynb)| Trains a model with high accuracy vs one with a fast inferencing speed. *<font color="orange"> Use this to train on your own datasets! </font>* |
| [10_image_annotation.ipynb](notebooks/10_image_annotation.ipynb)| A simple UI to annotate images. |
| [11_exploring_hyperparameters.ipynb](notebooks/11_exploring_hyperparameters.ipynb)| Finds optimal model parameters using grid search. |
| [12_hard_negative_sampling.ipynb](notebooks/12_hard_negative_sampling.ipynb)| Demonstrates how to use hard negatives to improve your model performance. |
| [20_azure_workspace_setup.ipynb](notebooks/20_azure_workspace_setup.ipynb)| Sets up your Azure resources and Azure Machine Learning workspace. |
| [21_deployment_on_azure_container_instances.ipynb](notebooks/21_deployment_on_azure_container_instances.ipynb)| Deploys a trained model exposed on a REST API using Azure Container Instances (ACI). |
| [22_deployment_on_azure_kubernetes_service.ipynb](notebooks/22_deployment_on_azure_kubernetes_service.ipynb)| Deploys a trained model exposed on a REST API using the Azure Kubernetes Service (AKS). |
| [23_aci_aks_web_service_testing.ipynb](notebooks/23_aci_aks_web_service_testing.ipynb)| Tests the deployed models on either ACI or AKS. |
| [24_exploring_hyperparameters_on_azureml.ipynb](notebooks/24_exploring_hyperparameters_on_azureml.ipynb)| Performs highly parallel parameter sweeping using AzureML's HyperDrive. |
## Azure-enhanced notebooks
Azure products and services are used in certain notebooks to enhance the efficiency of developing classification systems at scale.
To successfully run these notebooks, the users **need an Azure subscription** or can [use Azure for free](https://azure.microsoft.com/en-us/free/).
The Azure products featured in the notebooks include:
* [Azure Machine Learning service](https://azure.microsoft.com/en-us/services/machine-learning-service/) - Azure Machine Learning service is a cloud service used to train, deploy, automate, and manage machine learning models, all at the broad scale that the cloud provides. It is used across various notebooks for the AI model development related tasks such as deployment. [20_azure_workspace_setup](notebooks/20_azure_workspace_setup.ipynb) shows how to set up your Azure resources and connect to an Azure Machine Learning service workspace.
* [Azure Container Instance](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aci) - You can use Azure Machine Learning service to host your classification model in a web service deployment on Azure Container Instance (ACI). ACI is good for low scale, CPU-based workloads. [21_deployment_on_azure_container_instances](notebooks/21_deployment_on_azure_container_instances.ipynb) explains how to deploy a web service to ACI through Azure ML.
* [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aks) - You can use Azure Machine Learning service to host your classification model in a web service deployment on Azure Kubernetes Service (AKS). AKS is good for high-scale production deployments and provides autoscaling and fast response times. [22_deployment_on_azure_kubernetes_service](notebooks/22_deployment_on_azure_kubernetes_service.ipynb) explains how to deploy a web service to AKS through Azure ML.

Binary data removed (classification/notebooks/media/): ACI_diagram_2.jpg, acr_manifest.jpg, acr_tag.jpg, anno_ui.jpg, datastore.jpg, deployments.jpg, docker_images.jpg, experiment.jpg, hard_neg.jpg, hard_neg_ex1.jpg, hard_neg_ex2.jpg, ip_address.jpg, models.jpg, output.PNG, predictions.jpg, website_ui.jpg, widget.PNG, workspace.jpg, and other media images (binary files not shown; sizes 11 to 228 KiB).

View file

@ -7,4 +7,4 @@ Each project should live in its own subdirectory ```/contrib/<project>``` and co
| Directory | Project description |
|---|---|
| | |
| vm_builder | This script helps users easily create an Ubuntu Data Science Virtual Machine with a GPU and the Computer Vision repo installed and ready to be used. If you find the script to be outdated or not working, you can create the VM using the Azure portal or the Azure CLI tool with a few more steps. |

View file

@ -0,0 +1,22 @@
# VM Builder
This mini project will help you set up a Virtual Machine with the Computer
Vision repo installed on it.
You can use this project simply by running:
```bash
python vm_builder.py
```
This will kick off an interactive bash session that will create your VM on
Azure and install the repo on it.
Once your VM has been set up, you can SSH tunnel to port 8899 and you'll find the Computer Vision repo set up and ready to be used.
```bash
ssh -L 8899:localhost:8899 <username>@<ip-address>
```
Visit localhost:8899 in your browser to start using the notebooks.
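Once you are done working, the VM can be managed from the Azure CLI. The commands below mirror the ones printed by the script's exit dialogue, with `<vm-name>` standing in for the name you chose:
```bash
# Stop / start the VM to control cost
az vm stop -g <vm-name>-rg -n <vm-name>
az vm start -g <vm-name>-rg -n <vm-name>

# Delete the VM and its resource group (unrecoverable)
az group delete -n <vm-name>-rg
```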

View file

@ -0,0 +1,624 @@
import json
import re
import subprocess
import textwrap
from shutil import which
from prompt_toolkit import prompt
from prompt_toolkit import print_formatted_text, HTML
# variables
UBUNTU_DSVM_IMAGE = (
"microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest"
)
vm_options = dict(
gpu=dict(size="Standard_NC6s_v3", family="NCSv3", cores=6),
cpu=dict(size="Standard_DS3_v2", family="DSv2", cores=4),
)
# list of cmds
account_list_cmd = "az account list -o table"
sub_id_list_cmd = 'az account list --query []."id" -o tsv'
region_list_cmd = 'az account list-locations --query []."name" -o tsv'
silent_login_cmd = 'az login --query "[?n]|[0]"'
set_account_sub_cmd = "az account set -s {}"
provision_rg_cmd = "az group create --name {} --location {}"
provision_vm_cmd = (
"az vm create --resource-group {} --name {} --size {} --image {} "
"--admin-username {} --admin-password {} --authentication-type password"
)
vm_ip_cmd = (
"az vm show -d --resource-group {}-rg --name {} "
'--query "publicIps" -o json'
)
quota_cmd = (
"az vm list-usage --location {} --query "
"[?contains(localName,'{}')].{{max:limit,current:currentValue}}"
)
install_repo_cmd = (
"az vm run-command invoke -g {}-rg -n {} "
"--command-id RunShellScript --scripts"
)
# install repo invoke script
install_repo_script = """<<<EOF
ls
EOF
"""
tmp = """<<<EOF
rm -rf computervision
conda remove -n cv --all
git clone https://www.github.com/microsoft/computervision
cd computervision
conda env create -f environment.yml
tmux
jupyter notebook --port 8888
EOF"""
def is_installed(cli_app: str) -> bool:
"""Check whether `name` is on PATH and marked as executable."""
return which(cli_app) is not None
def validate_password(password: str) -> bool:
""" Checks that the password is valid.
Args:
password: password string
Returns: True if valid, else false.
"""
if len(password) < 12 or len(password) > 123:
print_formatted_text(
HTML(
(
"<ansired>Input must be between 12 and 123 characters. "
"Please try again.</ansired>"
)
)
)
return False
if (
len([c for c in password if c.islower()]) <= 0
or len([c for c in password if c.isupper()]) <= 0
):
print_formatted_text(
HTML(
(
"<ansired>Input must contain a upper and a lower case "
"character. Please try again.</ansired>"
)
)
)
return False
if len([c for c in password if c.isdigit()]) <= 0:
print_formatted_text(
HTML(
"<ansired>Input must contain a digit. Please try again.</ansired>"
)
)
return False
if len(re.findall("[\W_]", password)) <= 0:
print_formatted_text(
HTML(
(
"<ansired>Input must contain a special character. "
"Please try again.</ansired>"
)
)
)
return False
return True
def validate_vm_name(vm_name) -> bool:
""" Checks that the vm name is valid.
Args:
vm_name: the name of the vm to check
Returns: True if valid, else false.
"""
# subtract 3 from the max length because the resource group name is generated as "{vm_name}-rg"
if len(vm_name) < 1 or len(vm_name) > (64 - 3):
print_formatted_text(
HTML(
(
f"<ansired>Input must be between 1 and {64-3} characters. "
"Please try again.</ansired>"
)
)
)
return False
if not bool(re.match("^[A-Za-z0-9-]*$", vm_name)):
print_formatted_text(
HTML(
(
"<ansired>You can only use alphanumeric characters and "
"hyphens. Please try again.</ansired>"
)
)
)
return False
return True
def check_valid_yes_no_response(input: str) -> bool:
if input in ("Y", "y", "N", "n"):
return True
else:
print_formatted_text(
HTML("<ansired>Enter 'y' or 'n'. Please try again.</ansired>")
)
return False
def yes_no_prompter(msg: str) -> bool:
cond = None
valid_response = False
while not valid_response:
cond = prompt(msg)
valid_response = check_valid_yes_no_response(cond)
return True if cond in ("Y", "y") else False
def prompt_subscription_id() -> str:
""" Prompt for subscription id. """
subscription_id = None
subscription_is_valid = False
results = subprocess.run(
sub_id_list_cmd.split(" "), stdout=subprocess.PIPE
)
subscription_ids = results.stdout.decode("utf-8").strip().split("\n")
while not subscription_is_valid:
subscription_id = prompt(
("Enter your subscription id " "(copy & paste it from above): ")
)
if subscription_id in subscription_ids:
subscription_is_valid = True
else:
print_formatted_text(
HTML(
(
"<ansired>The subscription id you entered is not "
"valid. Please try again.</ansired>"
)
)
)
return subscription_id
def prompt_vm_name() -> str:
""" Prompt for VM name. """
vm_name = None
vm_name_is_valid = False
while not vm_name_is_valid:
vm_name = prompt(
f"Enter a name for your vm (ex. 'cv-datascience-vm'): "
)
vm_name_is_valid = validate_vm_name(vm_name)
return vm_name
def prompt_region() -> str:
""" Prompt for region. """
region = None
region_is_valid = False
results = subprocess.run(
region_list_cmd.split(" "), stdout=subprocess.PIPE
)
valid_regions = results.stdout.decode("utf-8").strip().split("\n")
while not region_is_valid:
region = prompt(f"Enter a region for your vm (ex. 'eastus'): ")
if region in valid_regions:
region_is_valid = True
else:
print_formatted_text(
HTML(
textwrap.dedent(
"""\
<ansired>The region you entered is invalid. You can run
`az account list-locations` to see a list of the valid
regions. Please try again.</ansired>\
"""
)
)
)
return region
def prompt_username() -> str:
""" Prompt username. """
username = None
username_is_valid = False
while not username_is_valid:
username = prompt("Enter a username: ")
if len(username) > 0:
username_is_valid = True
else:
print_formatted_text(
HTML(
(
"<ansired>Username cannot be empty. "
"Please try again.</ansired>"
)
)
)
return username
def prompt_password() -> str:
""" Prompt for password. """
password = None
password_is_valid = False
while not password_is_valid:
password = prompt("Enter a password: ", is_password=True)
if not validate_password(password):
continue
password_match = prompt(
"Enter your password again: ", is_password=True
)
if password == password_match:
password_is_valid = True
else:
print_formatted_text(
HTML(
(
"<ansired>Your passwords do not match. Please try "
"again.</ansired>"
)
)
)
return password
def prompt_use_gpu() -> bool:
""" Prompt for GPU or CPU. """
return yes_no_prompter(
(
"Do you want to use a GPU-enabled VM (It will incur a "
"higher cost) [y/n]: "
)
)
def prompt_use_cpu_instead() -> bool:
""" Prompt switch to using CPU. """
return yes_no_prompter(
(
"Do you want to switch to using a CPU instead? (This will "
"likely solve your out-of-quota problem) [y/n]: "
)
)
def get_available_quota(region: str, vm_family: str) -> int:
""" Get available quota of the subscription in the specified region.
Args:
region: the region to check
vm_family: the vm family to check
Returns: the available quota
"""
results = subprocess.run(
quota_cmd.format(region, vm_family).split(" "), stdout=subprocess.PIPE
)
quota = json.loads("".join(results.stdout.decode("utf-8")))
return int(quota[0]["max"]) - int(quota[0]["current"])
def print_intro_dialogue():
print_formatted_text(
HTML(
textwrap.dedent(
"""
Azure Data Science Virtual Machine Builder
This utility will help you create an Azure Data Science Ubuntu Virtual
Machine that you will be able to run your notebooks in. The VM will
be based on the Ubuntu DSVM image.
For more information about Ubuntu DSVMs, see here:
https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro
This utility will let you select a GPU machine or a CPU machine.
The GPU machine specs:
- size: Standard_NC6s_v3 (NVIDIA Tesla V100 GPUs)
- family: NC6s
- cores: 6
The CPU machine specs:
- size: Standard_DS3_v2 (Intel Xeon® E5-2673 v3 2.4 GHz (Haswell))
- family: DSv2
- cores: 4
Pricing information on the SKUs can be found here:
https://azure.microsoft.com/en-us/pricing/details/virtual-machines
To use this utility, you must have an Azure subscription which you can
get from azure.microsoft.com.
Answer the questions below to setup your machine.
------------------------------------------
"""
)
)
)
def check_az_cli_installed():
if not is_installed("az"):
print(
textwrap.dedent(
"""\
You must have the Azure CLI installed. For more information on
installing the Azure CLI, see here:
https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest
"""
)
)
exit(0)
def check_logged_in() -> bool:
print("Checking to see if you are logged in...")
results = subprocess.run(
account_list_cmd.split(" "),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
return False if "az login" in str(results.stderr) else True
def log_in(logged_in: bool):
if not logged_in:
subprocess.run(silent_login_cmd.split(" "))
print("\n")
else:
print_formatted_text(
HTML(
(
"<ansigreen>Looks like you're already logged "
"in.</ansigreen>\n"
)
)
)
def show_accounts():
print("Here is a list of your subscriptions:")
results = subprocess.run(
account_list_cmd.split(" "), stdout=subprocess.PIPE
)
print_formatted_text(
HTML(f"<ansigreen>{results.stdout.decode('utf-8')}</ansigreen>")
)
def check_quota(region: str, vm: dict, subscription_id: str) -> dict:
if get_available_quota(region, vm["family"]) < vm["cores"]:
print_formatted_text(
HTML(
textwrap.dedent(
f"""\
<ansired>
The subscription '{subscription_id}' does not have enough
cores of {vm['family']} in the region: {region}.
To request more cores:
https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request
(If you selected GPU, you may try using CPU instead.)
</ansired>\
"""
)
)
)
if prompt_use_cpu_instead():
vm = vm_options["cpu"]
else:
print_formatted_text(HTML("Exiting.."))
exit()
return vm
def create_rg(vm_name: str, region: str):
print_formatted_text(
HTML(("\n<ansiyellow>Creating the resource group.</ansiyellow>"))
)
results = subprocess.run(
provision_rg_cmd.format(f"{vm_name}-rg", region).split(" "),
stdout=subprocess.PIPE,
)
if "Succeeded" in results.stdout.decode("utf-8"):
print_formatted_text(
HTML(
(
"<ansigreen>Your resource group was "
"successfully created.</ansigreen>\n"
)
)
)
def create_vm(vm_name: str, vm: dict, username: str, password: str):
print_formatted_text(
HTML(
(
"<ansiyellow>Creating the Data Science VM. "
"This may take up a few minutes...</ansiyellow>"
)
)
)
subprocess.run(
provision_vm_cmd.format(
f"{vm_name}-rg",
vm_name,
vm["size"],
UBUNTU_DSVM_IMAGE,
username,
password,
).split(" "),
stdout=subprocess.PIPE,
)
def get_vm_ip(vm_name: str) -> str:
results = subprocess.run(
vm_ip_cmd.format(vm_name, vm_name).split(" "),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
vm_ip = results.stdout.decode("utf-8").strip().strip('"')
if len(vm_ip) > 0:
print_formatted_text(
HTML("<ansigreen>VM creation succeeded.</ansigreen>\n")
)
return vm_ip
def install_repo(username: str, password: str, vm_ip: str, vm_name: str):
print_formatted_text(
HTML("<ansiyellow>Setting up your machine...</ansiyellow>")
)
invoke_cmd = install_repo_cmd.format(vm_name, vm_name)
cmds = invoke_cmd.split(" ")
cmds.append(
f"""<<<EOF
export PATH=/anaconda/bin:$PATH
conda remove -n cv --all
cd /home/{username}
rm -rf computervision
git clone https://www.github.com/microsoft/computervision
chmod 777 computervision
cd computervision
conda env create -f environment.yml
source activate cv
python -m ipykernel install --user --name cv --display-name "Python (cv)"
jupyter notebook --port 8899 --allow-root --NotebookApp.token='' --NotebookApp.password='' &
EOF"""
)
subprocess.run(cmds, stdout=subprocess.PIPE)
print_formatted_text(
HTML(
(
"<ansigreen>Successfully installed the repo "
"on the machine.</ansigreen>\n"
)
)
)
def print_exit_dialogue(
vm_name: str, vm_ip: str, region: str, username: str, subscription_id: str
):
print_formatted_text(
HTML(
textwrap.dedent(
f"""
DSVM creation is complete. We recommend saving the details below.
<ansiyellow>
VM information:
- vm_name: {vm_name}
- ip: {vm_ip}
- region: {region}
- username: {username}
- password: ****
- resource_group: {vm_name}-rg
- subscription_id: {subscription_id}
</ansiyellow>
To start/stop VM:
<ansiyellow>
$az vm stop -g {vm_name}-rg -n {vm_name}
$az vm start -g {vm_name}-rg -n {vm_name}
</ansiyellow>
To connect via ssh and tunnel:
<ansiyellow>
$ssh -L 8899:localhost:8899 {username}@{vm_ip}
</ansiyellow>
To delete the VM (this command is unrecoverable):
<ansiyellow>
$az group delete -n {vm_name}-rg
</ansiyellow>
Please remember that virtual machines will incur a cost on your
Azure subscription. Stop your machine when you are not using it
to minimize that cost.\
"""
)
)
)
def vm_builder() -> None:
""" Interaction session to create a data science vm. """
# print intro dialogue
print_intro_dialogue()
# validate active user
prompt("Press enter to continue...\n")
# check that az cli is installed
check_az_cli_installed()
# check if we are logged in
logged_in = check_logged_in()
# login to the az cli and suppress output
log_in(logged_in)
# show account sub list
show_accounts()
# prompt fields
subscription_id = prompt_subscription_id()
vm_name = prompt_vm_name()
region = prompt_region()
use_gpu = prompt_use_gpu()
username = prompt_username()
password = prompt_password()
# set GPU
vm = vm_options["gpu"] if use_gpu else vm_options["cpu"]
# check quota
vm = check_quota(region, vm, subscription_id)
# set sub id
subprocess.run(set_account_sub_cmd.format(subscription_id).split(" "))
# provision rg
create_rg(vm_name, region)
# create vm
create_vm(vm_name, vm, username, password)
# get vm ip
vm_ip = get_vm_ip(vm_name)
# install cvbp on dsvm
install_repo(username, password, vm_ip, vm_name)
# exit message
print_exit_dialogue(vm_name, vm_ip, region, username, subscription_id)
if __name__ == "__main__":
vm_builder()

View file

@ -17,9 +17,9 @@ channels:
- fastai
dependencies:
- python==3.6.8
- pytorch==1.0.0
- torchvision
- fastai==1.0.48
- pytorch>=1.2.0
- torchvision>=0.3.0
- fastai==1.0.57
- ipykernel>=4.6.1
- jupyter>=1.0.0
- pytest>=3.6.4
@ -34,8 +34,9 @@ dependencies:
- pre-commit>=1.14.4
- pyyaml>=5.1.2
- requests>=2.22.0
- cython
- cython>=0.29.1
- pip:
- nvidia-ml-py3
- nteract-scrapbook
- azureml-sdk[notebooks,contrib]>=1.0.30
- git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI

View file

View file

@ -59,21 +59,21 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fast.ai: 1.0.48\n",
"Fast.ai (Torch) is using GPU: Tesla V100-PCIE-16GB\n"
"Fast.ai: 1.0.57\n",
"Torch is using GPU: Tesla V100-PCIE-16GB\n"
]
}
],
"source": [
"import sys\n",
"sys.path.append(\"../../\")\n",
"sys.path.append(\"../../../\")\n",
"import io\n",
"import os\n",
"import time\n",

View file

@ -74,7 +74,7 @@
"from utils_cv.classification.widget import ResultsWidget\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url\n",
"from utils_cv.common.gpu import which_processor\n",
"from utils_cv.common.gpu import db_num_workers, which_processor\n",
"\n",
"print(f\"Fast.ai version = {fastai.__version__}\")\n",
"which_processor()"
@ -202,7 +202,7 @@
" .split_by_rand_pct(valid_pct=0.2, seed=10)\n",
" .label_from_folder()\n",
" .transform(size=IM_SIZE)\n",
" .databunch(bs=BATCH_SIZE)\n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n",
" .normalize(imagenet_stats)\n",
")"
]
@ -559,7 +559,7 @@
"source": [
"interp = ClassificationInterpretation.from_learner(learn)\n",
"# Get prediction scores. We convert tensors to numpy array to plot them later.\n",
"pred_scores = to_np(interp.probs)"
"pred_scores = to_np(interp.preds)"
]
},
{

View file

@ -80,7 +80,7 @@
"from utils_cv.classification.plot import plot_thresholds\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url\n",
"from utils_cv.common.gpu import which_processor\n",
"from utils_cv.common.gpu import db_num_workers, which_processor\n",
"\n",
"print(f\"Fast.ai version = {fastai.__version__}\")\n",
"which_processor()"
@ -315,7 +315,7 @@
" .split_by_rand_pct(0.2, seed=10)\n",
" .label_from_df(label_delim=' ')\n",
" .transform(size=IM_SIZE)\n",
" .databunch(bs=BATCH_SIZE)\n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n",
" .normalize(imagenet_stats))"
]
},
@ -820,7 +820,7 @@
],
"source": [
"interp = learn.interpret()\n",
"plot_thresholds(zero_one_accuracy, interp.probs, interp.y_true)"
"plot_thresholds(zero_one_accuracy, interp.preds, interp.y_true)"
]
},
{
@ -847,7 +847,7 @@
}
],
"source": [
"optimal_threshold = get_optimal_threshold(zero_one_accuracy, interp.probs, interp.y_true)\n",
"optimal_threshold = get_optimal_threshold(zero_one_accuracy, interp.preds, interp.y_true)\n",
"optimal_threshold"
]
},
@ -875,7 +875,7 @@
}
],
"source": [
"zero_one_accuracy(interp.probs, interp.y_true, threshold=optimal_threshold)"
"zero_one_accuracy(interp.preds, interp.y_true, threshold=optimal_threshold)"
]
},
{

View file

@ -124,7 +124,18 @@
"\n",
"from utils_cv.classification.data import Urls, is_data_multilabel\n",
"from utils_cv.classification.model import hamming_accuracy, TrainMetricsRecorder\n",
"from utils_cv.common.data import unzip_url"
"from utils_cv.common.data import unzip_url\n",
"from utils_cv.common.gpu import db_num_workers, which_processor"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f\"Fast.ai version = {fastai.__version__}\")\n",
"which_processor()"
]
},
{
@ -275,7 +286,7 @@
"source": [
"data = (\n",
" label_list.transform(tfms=get_transforms(), size=IM_SIZE)\n",
" .databunch(bs=BATCH_SIZE)\n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n",
" .normalize(imagenet_stats)\n",
")"
]

View file

@ -103,7 +103,7 @@
"from utils_cv.classification.widget import ResultsWidget\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url\n",
"from utils_cv.common.gpu import which_processor\n",
"from utils_cv.common.gpu import db_num_workers, which_processor\n",
"from utils_cv.common.misc import copy_files, set_random_seed\n",
"from utils_cv.common.plot import line_graph, show_ims\n",
"\n",
@ -205,7 +205,7 @@
" .split_by_rand_pct(valid_pct=0.2, seed=10)\n",
" .label_const() # We don't use labels for negative data\n",
" .transform(size=IMAGE_SIZE)\n",
" .databunch(bs=BATCH_SIZE)\n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n",
" .normalize(imagenet_stats)\n",
")\n",
"# Do not shuffle U when we predict\n",
@ -264,7 +264,7 @@
" .split_by_folder()\n",
" .label_from_folder()\n",
" .transform(size=IMAGE_SIZE)\n",
" .databunch(bs=BATCH_SIZE)\n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers())\n",
" .normalize(imagenet_stats)\n",
")\n",
"data.show_batch()"
@ -836,7 +836,7 @@
" .split_by_folder()\n",
" .label_from_folder()\n",
" .transform(size=IMAGE_SIZE) \n",
" .databunch(bs=BATCH_SIZE) \n",
" .databunch(bs=BATCH_SIZE, num_workers = db_num_workers()) \n",
" .normalize(imagenet_stats))\n",
"print(data.batch_stats)\n",
"\n",

View file

@ -36,7 +36,9 @@
"* Prepare Data\n",
"* Prepare Training Script\n",
"* Setup and Run Hyperdrive Experiment\n",
"* Model Import, Re-train and Test"
"* Model Import, Re-train and Test\n",
"\n",
"For key concepts of AzureML see this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-train-models-with-aml?view=azure-ml-py&toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fpython%2Fapi%2Fazureml_py_toc%2Ftoc.json%3Fview%3Dazure-ml-py&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fpython%2Fazureml_py_breadcrumb%2Ftoc.json%3Fview%3Dazure-ml-py) on model training and evaluation."
]
},
{
@ -51,6 +53,7 @@
"\n",
"import fastai\n",
"from fastai.vision import *\n",
"import scrapbook as sb\n",
"\n",
"import azureml.core\n",
"from azureml.core import Workspace, Experiment\n",
@ -160,7 +163,7 @@
"### 2. Create Remote Target\n",
"We create a GPU cluster as our remote compute target. If a cluster with the same name already exists in our workspace, the script will load it instead. This [link](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#compute-targets-for-training) provides more information about how to set up a compute target on different locations.\n",
"\n",
"By default, the VM size is set to use _STANDARD_NC6_ machines. However, if quota is available, our recommendation is to use _STANDARD_NC6S_V3_ machines which come with the much faster V100 GPU."
"By default, the VM size is set to use STANDARD\\_NC6 machines. However, if quota is available, our recommendation is to use STANDARD\\_NC6S\\_V3 machines which come with the much faster V100 GPU. We set the minimum number of nodes to zero so that the cluster won't incur additional compute charges when not in use."
]
},
{
@ -250,7 +253,7 @@
"outputs": [],
"source": [
"# creating a folder for the training script here\n",
"script_folder = os.path.join(os.getcwd(), \"hyperparameter\")\n",
"script_folder = os.path.join(os.getcwd(), \"hyperdrive\")\n",
"os.makedirs(script_folder, exist_ok=True)"
]
},
@ -420,6 +423,7 @@
" script_params=script_params,\n",
" compute_target=compute_target,\n",
" entry_script='train.py',\n",
" use_gpu=True,\n",
" pip_packages=['fastai'],\n",
" conda_packages=['scikit-learn'])"
]
@ -638,7 +642,10 @@
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
"source": [
"# Log some outputs using scrapbook which are used during testing to verify correct notebook execution\n",
"sb.glue(\"best_accuracy\", best_run_metrics['accuracy'])"
]
}
],
"metadata": {

View file

View file

@ -0,0 +1,78 @@
# Image classification
This directory provides examples and best practices for building image classification systems. Our goal is to enable users to easily and quickly train high-accuracy classifiers on their own datasets. We provide example notebooks with pre-set default parameters that are shown to work well on a variety of data sets. We also include extensive documentation of common pitfalls and best practices. Additionally, we show how Azure, Microsoft's cloud computing platform, can be used to speed up training on large data sets or deploy models as web services.
We recommend using PyTorch as a Deep Learning platform for its ease of use, simplicity when debugging, and popularity in the data science community. For Computer Vision functionality, we also rely heavily on [fast.ai](https://github.com/fastai/fastai), a PyTorch data science library which comes with rich deep learning features and extensive documentation. We highly recommend watching the [2019 fast.ai lecture series](https://course.fast.ai/videos/?lesson=1) video to understand the underlying technology. Fast.ai's [documentation](https://docs.fast.ai/) is also a valuable resource.
## Frequently asked questions
Answers to Frequently Asked Questions such as "How many images do I need to train a model?" or "How to annotate images?" can be found in the [FAQ.md](FAQ.md) file.
## Notebooks
We provide several notebooks to show how image classification algorithms are designed, evaluated and operationalized. Notebooks starting with `0` are intended to be run sequentially, as there are dependencies between them. These notebooks contain introductory "required" material. Notebooks starting with `1` can be considered optional and contain more complex and specialized topics.
While all notebooks can be executed in Windows, we have found that fast.ai is much faster on the Linux operating system. Additionally, using GPU dramatically improves training speeds. We suggest using an Azure Data Science Virtual Machine with V100 GPU ([instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/provision-deep-learning-dsvm), [price table](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/)).
We have also found that some browsers do not render Jupyter widgets correctly. If you have issues, try using an alternative browser, such as Edge or Chrome.
| Notebook name | Description |
| --- | --- |
| [00_webcam.ipynb](00_webcam.ipynb)| Demonstrates inference on an image from your computer's webcam using a pre-trained model. |
| [01_training_introduction.ipynb](01_training_introduction.ipynb)| Introduces some of the basic concepts around model training and evaluation.|
| [02_multilabel_classification.ipynb](02_multilabel_classification.ipynb)| Introduces multi-label classification and the key differences between training multi-label and single-label classification models.|
| [03_training_accuracy_vs_speed.ipynb](03_training_accuracy_vs_speed.ipynb)| Trains a model with high accuracy vs one with a fast inferencing speed. *<font color="orange"> Use this to train on your own datasets! </font>* |
| [10_image_annotation.ipynb](10_image_annotation.ipynb)| A simple UI to annotate images. |
| [11_exploring_hyperparameters.ipynb](11_exploring_hyperparameters.ipynb)| Finds optimal model parameters using grid search. |
| [12_hard_negative_sampling.ipynb](12_hard_negative_sampling.ipynb)| Demonstrates how to use hard negatives to improve your model performance. |
| [20_azure_workspace_setup.ipynb](20_azure_workspace_setup.ipynb)| Sets up your Azure resources and Azure Machine Learning workspace. |
| [21_deployment_on_azure_container_instances.ipynb](21_deployment_on_azure_container_instances.ipynb)| Deploys a trained model exposed on a REST API using Azure Container Instances (ACI). |
| [22_deployment_on_azure_kubernetes_service.ipynb](22_deployment_on_azure_kubernetes_service.ipynb)| Deploys a trained model exposed on a REST API using the Azure Kubernetes Service (AKS). |
| [23_aci_aks_web_service_testing.ipynb](23_aci_aks_web_service_testing.ipynb)| Tests the deployed models on either ACI or AKS. |
| [24_exploring_hyperparameters_on_azureml.ipynb](24_exploring_hyperparameters_on_azureml.ipynb)| Performs highly parallel parameter sweeping using AzureML's HyperDrive. |
## Using a Virtual Machine
You may want to use a virtual machine to run the notebooks. Doing so will give you a lot more flexibility -- whether that is using a GPU-enabled machine or simply working in Linux.
__Data Science Virtual Machine Builder__
One easy way to create your VM is to use the 'create_dsvm.py' tool located inside the 'tools' folder in the root directory of the repo. Simply run `python tools/create_dsvm.py` from the repo root. This tool preconfigures your virtual machine with the appropriate settings for working with this repository.
__Using the Azure Portal or CLI__
You can also spin up a VM directly using the Azure portal. For this repository,
you will want to create a Data Science Virtual Machine (DSVM). To do so, follow
[this](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro)
link that shows you how to provision your VM through the portal.
Alternatively, you can use the Azure command line interface (CLI). Follow
[this](https://docs.microsoft.com/en-us/cli/azure/azure-cli-vm-tutorial?view=azure-cli-latest)
link to learn more about the Azure CLI and how it can be used to provision
resources.
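As a rough sketch, provisioning a DSVM from the CLI could look like the following; the resource names are placeholders, and the image URN and VM size are the ones used by the vm_builder script in `/contrib`:
```
az group create --name my-cv-rg --location eastus
az vm create --resource-group my-cv-rg --name my-cv-vm \
    --size Standard_NC6s_v3 \
    --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest \
    --admin-username <username> --admin-password <password> \
    --authentication-type password
```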
Once your virtual machine has been created, ssh and tunnel into the machine, then run the "Getting started" steps inside of it. The 'create_dsvm' tool will show you how to properly perform the tunneling too. If you created your virtual machine using the portal or the CLI, you can tunnel your jupyter notebook ports using the following command:
```
$ ssh -L local_port:remote_address:remote_port username@server.com
```
## Azure-enhanced notebooks
Azure products and services are used in certain notebooks to enhance the efficiency of developing classification systems at scale.
To successfully run these notebooks, the users **need an Azure subscription** or can [use Azure for free](https://azure.microsoft.com/en-us/free/).
The Azure products featured in the notebooks include:
* [Azure Machine Learning service](https://azure.microsoft.com/en-us/services/machine-learning-service/) - Azure Machine Learning service is a cloud service used to train, deploy, automate, and manage machine learning models, all at the broad scale that the cloud provides. It is used across various notebooks for the AI model development related tasks such as deployment. [20_azure_workspace_setup](20_azure_workspace_setup.ipynb) shows how to set up your Azure resources and connect to an Azure Machine Learning service workspace.
* [Azure Container Instance](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aci) - You can use Azure Machine Learning service to host your classification model in a web service deployment on Azure Container Instance (ACI). ACI is good for low scale, CPU-based workloads. [21_deployment_on_azure_container_instances](21_deployment_on_azure_container_instances.ipynb) explains how to deploy a web service to ACI through Azure ML.
* [Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aks) - You can use Azure Machine Learning service to host your classification model in a web service deployment on Azure Kubernetes Service (AKS). AKS is good for high-scale production deployments and provides autoscaling, and fast response times. [22_deployment_on_azure_kubernetes_service](22_deployment_on_azure_kubernetes_service.ipynb) explains how to deploy a web service to AKS through Azure ML.

View file

Binary image files changed (before/after sizes unchanged: 314 KiB, 173 KiB, 81 KiB, 636 KiB).

View file

File diff hidden because one or more lines are too long.

File diff hidden because one or more lines are too long.

View file

@ -0,0 +1,628 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>\n",
"\n",
"<i>Licensed under the MIT License.</i>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Testing different Hyperparameters and Benchmarking"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this notebook, we will cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets using AzureML. We assume familiarity with the basic concepts and parameters, which are discussed in the [01_training_introduction.ipynb](01_training_introduction.ipynb), [02_mask_rcnn.ipynb](02_mask_rcnn.ipynb) and [03_training_accuracy_vs_speed.ipynb](03_training_accuracy_vs_speed.ipynb) notebooks. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will be using a Faster R-CNN model with ResNet-50 backbone to find all objects in an image belonging to 4 categories: 'can', 'carton', 'milk_bottle', 'water_bottle'. We will then conduct hyper-parameter tuning to find the best set of parameters for this model. For this, we present an overall process of utilizing AzureML, specifically [Hyperdrive](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive?view=azure-ml-py) which can train and evaluate many different parameter combinations in parallel. We demonstrate the following key steps: \n",
"* Configure AzureML Workspace\n",
"* Create Remote Compute Target (GPU cluster)\n",
"* Prepare Data\n",
"* Prepare Training Script\n",
"* Setup and Run Hyperdrive Experiment\n",
"* Model Import, Re-train and Test\n",
"\n",
"This notebook is very similar to the [24_exploring_hyperparameters_on_azureml.ipynb](../../classification/notebooks/24_exploring_hyperparameters_on_azureml.ipynb) hyperdrive notebook used for image classification. For key concepts of AzureML see this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-train-models-with-aml?view=azure-ml-py&toc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fpython%2Fapi%2Fazureml_py_toc%2Ftoc.json%3Fview%3Dazure-ml-py&bc=https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fpython%2Fazureml_py_breadcrumb%2Ftoc.json%3Fview%3Dazure-ml-py) on model training and evaluation."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"from distutils.dir_util import copy_tree\n",
"import numpy as np\n",
"import scrapbook as sb\n",
"\n",
"import azureml.core\n",
"from azureml.core import Workspace, Experiment\n",
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"import azureml.data\n",
"from azureml.train.estimator import Estimator\n",
"from azureml.train.hyperdrive import (\n",
" RandomParameterSampling, GridParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal, choice, uniform\n",
")\n",
"import azureml.widgets as widgets\n",
"\n",
"sys.path.append(\"../../\")\n",
"from utils_cv.common.data import unzip_url\n",
"from utils_cv.detection.data import Urls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ensure edits to libraries are loaded and plotting is shown in the notebook."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"%reload_ext autoreload\n",
"%autoreload 2\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now define some parameters which will be used in this notebook:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"# Azure resources\n",
"subscription_id = \"YOUR_SUBSCRIPTION_ID\"\n",
"resource_group = \"YOUR_RESOURCE_GROUP_NAME\" \n",
"workspace_name = \"YOUR_WORKSPACE_NAME\" \n",
"workspace_region = \"YOUR_WORKSPACE_REGION\" #Possible values eastus, eastus2, etc.\n",
"\n",
"# Choose a size for our cluster and the maximum number of nodes\n",
"VM_SIZE = \"STANDARD_NC6\" #\"STANDARD_NC6\", STANDARD_NC6S_V3\"\n",
"MAX_NODES = 10\n",
"\n",
"# Hyperparameter grid search space\n",
"IM_MAX_SIZES = [100,200] #Default is 1333 pixels, defining small values here to speed up training\n",
"LEARNING_RATES = np.linspace(1e-2, 1e-5, 4).tolist()\n",
"\n",
"# Image data\n",
"DATA_PATH = unzip_url(Urls.fridge_objects_path, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Config AzureML workspace\n",
"Below we setup (or load an existing) AzureML workspace, and get all its details as follows. Note that the resource group and workspace will get created if they do not yet exist. For more information regaring the AzureML workspace see also the [20_azure_workspace_setup.ipynb](../../classification/notebooks/20_azure_workspace_setup.ipynb) notebook in the image classification folder.\n",
"\n",
"To simplify clean-up (see end of this notebook), we recommend creating a new resource group to run this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from utils_cv.common.azureml import get_or_create_workspace\n",
"\n",
"ws = get_or_create_workspace(\n",
" subscription_id,\n",
" resource_group,\n",
" workspace_name,\n",
" workspace_region)\n",
"\n",
"# Print the workspace attributes\n",
"print('Workspace name: ' + ws.name, \n",
" 'Workspace region: ' + ws.location, \n",
" 'Subscription id: ' + ws.subscription_id, \n",
" 'Resource group: ' + ws.resource_group, sep = '\\n')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2. Create Remote Target\n",
"We create a GPU cluster as our remote compute target. If a cluster with the same name already exists in our workspace, the script will load it instead. This [link](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#compute-targets-for-training) provides more information about how to set up a compute target on different locations.\n",
"\n",
"By default, the VM size is set to use STANDARD\\_NC6 machines. However, if quota is available, our recommendation is to use STANDARD\\_NC6S\\_V3 machines which come with the much faster V100 GPU. We set the minimum number of nodes to zero so that the cluster won't incur additional compute charges when not in use."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found existing compute target.\n",
"{'currentNodeCount': 0, 'targetNodeCount': 0, 'nodeStateCounts': {'preparingNodeCount': 0, 'runningNodeCount': 0, 'idleNodeCount': 0, 'unusableNodeCount': 0, 'leavingNodeCount': 0, 'preemptedNodeCount': 0}, 'allocationState': 'Steady', 'allocationStateTransitionTime': '2019-08-30T16:15:49.268000+00:00', 'errors': None, 'creationTime': '2019-08-30T14:31:48.860219+00:00', 'modifiedTime': '2019-08-30T14:32:04.865042+00:00', 'provisioningState': 'Succeeded', 'provisioningStateTransitionTime': None, 'scaleSettings': {'minNodeCount': 0, 'maxNodeCount': 10, 'nodeIdleTimeBeforeScaleDown': 'PT120S'}, 'vmPriority': 'Dedicated', 'vmSize': 'STANDARD_NC6'}\n"
]
}
],
"source": [
"CLUSTER_NAME = \"gpu-cluster\"\n",
"\n",
"try:\n",
" # Retrieve if a compute target with the same cluster name already exists\n",
" compute_target = ComputeTarget(workspace=ws, name=CLUSTER_NAME)\n",
" print('Found existing compute target.')\n",
" \n",
"except ComputeTargetException:\n",
" # If it doesn't already exist, we create a new one with the name provided\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=VM_SIZE,\n",
" min_nodes=0,\n",
" max_nodes=MAX_NODES)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, CLUSTER_NAME, compute_config)\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# we can use get_status() to get a detailed status for the current cluster. \n",
"print(compute_target.get_status().serialize())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3. Prepare data\n",
"In this notebook, we'll use the Fridge Objects dataset, which is already stored in the correct format. We then upload our data to the AzureML workspace.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Retrieving default datastore that got automatically created when we setup a workspace\n",
"ds = ws.get_default_datastore()\n",
"\n",
"# We now upload the data to the 'data' folder on the Azure portal\n",
"ds.upload(\n",
" src_dir=DATA_PATH,\n",
" target_path='data',\n",
" overwrite=True, # overwrite data if it already exists on the Azure blob storage\n",
" show_progress=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Here's where you can see the data in your portal: \n",
"<img src=\"media/datastore.jpg\" width=\"800\" alt=\"Datastore screenshot for Hyperdrive notebook run\">\n",
"\n",
"### 4. Prepare training script\n",
"\n",
"Next step is to prepare scripts that AzureML Hyperdrive will use to train and evaluate models with selected hyperparameters."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"# Create a folder for the training script and copy the utils_cv library into that folder\n",
"script_folder = os.path.join(os.getcwd(), \"hyperdrive\")\n",
"os.makedirs(script_folder, exist_ok=True)\n",
"_ = copy_tree(os.path.join('..', '..', 'utils_cv'), os.path.join(script_folder, 'utils_cv'))"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting C:\\Users\\pabuehle\\Desktop\\ComputerVision\\detection\\notebooks\\hyperdrive/train.py\n"
]
}
],
"source": [
"%%writefile $script_folder/train.py\n",
"\n",
"# Use different matplotlib backend to avoid error during remote execution\n",
"import matplotlib \n",
"matplotlib.use(\"Agg\") \n",
"import matplotlib.pyplot as plt\n",
"\n",
"import os\n",
"import sys\n",
"import argparse\n",
"import numpy as np\n",
"from pathlib import Path\n",
"from azureml.core import Run\n",
"from utils_cv.detection.dataset import DetectionDataset\n",
"from utils_cv.detection.model import DetectionLearner, get_pretrained_fasterrcnn\n",
"from utils_cv.common.gpu import which_processor\n",
"which_processor()\n",
"\n",
"\n",
"# Parse arguments passed by Hyperdrive\n",
"parser = argparse.ArgumentParser()\n",
"parser.add_argument('--data-folder', type=str, dest='data_dir')\n",
"parser.add_argument('--epochs', type=int, dest='epochs', default=10)\n",
"parser.add_argument('--batch_size', type=int, dest='batch_size', default=1)\n",
"parser.add_argument('--learning_rate', type=float, dest='learning_rate', default=1e-4)\n",
"parser.add_argument('--min_size', type=int, dest='min_size', default=800)\n",
"parser.add_argument('--max_size', type=int, dest='max_size', default=1333)\n",
"parser.add_argument('--rpn_pre_nms_top_n_train', type=int, dest='rpn_pre_nms_top_n_train', default=2000)\n",
"parser.add_argument('--rpn_pre_nms_top_n_test', type=int, dest='rpn_pre_nms_top_n_test', default=1000)\n",
"parser.add_argument('--rpn_post_nms_top_n_train', type=int, dest='rpn_post_nms_top_n_train', default=2000)\n",
"parser.add_argument('--rpn_post_nms_top_n_test', type=int, dest='rpn_post_nms_top_n_test', default=1000)\n",
"parser.add_argument('--rpn_nms_thresh', type=float, dest='rpn_nms_thresh', default=0.7)\n",
"parser.add_argument('--box_score_thresh', type=float, dest='box_score_thresh', default=0.05)\n",
"parser.add_argument('--box_nms_thresh', type=float, dest='box_nms_thresh', default=0.5)\n",
"parser.add_argument('--box_detections_per_img', type=int, dest='box_detections_per_img', default=100)\n",
"args = parser.parse_args()\n",
"params = vars(args)\n",
"print(f\"params = {params}\")\n",
"\n",
"# Getting training and validation data\n",
"path = os.path.join(params['data_dir'], \"data\")\n",
"data = DetectionDataset(path, train_pct=0.5, batch_size = params[\"batch_size\"])\n",
"print(\n",
" f\"Training dataset: {len(data.train_ds)} | Training DataLoader: {data.train_dl} \\n \\\n",
" Testing dataset: {len(data.test_ds)} | Testing DataLoader: {data.test_dl}\"\n",
")\n",
"\n",
"# Get model\n",
"model = get_pretrained_fasterrcnn(\n",
" num_classes = len(data.labels),\n",
" min_size = params[\"min_size\"],\n",
" max_size = params[\"max_size\"],\n",
" rpn_pre_nms_top_n_train = params[\"rpn_pre_nms_top_n_train\"],\n",
" rpn_pre_nms_top_n_test = params[\"rpn_pre_nms_top_n_test\"],\n",
" rpn_post_nms_top_n_train = params[\"rpn_post_nms_top_n_train\"], \n",
" rpn_post_nms_top_n_test = params[\"rpn_post_nms_top_n_test\"],\n",
" rpn_nms_thresh = params[\"rpn_nms_thresh\"],\n",
" box_score_thresh = params[\"box_score_thresh\"], \n",
" box_nms_thresh = params[\"box_nms_thresh\"],\n",
" box_detections_per_img = params[\"box_detections_per_img\"]\n",
")\n",
"detector = DetectionLearner(data, model)\n",
"\n",
"# Run Training\n",
"detector.fit(params[\"epochs\"], lr=params[\"learning_rate\"], print_freq=30)\n",
"print(f\"Average precision after each epoch: {detector.ap}\")\n",
"\n",
"# Add log entries\n",
"run = Run.get_context()\n",
"run.log(\"accuracy\", float(detector.ap[-1])) # Logging our primary metric 'accuracy'\n",
"run.log(\"data_dir\", params[\"data_dir\"])\n",
"run.log(\"epochs\", params[\"epochs\"])\n",
"run.log(\"batch_size\", params[\"batch_size\"])\n",
"run.log(\"learning_rate\", params[\"learning_rate\"])\n",
"run.log(\"min_size\", params[\"min_size\"])\n",
"run.log(\"max_size\", params[\"max_size\"])\n",
"run.log(\"rpn_pre_nms_top_n_train\", params[\"rpn_pre_nms_top_n_train\"])\n",
"run.log(\"rpn_pre_nms_top_n_test\", params[\"rpn_pre_nms_top_n_test\"])\n",
"run.log(\"rpn_post_nms_top_n_train\", params[\"rpn_post_nms_top_n_train\"])\n",
"run.log(\"rpn_post_nms_top_n_test\", params[\"rpn_post_nms_top_n_test\"])\n",
"run.log(\"rpn_nms_thresh\", params[\"rpn_nms_thresh\"])\n",
"run.log(\"box_score_thresh\", params[\"box_score_thresh\"])\n",
"run.log(\"box_nms_thresh\", params[\"box_nms_thresh\"])\n",
"run.log(\"box_detections_per_img\", params[\"box_detections_per_img\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5. Setup and run Hyperdrive experiment\n",
"\n",
"#### 5.1 Create Experiment \n",
"Experiment is the main entry point into experimenting with AzureML. To create new Experiment or get the existing one, we pass our experimentation name 'hyperparameter-tuning'.\n"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"exp = Experiment(workspace=ws, name='hyperparameter-tuning')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 5.2. Define search space\n",
"\n",
"Now we define the search space of hyperparameters. To test discrete parameter values use 'choice()', and for uniform sampling use 'uniform()'. For more options, see [Hyperdrive parameter expressions](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?view=azure-ml-py).\n",
"\n",
"Hyperdrive provides three different parameter sampling methods: 'RandomParameterSampling', 'GridParameterSampling', and 'BayesianParameterSampling'. Details about each method can be found [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters). Here, we use the 'GridParameterSampling'."
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"# Grid-search\n",
"param_sampling = GridParameterSampling( {\n",
" '--learning_rate': choice(LEARNING_RATES),\n",
" '--max_size': choice(IM_MAX_SIZES)\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<b>AzureML Estimator</b> is the building block for training. An Estimator encapsulates the training code and parameters, the compute resources and runtime environment for a particular training scenario.\n",
"We create one for our experimentation with the dependencies our model requires as follows:"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"script_params = {\n",
" '--data-folder': ds.as_mount()\n",
"}\n",
"\n",
"est = Estimator(source_directory=script_folder,\n",
" script_params=script_params,\n",
" compute_target=compute_target,\n",
" entry_script='train.py',\n",
" use_gpu=True,\n",
" pip_packages=['nvidia-ml-py3','fastai'],\n",
" conda_packages=['scikit-learn', 'pycocotools>=2.0','torchvision==0.3','cudatoolkit==9.0'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now create a HyperDriveConfig object which includes information about parameter space sampling, termination policy, primary metric, estimator and the compute target to execute the experiment runs on."
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"hyperdrive_run_config = HyperDriveConfig(\n",
" estimator=est,\n",
" hyperparameter_sampling=param_sampling,\n",
" policy=None, # Do not use any early termination \n",
" primary_metric_name='accuracy',\n",
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n",
" max_total_runs=None, # Set to none to run all possible grid parameter combinations,\n",
" max_concurrent_runs=MAX_NODES\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 5.3 Run Experiment\n",
"\n",
"We now run the parameter sweep and visualize the experiment progress using the `RunDetails` widget:\n",
"<img src=\"media/hyperdrive_widget_run.jpg\" width=\"700px\">\n",
"\n",
"Once completed, the accuracy for the different runs can be analyzed via the widget, for example below is a plot of the accuracy versus learning rate below (for two different image sizes)\n",
"<img src=\"media/hyperdrive_widget_analysis.jpg\" width=\"700px\">\n"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Url to hyperdrive run on the Azure portal: https://mlworkspace.azure.ai/portal/subscriptions/2ad17db4-e26d-4c9e-999e-adae9182530c/resourceGroups/pabuehle_delme2_hyperdrive/providers/Microsoft.MachineLearningServices/workspaces/pabuehle_ws/experiments/hyperparameter-tuning/runs/hyperparameter-tuning_1567193416225\n"
]
}
],
"source": [
"hyperdrive_run = exp.submit(config=hyperdrive_run_config)\n",
"print(f\"Url to hyperdrive run on the Azure portal: {hyperdrive_run.get_portal_url()}\")"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c80070535f744b8aab68560b31aa38fe",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"_HyperDriveWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'INFO'…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"widgets.RunDetails(hyperdrive_run).show()"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'runId': 'hyperparameter-tuning_1567190769563',\n",
" 'target': 'gpu-cluster',\n",
" 'status': 'Canceled',\n",
" 'startTimeUtc': '2019-08-30T18:46:09.79512Z',\n",
" 'endTimeUtc': '2019-08-30T19:21:47.165873Z',\n",
" 'properties': {'primary_metric_config': '{\"name\": \"accuracy\", \"goal\": \"maximize\"}',\n",
" 'runTemplate': 'HyperDrive',\n",
" 'azureml.runsource': 'hyperdrive',\n",
" 'platform': 'AML',\n",
" 'baggage': 'eyJvaWQiOiAiNWFlYTJmMzAtZjQxZC00ZDA0LWJiOGUtOWU0NGUyZWQzZGQ2IiwgInRpZCI6ICI3MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMWRiNDciLCAidW5hbWUiOiAiMDRiMDc3OTUtOGRkYi00NjFhLWJiZWUtMDJmOWUxYmY3YjQ2In0',\n",
" 'ContentSnapshotId': '348bdd53-a99f-4ddd-8ab3-a727cd12bdba'},\n",
" 'logFiles': {'azureml-logs/hyperdrive.txt': 'https://pabuehlestorage779f8bc80.blob.core.windows.net/azureml/ExperimentRun/dcid.hyperparameter-tuning_1567190769563/azureml-logs/hyperdrive.txt?sv=2018-11-09&sr=b&sig=xLa2nd2%2BFQxDmg7tQGBScePCocDYJEayFyf9MIIPO8Y%3D&st=2019-08-30T19%3A11%3A48Z&se=2019-08-31T03%3A21%3A48Z&sp=r'}}"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"hyperdrive_run.wait_for_completion()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To load an existing Hyperdrive Run instead of start new one, we can use \n",
"```python\n",
"hyperdrive_run = azureml.train.hyperdrive.HyperDriveRun(exp, <your-run-id>, hyperdrive_run_config=hyperdrive_run_config)\n",
"```\n",
"We also can cancel the Run with \n",
"```python \n",
"hyperdrive_run_config.cancel().\n",
"```\n",
"\n",
"Once all the child-runs are finished, we can get the best run and the metrics."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Best Run Id:hyperparameter-tuning_1567193416225_4\n",
"Run(Experiment: hyperparameter-tuning,\n",
"Id: hyperparameter-tuning_1567193416225_4,\n",
"Type: azureml.scriptrun,\n",
"Status: Completed)\n",
"\n",
"* Best hyperparameters:\n",
"{'--data-folder': '$AZUREML_DATAREFERENCE_workspaceblobstore', '--learning_rate': '0.01', '--max_size': '200'}\n",
"Accuracy = 0.8988979153074632\n",
"Learning Rate = 0.01\n"
]
}
],
"source": [
"# Get best run and print out metrics\n",
"best_run = hyperdrive_run.get_best_run_by_primary_metric()\n",
"best_run_metrics = best_run.get_metrics()\n",
"parameter_values = best_run.get_details()['runDefinition']['arguments']\n",
"best_parameters = dict(zip(parameter_values[::2], parameter_values[1::2]))\n",
"\n",
"print(f\"* Best Run Id:{best_run.id}\")\n",
"print(best_run)\n",
"print(\"\\n* Best hyperparameters:\")\n",
"print(best_parameters)\n",
"print(f\"Accuracy = {best_run_metrics['accuracy']}\")\n",
"print(\"Learning Rate =\", best_run_metrics['learning_rate'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7. Clean up\n",
"\n",
"To avoid unnecessary expenses, all resources which were created in this notebook need to get deleted once parameter search is concluded. To simplify this clean-up step, we recommended creating a new resource group to run this notebook. This resource group can then be deleted, e.g. using the Azure Portal, which will remove all created resources."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Log some outputs using scrapbook which are used during testing to verify correct notebook execution\n",
"sb.glue(\"best_accuracy\", best_run_metrics['accuracy'])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (cv)",
"language": "python",
"name": "cv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View file

@ -0,0 +1,82 @@
# Object Detection
## Frequently asked questions
This document tries to answer frequent questions related to object detection. For generic Machine Learning questions, such as "How many training examples do I need?" or "How to monitor GPU usage during training?" see also the image classification [FAQ](https://github.com/microsoft/ComputerVision/blob/master/classification/FAQ.md).
* General
* Data
* [How to annotate images?](#how-to-annotate-images)
* Technology
* [How does the technology work?](#how-does-the-technology-work)
* [R-CNN object detection approaches](#r-cnn-object-detection-approaches)
* [Intersection-over-Union overlap metric](#intersection-over-union-overlap-metric)
* [Non-maxima suppression](#non-maxima-suppression)
* [Mean Average Precision](#mean-average-precision)
## General
## Data
### How to annotate images?
Annotated object locations are required to train and evaluate an object detector. One of the best open-source annotation UIs, which runs on both Windows and Linux, is [VOTT](https://github.com/Microsoft/VoTT/releases). VOTT can be used to manually draw rectangles around one or more objects in an image. These annotations can then be exported in Pascal-VOC format (a single xml file per image), which the provided notebooks know how to read.
<p align="center">
<img src="media/vott_ui.jpg" width="600" align="center"/>
</p>
When creating a new project in VOTT, note that the "source connection" can simply point to a local folder which contains the images to be annotated, and the "target connection" to a folder where the output should be written. Pascal VOC style annotations can be exported by selecting "Pascal VOC" in the "Export Settings" tab and then using the "Export Project" button in the "Tags Editor" tab.
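To illustrate what these exported annotations look like when read from Python, below is a minimal, self-contained sketch that parses a single Pascal VOC xml file using only the standard library. It is independent of the readers provided in this repository, and the example file path is hypothetical.
```python
import xml.etree.ElementTree as ET

def read_voc_annotation(xml_path):
    """Read one Pascal VOC annotation file (one xml file per image) and
    return a list of (label, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall("object"):
        label = obj.find("name").text
        bndbox = obj.find("bndbox")
        box = tuple(
            int(float(bndbox.find(tag).text))
            for tag in ("xmin", "ymin", "xmax", "ymax")
        )
        boxes.append((label, box))
    return boxes

# Example (hypothetical path): read_voc_annotation("annotations/image1.xml")
```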
Selecting and annotating images is complex, and consistency is key. For example:
* All objects in an image need to be annotated, even if the image contains many of them. Consider removing the image if this would take too much time.
* Ambiguous images should be removed, for example if it is unclear to a human whether an object is a lemon or a tennis ball, or if the image is blurry, etc.
* Occluded objects should either always be annotated, or never.
* Ensuring consistency is difficult, especially if multiple people are involved. Hence our recommendation is, if possible, that the person who trains the model also annotates all images. This also helps in gaining a better understanding of the problem domain.
The test set used for evaluation, in particular, should have high annotation quality so that accuracy measures reflect the true performance of the model. The training set can be noisy, but ideally shouldn't be.
## Technology
### How does the technology work?
State-of-the-art object detection methods, such as those used in this repository, are based on Convolutional Neural Networks (CNNs), which have been shown to work well on image data. Most such methods use a CNN backbone that was pre-trained on millions of images (typically using the [ImageNet](http://image-net.org/index) dataset). Such a pre-trained model is then incorporated into an object detection pipeline and can be fine-tuned with only a small number of annotated images. For a more detailed explanation of "fine-tuning", including code examples, see the [classification](../classification/) folder.
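As a rough sketch of what such fine-tuning looks like with torchvision (the object detection library used in this repository, though this is not the repository's own helper code), the snippet below loads a Faster R-CNN model pre-trained on COCO and replaces its box-predictor head so it can be trained on a custom dataset; the number of classes is made up for illustration.
```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a Faster R-CNN model whose backbone was pre-trained on millions of images
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the box-predictor head so it outputs scores for our own classes
# (here, 4 example object classes + 1 background class)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=4 + 1)

# The modified model can now be fine-tuned on a small set of annotated images.
```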
### R-CNN Object Detection Approaches
R-CNNs for Object Detection were introduced in 2014 by [Ross Girshick et al.](http://arxiv.org/abs/1311.2524), and shown to outperform previous state-of-the-art approaches on one of the major object recognition challenges: [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/). The main drawback of the approach was its slow inference speed. Since then, three major follow-up papers were published which introduced significant speed improvements: [Fast R-CNN](https://arxiv.org/pdf/1504.08083v2.pdf), [Faster R-CNN](https://arxiv.org/abs/1506.01497), and [Mask R-CNN](https://arxiv.org/pdf/1703.06870.pdf).
Similar to most object detection methods, R-CNNs use a deep neural network which was trained for image classification using millions of annotated images, and modify it for the purpose of object detection. The basic idea from the first R-CNN paper is illustrated in the figure below (taken from the paper):
1. Given an input image,
2. a large number of region proposals, aka Regions-of-Interest (ROIs), are generated;
3. these ROIs are then independently sent through the network, which outputs a vector of e.g. 4096 floating point values for each ROI;
4. finally, a classifier is learned which takes the 4096-float ROI representation as input and outputs a label and confidence for each ROI.
<p align="center">
<img src="media/rcnn_pipeline.jpg" width="600" align="center"/>
</p>
While this approach works well in terms of accuracy, it is very costly to compute since the Neural Network has to be evaluated for each ROI. Fast R-CNN addresses this drawback by only evaluating most of the network (to be specific: the convolution layers) a single time per image. According to the authors, this leads to a 213 times speed-up during testing and a 9x speed-up during training without loss of accuracy. Faster R-CNN then shows how ROIs can be computed as part of the network, essentially combining all steps in the figure above into a single DNN.
### Intersection-over-Union overlap metric
It is often necessary to measure by how much two given rectangles overlap. For example, one rectangle might correspond to the ground-truth location of an object, while the second rectangle corresponds to the estimated location, and the goal is to measure how precisely the object was detected.
For this, a metric called Intersection-over-Union (IoU) is typically used. In the example below, the IoU is given by dividing the yellow area by the combined yellow and blue areas. An IoU of 1.0 corresponds to a perfect match, while an IoU of 0 indicates that the two rectangles do not overlap. Typically an IoU of 0.5 is considered a good localization. See also this [page](https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/) for a more in-depth discussion.
<p align="center">
<img src="media/iou_example.jpg" width="400" align="center"/>
</p>
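A minimal sketch of how IoU can be computed for two axis-aligned boxes follows; the function name and the `(x_min, y_min, x_max, y_max)` box format are our own choices for illustration.
```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x_min, y_min, x_max, y_max) tuples."""
    # Corners of the intersection rectangle
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.14
```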
### Non-maxima suppression
Object detection methods often output multiple detections which fully or partly cover the same object in an image. These detections need to be pruned to be able to count objects and obtain their exact locations. This is traditionally done using a technique called Non-Maxima Suppression (NMS), and is implemented by iteratively selecting the detection with highest confidence and removing all other detections which (i) are classified to be of the same class; and (ii) have a significant overlap measured using the Intersection-over-Union (IOU) metric.
Detection results with confidence scores before (left) and after Non-Maxima Suppression using IoU thresholds of 0.8 (middle) and 0.5 (right):
<p align="center">
<img src="media/nms_example.jpg" width="600" align="center"/>
</p>
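Below is a small sketch of greedy NMS for detections of a single class, re-using the `iou` helper sketched in the previous section (both function names are illustrative, not part of this repository).
```python
def non_maxima_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS for detections of a single class: repeatedly keep the
    highest-scoring box and discard remaining boxes that overlap it too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep  # indices of the detections to keep
```
For multiple classes, the same procedure is simply run once per class.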
### Mean Average Precision
Once trained, the quality of the model can be measured using different criteria, such as precision, recall, accuracy, area-under-curve, etc. A common metric which is used for the Pascal VOC object recognition challenge is to measure the Average Precision (AP) for each class. Average Precision takes confidence in the detections into account and hence assigns a smaller penalty to false detections with low confidence. For a description of Average Precision see [Everingham et al.](http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf). The mean Average Precision (mAP) is then computed by taking the average over all APs.
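As a hedged sketch, the snippet below computes Average Precision for a single class from a list of scored detections, using the "area under the interpolated precision-recall curve" formulation; the function and argument names are illustrative only and not part of this repository.
```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt_boxes):
    """AP for one class: area under the interpolated precision-recall curve.
    `is_true_positive[i]` says whether detection i matches a ground-truth box."""
    order = np.argsort(-np.asarray(scores))                # highest confidence first
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / num_gt_boxes
    precision = cum_tp / (cum_tp + cum_fp)
    # Make precision monotonically non-increasing, then integrate over recall
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    recall_steps = np.diff(np.concatenate(([0.0], recall)))
    return float(np.sum(recall_steps * precision))

# mAP is then simply the mean of the per-class AP values.
```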

View file

@ -0,0 +1,35 @@
# Object Detection
This directory provides examples and best practices for building object detection systems. Our goal is to enable users to bring their own datasets and train high-accuracy models easily and quickly. To this end, we provide example notebooks with pre-set default parameters that are shown to work well on a variety of datasets, as well as extensive documentation of common pitfalls, best practices, etc.
Object Detection is one of the main problems in Computer Vision. Traditionally, this required expert knowledge to identify and implement so-called "features" that highlight the position of objects in the image. Starting in 2012 with the famous AlexNet paper, Deep Neural Networks have been used to find these features automatically. This led to a huge improvement in the field for a large range of problems.
This repository uses [torchvision's](https://pytorch.org/docs/stable/torchvision/index.html) Faster R-CNN implementation which has been shown to work well on a wide variety of Computer Vision problems. See the [FAQ](FAQ.md) for an explanation of the underlying data science aspects.
We recommend running these samples on a machine with a GPU, on either Windows or Linux. While a GPU is technically not required, training gets prohibitively slow even when using only a few dozen images.
```diff
+ (August 2019) This is work-in-progress and more functionality and documentation will be added continuously.
```
## Frequently asked questions
Answers to frequently asked questions such as "How does the technology work?" can be found in the [FAQ](FAQ.md) located in this folder. For generic questions such as "How many training examples do I need?" or "How to monitor GPU usage during training?" see the [FAQ.md](../classification/FAQ.md) in the classification folder.
## Notebooks
We provide several notebooks to show how object detection algorithms can be designed and evaluated:
| Notebook name | Description |
| --- | --- |
| [00_webcam.ipynb](./00_webcam.ipynb)| Quick-start notebook which demonstrates how to build an object detection system using a single image or webcam as input. |
| [01_training_and_evaluation_introduction.ipynb](./01_training_and_evaluation_introduction.ipynb)| Notebook which explains the basic concepts around model training and evaluation.|
| [11_exploring_hyperparameters_on_azureml.ipynb](./11_exploring_hyperparameters_on_azureml.ipynb)| Performs highly parallel parameter sweeping using AzureML's HyperDrive. |
## Contribution guidelines
See the [contribution guidelines](../../CONTRIBUTING.md) in the root folder.

Binary data (new media files, not shown in this diff):
* scenarios/detection/media/00_webcam_snapshot.png (154 KiB)
* scenarios/detection/media/datastore.jpg (158 KiB)
* scenarios/detection/media/figures.pptx
* scenarios/detection/media/hyperdrive_widget_analysis.jpg (77 KiB)
* scenarios/detection/media/hyperdrive_widget_run.jpg (279 KiB)
* scenarios/detection/media/iou_example.jpg (51 KiB)
* scenarios/detection/media/labelimg_ui.jpg (219 KiB)
* scenarios/detection/media/nms_example.jpg (107 KiB)
* scenarios/detection/media/rcnn_pipeline.jpg (108 KiB)
* scenarios/detection/media/vott_ui.jpg (82 KiB)

Some files were not shown because too many files changed in this diff.