Merge branch 'staging' into pabuehle_testing

PatrickBue 2019-08-12 18:23:21 +00:00 committed by GitHub
Parent f82ccee7a3 c93eabd549
Commit d2502c7eb3
17 changed files with 1239 additions and 513 deletions

View file

@ -13,10 +13,6 @@ variables:
value : 'reports/test-unit.xml'
trigger: none
pr:
- staging
- master
jobs:
- job: AzureMLNotebookTest
timeoutInMinutes: 300
@ -61,4 +57,4 @@ jobs:
inputs:
testResultsFiles: '**/test-*.xml'
failTaskOnFailedTests: true
condition: succeededOrFailed()
condition: succeededOrFailed()

View file

@ -1,3 +1,23 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# More info on scheduling: https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#scheduled-triggers
# The schedule is currently implemented from the dashboard
# Uncomment the lines below to define it in this yml instead
# schedules:
# - cron: "56 22 * * *"
# displayName: Daily track of metrics
# branches:
# include:
# - master
# always: true
# no PR builds
pr: none
# no CI trigger
trigger: none
jobs:
- job: Repometrics
@ -5,7 +25,6 @@ jobs:
vmImage: 'ubuntu-16.04'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.6'
@ -13,13 +32,13 @@ jobs:
- script: |
cp tools/repo_metrics/config_template.py tools/repo_metrics/config.py
sed -i ''s/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/$(github_token)/g'' tools/repo_metrics/config.py
sed -i ''s/XXXXXXXXXXXXXXXXXXXXXXXXX/$(cosmosdb_connectionstring)/g'' tools/repo_metrics/config.py
sed -i 's#<GITHUB_TOKEN>#$(github_token)#' tools/repo_metrics/config.py
sed -i "s#<CONNECTION_STRING>#`echo '$(cosmosdb_connectionstring)' | sed 's@&@\\\\&@g'`#" tools/repo_metrics/config.py
displayName: Configure CosmosDB Connection
- script: |
python -m pip install python-dateutil>=2.80 pymongo>=3.8.0 gitpython>2.1.11 requests>=2.21.0
python tools/repo_metrics/track_metrics.py --github_repo "https://github.com/microsoft/ComputerVision" --save_to_database
python -m pip install 'python-dateutil>=2.8.0' 'pymongo>=3.8.0' 'gitpython>2.1.11' 'requests>=2.21.0'
python tools/repo_metrics/track_metrics.py --github_repo 'https://github.com/microsoft/ComputerVision' --save_to_database
displayName: Python script to record stats
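For context, the two sed commands above substitute the pipeline's secret variables into config_template.py; the inner sed escapes '&' because sed treats it as the matched text in a replacement. A rough Python equivalent, for illustration only (the environment-variable names are assumptions, not part of the pipeline):

```python
import os

# Illustration only: replicate the sed-based token substitution in Python.
# str.replace() inserts the secrets literally, so no '&' escaping is needed.
github_token = os.environ.get("GITHUB_TOKEN", "")
connection_string = os.environ.get("COSMOSDB_CONNECTIONSTRING", "")

with open("tools/repo_metrics/config_template.py") as f:
    config = f.read()

config = config.replace("<GITHUB_TOKEN>", github_token)
config = config.replace("<CONNECTION_STRING>", connection_string)

with open("tools/repo_metrics/config.py", "w") as f:
    f.write(config)
```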

View file

@ -5,6 +5,10 @@ steps:
echo "##vso[task.prependpath]/data/anaconda/bin"
displayName: Add Conda to PATH
- bash: |
rm -rf /data/anaconda/envs/cv
displayName: 'Remove conda env in case it was not created correctly'
- bash: |
conda env create -f environment.yml
source activate cv

View file

@ -20,12 +20,12 @@ When you submit a pull request, a CLA-bot will automatically determine whether y
## Steps to Contributing
Here are the basic steps to get started with your first contribution. Please reach out with any questions.
1. Use [open issues](https://github.com/Microsoft/Recommenders/issues) to discuss the proposed changes. Create an issue describing changes if necessary to collect feedback. Also, please use provided labels to tag issues so everyone can easily sort issues of interest.
1. Use [open issues](https://github.com/Microsoft/ComputerVision/issues) to discuss the proposed changes. Create an issue describing changes if necessary to collect feedback. Also, please use provided labels to tag issues so everyone can easily sort issues of interest.
1. [Fork the repo](https://help.github.com/articles/fork-a-repo/) so you can make and test local changes.
1. Create a new branch for the issue. We suggest prefixing the branch with your username and then a descriptive title: (e.g. gramhagen/update_contributing_docs)
1. Create a test that replicates the issue.
1. Make code changes.
1. Ensure unit tests pass and code style / formatting is consistent (see [wiki](https://github.com/Microsoft/Recommenders/wiki/Coding-Guidelines#python-and-docstrings-style) for more details).
1. Ensure unit tests pass and code style / formatting is consistent, and follows the [Zen of Python](https://github.com/Microsoft/Recommenders/wiki/Coding-Guidelines#the-zen-of-python).
1. We use the [pre-commit](https://pre-commit.com/) package to run our pre-commit hooks, with the black formatter and flake8 linting on each commit. To set up pre-commit on your machine, follow the steps here; note that you only need to run these steps the first time you use pre-commit for this project.
* Update your conda environment (pre-commit is part of the yaml file), or just do
@ -49,7 +49,6 @@ Here are the basic steps to get started with your first contribution. Please rea
Note: We use the staging branch to land all new features, so please remember to create the Pull Request against staging.
Once the features included in a milestone are complete we will merge staging into master and make a release. See the wiki for more detail about our [merge strategy](https://github.com/Microsoft/Recommenders/wiki/Strategy-to-merge-the-code-to-master-branch).
## Working with Notebooks
@ -77,8 +76,6 @@ nbdiff notebook_1.ipynb notebook_2.ipynb
We strive to maintain high quality code to make the utilities in the repository easy to understand, use, and extend. We also work hard to maintain a friendly and constructive environment. We've found that having clear expectations on the development process and consistent style helps to ensure everyone can contribute and collaborate effectively.
Please review the [coding guidelines](https://github.com/Microsoft/Recommenders/wiki/Coding-Guidelines) wiki page to see more details about the expectations for development approach and style.
We follow the Google docstring guidelines outlined on this [styleguide](https://github.com/google/styleguide/blob/gh-pages/pyguide.md#38-comments-and-docstrings) page. For example:
```python
def bite(n:int, animal:animal_object) -> bool:
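    # (Hedged sketch; the rest of this example is truncated by the hunk above.)
    # A Google-style docstring for such a function would look roughly like:
    """Perform n bites on the given animal.

    Args:
        n (int): Number of bites to perform.
        animal (animal_object): The animal to bite.

    Returns:
        bool: True if the bites were successful, False otherwise.
    """
    ...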
@ -103,7 +100,7 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
Apart from the official Code of Conduct developed by Microsoft, in the Recommenders team we adopt the following behaviors, to ensure a great working environment:
Apart from the official Code of Conduct developed by Microsoft, we adopt the following behaviors, to ensure a great working environment:
#### Do not point fingers
Let's be constructive. For example: "This method is missing docstrings" instead of "YOU forgot to put docstrings".

View file

@ -10,9 +10,10 @@ The current main priority is to support image classification. Additionally, we a
## Getting Started
To get started on your local machine:
To get started:
1. Install Anaconda with Python >= 3.6. [Miniconda](https://conda.io/miniconda.html) is a quick way to get started.
1. (Optional) Create an Azure Data Science Virtual Machine with e.g. a V100 GPU ([instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/provision-deep-learning-dsvm), [price table](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/)).
1. Install Anaconda with Python >= 3.6; [Miniconda](https://conda.io/miniconda.html) is a quick way to get started. This step can be skipped if working on a Data Science Virtual Machine.
1. Clone the repository
```
git clone https://github.com/Microsoft/ComputerVision

View file

@ -22,6 +22,7 @@
"source": [
"In this notebook, we'll cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets.\n",
"\n",
"For an example of how to scale up with remote GPU clusters on Azure Machine Learning, please view [24_exploring_hyperparameters_on_azureml.ipynb](../24_exploring_hyperparameters_on_azureml).\n",
"## Table of Contents\n",
"\n",
"* [Testing hyperparameters](#hyperparam)\n",
@ -52,7 +53,7 @@
"metadata": {},
"source": [
"Ensure edits to libraries are loaded and plotting is shown in the notebook."
]
]
},
{
"cell_type": "code",

View file

@ -20,16 +20,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In this notebook, we'll cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets using AzureML"
"In this notebook, we'll cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets using AzureML. We assume familiarity with the basic concepts and parameters, which are discussed in the [01_training_introduction.ipynb](01_training_introduction.ipynb), [02_multilabel_classification.ipynb](02_multilabel_classification.ipynb) and [03_training_accuracy_vs_speed.ipynb](03_training_accuracy_vs_speed.ipynb) notebooks. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similar to [11_exploring_hyperparameters.ipynb](https://github.com/microsoft/ComputerVision/blob/master/classification/notebooks/11_exploring_hyperparameters.ipynb), we will learn more about how different learning rates and different image sizes affect our model's accuracy when restricted to 10 epochs, and we want to build an AzureML experiment to test out these hyperparameters. \n",
"Similar to [11_exploring_hyperparameters.ipynb](https://github.com/microsoft/ComputerVision/blob/master/classification/notebooks/11_exploring_hyperparameters.ipynb), we will learn more about how different learning rates and different image sizes affect our model's accuracy when restricted to 16 epochs, and we want to build an AzureML experiment to test out these hyperparameters. \n",
"\n",
"We will be using a ResNet50 model to classify a set of images into 4 categories - 'can', 'carton', 'milk_bottle', 'water_bottle'. We will then conduct hyper-parameter tuning to find the best set of parameters for this model. For this,\n",
"We will be using a ResNet18 model to classify a set of images into 4 categories: 'can', 'carton', 'milk_bottle', 'water_bottle'. We will then conduct hyper-parameter tuning to find the best set of parameters for this model. For this,\n",
"we present an overall process of utilizing AzureML, specifically [Hyperdrive](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive?view=azure-ml-py) component to run this tuning in parallel (and not successively).We demonstrate the following key steps: \n",
"* Configure AzureML Workspace\n",
"* Create Remote Compute Target (GPU cluster)\n",
@ -43,15 +43,7 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SDK version: 1.0.48\n"
]
}
],
"outputs": [],
"source": [
"import os\n",
"import sys\n",
@ -65,15 +57,14 @@
"from azureml.core.compute import ComputeTarget, AmlCompute\n",
"from azureml.core.compute_target import ComputeTargetException\n",
"import azureml.data\n",
"from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal, choice\n",
"from azureml.train.estimator import Estimator\n",
"\n",
"from azureml.train.hyperdrive import (\n",
" RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal, choice, uniform\n",
")\n",
"import azureml.widgets as widgets\n",
"\n",
"from utils_cv.classification.data import Urls\n",
"from utils_cv.common.data import unzip_url\n",
"\n",
"print(\"SDK version:\", azureml.core.VERSION)"
"from utils_cv.common.data import unzip_url"
]
},
{
@ -98,8 +89,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Config AzureML workspace\n",
"Below we setup AzureML workspace and get all its details as follows:"
"We now define some parameters which will be used in this notebook:"
]
},
{
@ -116,36 +106,37 @@
"subscription_id = \"YOUR_SUBSCRIPTION_ID\"\n",
"resource_group = \"YOUR_RESOURCE_GROUP_NAME\" \n",
"workspace_name = \"YOUR_WORKSPACE_NAME\" \n",
"workspace_region = \"YOUR_WORKSPACE_REGION\" #Possible values eastus, eastus2 and so on.\n",
"workspace_region = \"YOUR_WORKSPACE_REGION\" #Possible values eastus, eastus2, etc.\n",
"\n",
"max_total_runs=50\n"
"# Choose a size for our cluster and the maximum number of nodes\n",
"VM_SIZE = \"STANDARD_NC6\" #\"STANDARD_NC6S_V3\"\n",
"MAX_NODES = 12\n",
"\n",
"# Hyperparameter search space\n",
"IM_SIZES = [150, 300]\n",
"LEARNING_RATE_MAX = 1e-3\n",
"LEARNING_RATE_MIN = 1e-5\n",
"MAX_TOTAL_RUNS = 10 #Set to higher value to test more parameter combinations\n",
"\n",
"# Image data\n",
"DATA = unzip_url(Urls.fridge_objects_path, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1. Config AzureML workspace\n",
"Below we setup (or load an existing) AzureML workspace, and get all its details as follows. Note that the resource group and workspace will get created if they do not yet exist. For more information regaring the AzureML workspace see also the [20_azure_workspace_setup.ipynb](20_azure_workspace_setup.ipynb) notebook.\n",
"\n",
"To simplify clean-up (see end of this notebook), we recommend creating a new resource group to run this notebook."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING - Warning: Falling back to use azure cli login credentials.\n",
"If you run your code in unattended mode, i.e., where you can't give a user input, then we recommend to use ServicePrincipalAuthentication or MsiAuthentication.\n",
"Please refer to aka.ms/aml-notebook-auth for different authentication mechanisms in azureml-sdk.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Workspace name: smoketestwsnew\n",
"Workspace region: eastus2\n",
"Subscription id: 0ca618d2-22a8-413a-96d0-0f1b531129c3\n",
"Resource group: smoketestnew11\n"
]
}
],
"outputs": [],
"source": [
"from utils_cv.common.azureml import get_or_create_workspace\n",
"\n",
@ -167,9 +158,9 @@
"metadata": {},
"source": [
"### 2. Create Remote Target\n",
"We create a GPU cluster as our remote compute target. If a cluster with the same name already exists in our workspace, the script will load it instead. We can see [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#compute-targets-for-training) to learn more about setting up a compute target on different locations.\n",
"We create a GPU cluster as our remote compute target. If a cluster with the same name already exists in our workspace, the script will load it instead. This [link](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#compute-targets-for-training) provides more information about how to set up a compute target on different locations.\n",
"\n",
"This notebook selects STANDARD_NC6 virtual machine (VM) and sets its priority as 'lowpriority' to reduce costs."
"By default, the VM size is set to use _STANDARD_NC6_ machines. However, if quota is available, our recommendation is to use _STANDARD_NC6S_V3_ machines which come with the much faster V100 GPU."
]
},
{
@ -181,37 +172,32 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Found existing compute target.\n",
"{'currentNodeCount': 1, 'targetNodeCount': 0, 'nodeStateCounts': {'preparingNodeCount': 0, 'runningNodeCount': 0, 'idleNodeCount': 0, 'unusableNodeCount': 0, 'leavingNodeCount': 1, 'preemptedNodeCount': 0}, 'allocationState': 'Resizing', 'allocationStateTransitionTime': '2019-07-22T04:40:41.047000+00:00', 'errors': None, 'creationTime': '2019-07-22T02:26:37.808395+00:00', 'modifiedTime': '2019-07-22T02:26:53.969636+00:00', 'provisioningState': 'Succeeded', 'provisioningStateTransitionTime': None, 'scaleSettings': {'minNodeCount': 0, 'maxNodeCount': 4, 'nodeIdleTimeBeforeScaleDown': 'PT120S'}, 'vmPriority': 'Dedicated', 'vmSize': 'STANDARD_NC6'}\n"
"Creating a new compute target...\n",
"Creating\n",
"Succeeded\n",
"AmlCompute wait for completion finished\n",
"Minimum number of nodes requested have been provisioned\n",
"{'currentNodeCount': 0, 'targetNodeCount': 0, 'nodeStateCounts': {'preparingNodeCount': 0, 'runningNodeCount': 0, 'idleNodeCount': 0, 'unusableNodeCount': 0, 'leavingNodeCount': 0, 'preemptedNodeCount': 0}, 'allocationState': 'Steady', 'allocationStateTransitionTime': '2019-08-06T15:57:12.457000+00:00', 'errors': None, 'creationTime': '2019-08-06T15:56:43.315467+00:00', 'modifiedTime': '2019-08-06T15:57:25.740370+00:00', 'provisioningState': 'Succeeded', 'provisioningStateTransitionTime': None, 'scaleSettings': {'minNodeCount': 0, 'maxNodeCount': 12, 'nodeIdleTimeBeforeScaleDown': 'PT120S'}, 'vmPriority': 'Dedicated', 'vmSize': 'STANDARD_NC6'}\n"
]
}
],
"source": [
"# choose a name for our cluster\n",
"cluster_name = \"gpu-cluster-nc6\"\n",
"# Remote compute (cluster) configuration. If you want to reduce costs even more, set these to small.\n",
"# For example, using Standard_DS1_v2 instead of using STANDARD_NC6\n",
"VM_SIZE = 'STANDARD_NC6'\n",
"VM_PRIORITY = 'lowpriority'\n",
"\n",
"# Cluster nodes\n",
"MIN_NODES = 0\n",
"MAX_NODES = 4\n",
"CLUSTER_NAME = \"gpu-cluster\"\n",
"\n",
"try:\n",
" # Retrieve if a compute target with the same cluster_name already exists\n",
" compute_target = ComputeTarget(workspace=ws, name=cluster_name)\n",
" # Retrieve if a compute target with the same cluster name already exists\n",
" compute_target = ComputeTarget(workspace=ws, name=CLUSTER_NAME)\n",
" print('Found existing compute target.')\n",
" \n",
"except ComputeTargetException:\n",
" # If it doesn't already exist, we create a new one with the name provided\n",
" print('Creating a new compute target...')\n",
" compute_config = AmlCompute.provisioning_configuration(vm_size=VM_SIZE,\n",
" min_nodes=MIN_NODES,\n",
" min_nodes=0,\n",
" max_nodes=MAX_NODES)\n",
"\n",
" # create the cluster\n",
" compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n",
"\n",
" compute_target = ComputeTarget.create(ws, CLUSTER_NAME, compute_config)\n",
" compute_target.wait_for_completion(show_output=True)\n",
"\n",
"# we can use get_status() to get a detailed status for the current cluster. \n",
@ -228,327 +214,10 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Uploading an estimated of 138 files\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/cvbp_milk_bottle.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/cvbp_water_bottle.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/example.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects.zip\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/1.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/10.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/11.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/12.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/13.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/14.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/15.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/16.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/17.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/18.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/19.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/2.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/20.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/21.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/22.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/23.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/24.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/25.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/26.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/27.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/28.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/29.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/3.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/30.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/31.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/32.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/4.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/5.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/15.jpg, 1 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/6.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/32.jpg, 2 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/7.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/cvbp_water_bottle.jpg, 3 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/8.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/11.jpg, 4 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/9.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/6.jpg, 5 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/33.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/cvbp_milk_bottle.jpg, 6 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/34.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/14.jpg, 7 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/35.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/5.jpg, 8 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/36.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/example.jpg, 9 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/37.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/18.jpg, 10 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/38.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/7.jpg, 11 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/39.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/9.jpg, 12 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/40.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/36.jpg, 13 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/41.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/13.jpg, 14 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/42.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/34.jpg, 15 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/43.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/39.jpg, 16 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/44.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/33.jpg, 17 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/45.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/41.jpg, 18 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/23.jpg, 19 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/1.jpg, 20 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/46.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/47.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/12.jpg, 21 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/48.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/49.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/16.jpg, 22 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/50.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/31.jpg, 23 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/51.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/26.jpg, 24 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/52.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/47.jpg, 25 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/53.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/44.jpg, 26 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/54.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/49.jpg, 27 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/55.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/53.jpg, 28 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/56.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/29.jpg, 29 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/57.jpg\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/56.jpg, 30 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/58.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/43.jpg, 31 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/59.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/54.jpg, 32 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/60.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/8.jpg, 33 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/61.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/55.jpg, 34 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/62.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/40.jpg, 35 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/63.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/10.jpg, 36 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/64.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/58.jpg, 37 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/100.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/42.jpg, 38 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/101.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/59.jpg, 39 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/65.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/100.jpg, 40 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/66.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/62.jpg, 41 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/67.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/48.jpg, 42 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/68.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/45.jpg, 43 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/69.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/51.jpg, 44 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/70.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/50.jpg, 45 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/71.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/46.jpg, 46 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/72.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/60.jpg, 47 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/73.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/66.jpg, 48 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/74.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/65.jpg, 49 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/75.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/25.jpg, 50 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/76.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/73.jpg, 51 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/77.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/67.jpg, 52 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/78.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/77.jpg, 53 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/79.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/52.jpg, 54 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/80.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/22.jpg, 55 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/81.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/75.jpg, 56 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/82.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/63.jpg, 57 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/83.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/78.jpg, 58 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/84.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/101.jpg, 59 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/85.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/79.jpg, 60 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/86.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/74.jpg, 61 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/87.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/61.jpg, 62 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/88.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/57.jpg, 63 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/89.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/69.jpg, 64 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/90.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/70.jpg, 65 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/91.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/68.jpg, 66 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/92.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/83.jpg, 67 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/93.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/72.jpg, 68 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/94.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/81.jpg, 69 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/95.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/64.jpg, 70 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/3.jpg, 71 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/96.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/97.jpg\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/84.jpg, 72 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/35.jpg, 73 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/98.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/99.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/71.jpg, 74 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/102.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/85.jpg, 75 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/103.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/17.jpg, 76 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/104.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/87.jpg, 77 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/105.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/94.jpg, 78 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/106.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/97.jpg, 79 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/107.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/2.jpg, 80 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/108.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/88.jpg, 81 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/109.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/28.jpg, 82 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/110.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/21.jpg, 83 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/111.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/27.jpg, 84 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/112.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/106.jpg, 85 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/113.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/86.jpg, 86 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/114.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/108.jpg, 87 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/115.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/102.jpg, 88 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/116.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/20.jpg, 89 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/117.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/76.jpg, 90 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/118.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/104.jpg, 91 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/119.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/90.jpg, 92 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/120.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/91.jpg, 93 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/121.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/93.jpg, 94 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/122.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/109.jpg, 95 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/123.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/113.jpg, 96 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/124.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/19.jpg, 97 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/125.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/92.jpg, 98 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/126.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/122.jpg, 99 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/127.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/80.jpg, 100 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/128.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/96.jpg, 101 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/119.jpg, 102 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/24.jpg, 103 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/129.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/130.jpg\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/131.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/116.jpg, 104 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/132.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/120.jpg, 105 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/133.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/121.jpg, 106 files out of an estimated total of 138\n",
"Uploading /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/134.jpg\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/89.jpg, 107 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/103.jpg, 108 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/123.jpg, 109 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/128.jpg, 110 files out of an estimated total of 138\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/118.jpg, 111 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/127.jpg, 112 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/126.jpg, 113 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/38.jpg, 114 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/133.jpg, 115 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/134.jpg, 116 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/124.jpg, 117 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/99.jpg, 118 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/131.jpg, 119 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/132.jpg, 120 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/82.jpg, 121 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/129.jpg, 122 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/98.jpg, 123 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/111.jpg, 124 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/117.jpg, 125 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/107.jpg, 126 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/4.jpg, 127 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/114.jpg, 128 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/milk_bottle/95.jpg, 129 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/125.jpg, 130 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/110.jpg, 131 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/105.jpg, 132 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/112.jpg, 133 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/can/30.jpg, 134 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/130.jpg, 135 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/carton/37.jpg, 136 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects/water_bottle/115.jpg, 137 files out of an estimated total of 138\n",
"Uploaded /Users/richinjain/projects/ComputerVision/data/fridgeObjects.zip, 138 files out of an estimated total of 138\n",
"Uploaded 138 files\n"
]
},
{
"data": {
"text/plain": [
"$AZUREML_DATAREFERENCE_f63fbd85fa17436fa173eb6034cd9eb5"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"# Note, all the files under DATA will be uploaded to the data store\n",
"DATA = unzip_url(Urls.fridge_objects_path, exist_ok=True)\n",
"REPS = 3\n",
"\n",
"# Retrieving default datastore that got automatically created when we setup a workspace\n",
"ds = ws.get_default_datastore()\n",
"\n",
@ -594,7 +263,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting /Users/richinjain/projects/ComputerVision/classification/notebooks/hyperparameter/train.py\n"
"Overwriting C:\\Users\\pabuehle\\Desktop\\ComputerVision\\classification\\notebooks\\hyperparameter/train.py\n"
]
}
],
@ -615,69 +284,66 @@
"\n",
"run = Run.get_context()\n",
"\n",
"\n",
"#------------------------------------------------------------------\n",
"# Define parameters that we are going to use for training\n",
"ARCHITECTURE = models.resnet50\n",
"ARCHITECTURE = models.resnet18\n",
"EPOCHS_HEAD = 4\n",
"EPOCHS_BODY = 12\n",
"BATCH_SIZE = 16\n",
"#------------------------------------------------------------------\n",
"\n",
"\n",
"# Parse arguments passed by Hyperdrive\n",
"parser = argparse.ArgumentParser()\n",
"\n",
"\n",
"# Data path\n",
"parser.add_argument('--data-folder', type=str, dest='DATA_DIR', help=\"Datastore path\")\n",
"parser.add_argument('--im_size', type=int, dest='IM_SIZE')\n",
"parser.add_argument('--learning_rate', type=float, dest='LEARNING_RATE')\n",
"\n",
"args = parser.parse_args()\n",
"params = vars(args)\n",
"\n",
"if params['IM_SIZE'] is None:\n",
" raise ValueError(\"Image Size empty\")\n",
" \n",
"if params['LEARNING_RATE'] is None:\n",
" raise ValueError(\"Learning Rate empty\")\n",
"\n",
"if params['DATA_DIR'] is None:\n",
" raise ValueError(\"Data folder empty\")\n",
" \n",
"\n",
"# Getting training and validation data\n",
"path = params['DATA_DIR'] + '/data/fridgeObjects'\n",
"\n",
"# Getting training and validation data and training the CNN as done in 01_training_introduction.ipynb\n",
"data = (ImageList.from_folder(path)\n",
" .split_by_rand_pct(valid_pct=0.2, seed=10)\n",
" .split_by_rand_pct(valid_pct=0.5, seed=10)\n",
" .label_from_folder() \n",
" .transform(size=params['IM_SIZE']) \n",
" .databunch(bs=16) \n",
" .databunch(bs=BATCH_SIZE) \n",
" .normalize(imagenet_stats))\n",
"\n",
"# Get model and run training\n",
"learn = cnn_learner(\n",
" data,\n",
" ARCHITECTURE,\n",
" metrics=[accuracy]\n",
")\n",
"\n",
"epochs=1 # Change the value to 10 to see multiple runs, defaulting to 1 for quick run of notebook.\n",
"learn.fit_one_cycle(EPOCHS_HEAD, params['LEARNING_RATE'])\n",
"learn.unfreeze()\n",
"learn.fit(epochs, params['LEARNING_RATE'])\n",
"learn.fit_one_cycle(EPOCHS_BODY, params['LEARNING_RATE'])\n",
"\n",
"# Add log entries\n",
"training_losses = [x.numpy().ravel()[0] for x in learn.recorder.losses]\n",
"accuracy = [x[0].numpy().ravel()[0] for x in learn.recorder.metrics][-1]\n",
"\n",
"#run.log_list('training_loss', training_losses)\n",
"#run.log_list('validation_loss', learn.recorder.val_losses)\n",
"#run.log_list('error_rate', error_rate)\n",
"accuracy = [100*x[0].numpy().ravel()[0] for x in learn.recorder.metrics][-1]\n",
"run.log('data_dir',params['DATA_DIR'])\n",
"run.log('im_size', params['IM_SIZE'])\n",
"run.log('learning_rate', params['LEARNING_RATE'])\n",
"run.log('accuracy', float(accuracy)) # Logging our primary metric 'accuracy'\n",
"\n",
"# Save trained model\n",
"current_directory = os.getcwd()\n",
"output_folder = os.path.join(current_directory, 'outputs')\n",
"MODEL_NAME = 'im_classif_resnet50' # Name we will give our model both locally and on Azure\n",
"PICKLED_MODEL_NAME = MODEL_NAME + '.pkl'\n",
"model_name = 'im_classif_resnet' # Name we will give our model both locally and on Azure\n",
"os.makedirs(output_folder, exist_ok=True)\n",
"\n",
"learn.export(os.path.join(output_folder, PICKLED_MODEL_NAME))"
"learn.export(os.path.join(output_folder, model_name + \".pkl\"))"
]
},
{
@ -686,8 +352,6 @@
"source": [
"### 5. Setup and run Hyperdrive experiment\n",
"\n",
"Next step is to prepare scripts that AzureML Hyperdrive will use to train and evaluate models with selected hyperparameters. To run the model notebook from the Hyperdrive Run, all we need is to prepare an entry script which parses the hyperparameter arguments, passes them to the notebook, and records the results of the notebook to AzureML Run logs. \n",
"\n",
"#### 5.1 Create Experiment \n",
"Experiment is the main entry point into experimenting with AzureML. To create new Experiment or get the existing one, we pass our experimentation name 'hyperparameter-tuning'.\n"
]
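The cell that actually creates the Experiment is not shown in this hunk; a minimal sketch of the usual AzureML SDK call, assuming the workspace object `ws` defined earlier:

```python
from azureml.core import Experiment

# Create the experiment in the workspace, or retrieve it if it already exists
exp = Experiment(workspace=ws, name='hyperparameter-tuning')
```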
@ -708,10 +372,7 @@
"source": [
"#### 5.2. Define search space\n",
"\n",
"Now we define the search space of hyperparameters. For example, if you want to test different batch sizes of {64, 128, 256}, you can use azureml.train.hyperdrive.choice(64, 128, 256). To search from a continuous space, use uniform(start, end). For more options, see [Hyperdrive parameter expressions](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?view=azure-ml-py).\n",
"\n",
"In this notebook we use the ResNet50 architecture, and fix the number of epochs to 10.\n",
"In the search space, we set different learning rates and image sizes. Details about the hyperparameters can be found in [11_exploring_hyperparameters.ipynb notebook](https://github.com/microsoft/ComputerVision/blob/master/classification/notebooks/11_exploring_hyperparameters.ipynb).\n",
"Now we define the search space of hyperparameters. As shown below, to test discrete parameter values use 'choice()', and for uniform sampling use 'uniform()'. For more options, see [Hyperdrive parameter expressions](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.hyperdrive.parameter_expressions?view=azure-ml-py).\n",
"\n",
"Hyperdrive provides three different parameter sampling methods: 'RandomParameterSampling', 'GridParameterSampling', and 'BayesianParameterSampling'. Details about each method can be found [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters). Here, we use the 'RandomParameterSampling'."
]
@ -722,20 +383,13 @@
"metadata": {},
"outputs": [],
"source": [
"IM_SIZES = [299, 499]\n",
"LEARNING_RATES = [1e-3, 1e-4, 1e-5]\n",
"\n",
"# Hyperparameter search space\n",
"param_sampling = RandomParameterSampling( {\n",
" '--learning_rate': choice(LEARNING_RATES),\n",
" '--learning_rate': uniform(LEARNING_RATE_MIN, LEARNING_RATE_MAX),\n",
" '--im_size': choice(IM_SIZES)\n",
" }\n",
")\n",
"\n",
"primary_metric_name = 'accuracy'\n",
"primary_metric_goal = PrimaryMetricGoal.MAXIMIZE\n",
"max_concurrent_runs=4\n",
"\n",
"early_termination_policy = BanditPolicy(slack_factor=0.15, evaluation_interval=1, delay_evaluation=20)"
]
},
@ -781,7 +435,7 @@
"- early termination policy, in this case we use [Bandit Policy](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-tune-hyperparameters#bandit-policy)\n",
"- primary metric name reported by our runs, in this case it is accuracy \n",
"- the goal, which determines whether the primary metric has to be maximized/minimized, in this case it is to maximize our accuracy \n",
"- number of total child-runs, in this case it is 4\n",
"- number of total child-runs\n",
"\n",
"The bigger the search space, the more child-runs get triggered for better results."
]
@ -795,10 +449,10 @@
"hyperdrive_run_config = HyperDriveConfig(estimator=est,\n",
" hyperparameter_sampling=param_sampling,\n",
" policy=early_termination_policy,\n",
" primary_metric_name=primary_metric_name,\n",
" primary_metric_goal=primary_metric_goal,\n",
" max_total_runs=max_total_runs,\n",
" max_concurrent_runs= max_concurrent_runs)"
" primary_metric_name='accuracy',\n",
" primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n",
" max_total_runs=MAX_TOTAL_RUNS,\n",
" max_concurrent_runs=MAX_NODES)"
]
},
{
@ -816,7 +470,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "fff89f7fb8284f24a94932ca876cbae2",
"model_id": "5c51804ba4794f3aa163354fef634c59",
"version_major": 2,
"version_minor": 0
},
@ -826,25 +480,12 @@
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "9a3c69449ea34e48a4e0c884ced34538",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"_UserRunWidget(widget_settings={'childWidgetDisplay': 'popup', 'send_telemetry': False, 'log_level': 'INFO', '…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Now we submit the Run to our experiment. \n",
"hyperdrive_run = exp.submit(config=hyperdrive_run_config)\n",
"\n",
"# We can see the experiment progress from this notebook by using \n",
"widgets.RunDetails(hyperdrive_run).show()"
]
@ -857,18 +498,18 @@
{
"data": {
"text/plain": [
"{'runId': 'hyperparameter-tuning_1563770544897',\n",
" 'target': 'gpu-cluster-nc6',\n",
"{'runId': 'hyperparameter-tuning_1565107066432',\n",
" 'target': 'gpu-cluster',\n",
" 'status': 'Completed',\n",
" 'startTimeUtc': '2019-07-22T04:42:25.393015Z',\n",
" 'endTimeUtc': '2019-07-22T04:49:58.250673Z',\n",
" 'startTimeUtc': '2019-08-06T15:57:46.90426Z',\n",
" 'endTimeUtc': '2019-08-06T16:13:21.185098Z',\n",
" 'properties': {'primary_metric_config': '{\"name\": \"accuracy\", \"goal\": \"maximize\"}',\n",
" 'runTemplate': 'HyperDrive',\n",
" 'azureml.runsource': 'hyperdrive',\n",
" 'platform': 'AML',\n",
" 'baggage': 'eyJvaWQiOiAiNmY1Yjc5M2UtZjhiOS00NGY0LTk0N2YtNTg3N2ZjMDFjZmFjIiwgInRpZCI6ICI3MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMWRiNDciLCAidW5hbWUiOiAiMDRiMDc3OTUtOGRkYi00NjFhLWJiZWUtMDJmOWUxYmY3YjQ2In0',\n",
" 'ContentSnapshotId': 'a63feca7-742e-49c3-b568-9cf6a53b34c3'},\n",
" 'logFiles': {'azureml-logs/hyperdrive.txt': 'https://smoketesstorage0231aa20c.blob.core.windows.net/azureml/ExperimentRun/dcid.hyperparameter-tuning_1563770544897/azureml-logs/hyperdrive.txt?sv=2018-03-28&sr=b&sig=LL8Fx6UZhJ9jddaqS1xeR%2BHi98wUHPZ%2FYuAxGH3Y39I%3D&st=2019-07-22T04%3A39%3A59Z&se=2019-07-22T12%3A49%3A59Z&sp=r'}}"
" 'baggage': 'eyJvaWQiOiAiNWFlYTJmMzAtZjQxZC00ZDA0LWJiOGUtOWU0NGUyZWQzZGQ2IiwgInRpZCI6ICI3MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMWRiNDciLCAidW5hbWUiOiAiMDRiMDc3OTUtOGRkYi00NjFhLWJiZWUtMDJmOWUxYmY3YjQ2In0',\n",
" 'ContentSnapshotId': 'c662f56a-ff58-432e-b732-8a3bc6818778'},\n",
" 'logFiles': {'azureml-logs/hyperdrive.txt': 'https://pabuehlestorage1c7e31216.blob.core.windows.net/azureml/ExperimentRun/dcid.hyperparameter-tuning_1565107066432/azureml-logs/hyperdrive.txt?sv=2018-11-09&sr=b&sig=8D2gwxb%2BYn7nbzgGVHE7QSzJ%2FG7C1swzmLD7%2Fior2vE%3D&st=2019-08-06T17%3A36%3A08Z&se=2019-08-07T01%3A46%3A08Z&sp=r'}}"
]
},
"execution_count": 14,
@ -877,7 +518,7 @@
}
],
"source": [
"hyperdrive_run.wait_for_completion()\n"
"hyperdrive_run.wait_for_completion()"
]
},
{
@ -911,15 +552,15 @@
"name": "stdout",
"output_type": "stream",
"text": [
"* Best Run Id:hyperparameter-tuning_1563770544897_0\n",
"* Best Run Id:hyperparameter-tuning_1565107066432_8\n",
"Run(Experiment: hyperparameter-tuning,\n",
"Id: hyperparameter-tuning_1563770544897_0,\n",
"Id: hyperparameter-tuning_1565107066432_8,\n",
"Type: azureml.scriptrun,\n",
"Status: Completed)\n",
"\n",
"* Best hyperparameters:\n",
"{'--data-folder': '$AZUREML_DATAREFERENCE_workspaceblobstore', '--im_size': '299', '--learning_rate': '0.001'}\n",
"Accuracy = 0.26923078298568726\n"
"{'--data-folder': '$AZUREML_DATAREFERENCE_workspaceblobstore', '--im_size': '150', '--learning_rate': '0.000552896672441507'}\n",
"Accuracy = 92.53731369972229\n"
]
}
],
@ -956,8 +597,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading outputs/im_classif_resnet50.pkl..\n",
"119547037146038801333356\n"
"Downloading outputs/im_classif_resnet.pkl..\n"
]
}
],
@ -968,11 +608,10 @@
"os.makedirs(output_folder, exist_ok=True)\n",
"\n",
"for f in best_run.get_file_names():\n",
" if f.startswith('outputs/im_classif_resnet50'):\n",
" if f.startswith('outputs/im_classif_resnet'):\n",
" print(\"Downloading {}..\".format(f))\n",
" best_run.download_file('outputs/im_classif_resnet50.pkl')\n",
"saved_model =joblib.load('im_classif_resnet50.pkl')\n",
"print(saved_model)"
" best_run.download_file('outputs/im_classif_resnet.pkl')\n",
"saved_model =joblib.load('im_classif_resnet.pkl')"
]
},
{
@ -984,12 +623,27 @@
"saved_model.predict(image)\n",
"```"
]
},
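As a concrete (hypothetical) example, scoring a single local image with the downloaded model might look like the sketch below, assuming fastai v1 as used elsewhere in this repository and a test image at the hypothetical path `test.jpg`.

```python
# Sketch: score one image with the model downloaded from the best run.
from fastai.vision import open_image

im = open_image("test.jpg")  # hypothetical path to a local test image
pred_class, pred_idx, probabilities = saved_model.predict(im)
print(pred_class, probabilities)
```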
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7. Clean up\n",
"\n",
"To avoid unnecessary expenses, all resources which were created in this notebook need to get deleted once parameter search is concluded. To simplify this clean-up step, we recommend creating a new resource group to run this notebook. This resource group can then be deleted, e.g. using the Azure Portal, which will remove all created resources."
]
},
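If a programmatic clean-up is preferred over the portal, a minimal sketch could look as follows; it assumes the workspace object `ws` and the compute target name (`gpu-cluster`) created earlier in this notebook.

```python
# Sketch only: clean up resources created by this notebook.
from azureml.core.compute import ComputeTarget

# Delete just the GPU cluster used for the Hyperdrive runs...
ComputeTarget(workspace=ws, name="gpu-cluster").delete()

# ...or delete the entire workspace, dependent resources included.
# Deleting the surrounding resource group via the portal or CLI removes everything.
# ws.delete(delete_dependent_resources=True)
```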
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"celltoolbar": "Tags",
"kernelspec": {
"display_name": "cv",
"display_name": "Python (cv)",
"language": "python",
"name": "cv"
},

View file

@ -6,6 +6,12 @@ The majority of state-of-the-art systems for image similarity use DNNs to comput
A major difference between modern image similarity approaches is how the DNN is trained. A simple but quite powerful approach is to use a standard image classification loss - this is the approach taken in this repository, and explained in the [classification](../classification/README.md) folder. More accurate similarity measures are based on DNNs which are trained explicitly for image similarity, such as the [FaceNet](https://arxiv.org/pdf/1503.03832.pdf) work which uses a Siamese network architecture. FaceNet-like approaches will be added to this repository at a later point.
## Frequently asked questions
Answers to Frequently Asked Questions such as "How many images do I need to train a model?" or "How to annotate images?" can be found in the [FAQ.md](FAQ.md) file. For image classification specific questions, see the [FAQ.md](../classification/FAQ.md) in the classification folder.
## Notebooks
We provide several notebooks to show how image similarity algorithms can be designed and evaluated.
@ -14,11 +20,10 @@ We provide several notebooks to show how image similarity algorithms can be desi
| --- | --- |
| [00_webcam.ipynb](./notebooks/00_webcam.ipynb)| Quick start notebook which demonstrates how to build an image retrieval system using a single image or webcam as input.
| [01_training_and_evaluation_introduction.ipynb](./notebooks/01_training_and_evaluation_introduction.ipynb)| Notebook which explains the basic concepts around model training and evaluation, based on using DNNs trained for image classification.|
| [11_exploring_hyperparameters.ipynb](notebooks/11_exploring_hyperparameters.ipynb)| Finds optimal model parameters using grid search. |
## Coding guidelines
See the [coding guidelines](../classification/#coding-guidelines) in the image classification folder.
## Frequently asked questions
Answers to Frequently Asked Questions such as "How many images do I need to train a model?" or "How to annotate images?" can be found in the [FAQ.md](FAQ.md) file. For image classification specific questions, see the [FAQ.md](../classification/FAQ.md) in the classification folder.

File diff suppressed because one or more lines are too long

View file

@ -85,7 +85,7 @@ def classification_notebooks():
),
"24_exploring_hyperparameters_on_azureml": os.path.join(
folder_notebooks, "24_exploring_hyperparameters_on_azureml.ipynb"
)
),
}
return paths
@ -100,6 +100,9 @@ def similarity_notebooks():
"01": os.path.join(
folder_notebooks, "01_training_and_evaluation_introduction.ipynb"
),
"11": os.path.join(
folder_notebooks, "11_exploring_hyperparameters.ipynb"
),
}
return paths
@ -252,14 +255,16 @@ def testing_databunch(tmp_session):
def pytest_addoption(parser):
parser.addoption("--subscription_id",
help="Azure Subscription Id to create resources in")
parser.addoption("--resource_group",
help="Name of the resource group")
parser.addoption("--workspace_name",
help="Name of Azure ML Workspace")
parser.addoption("--workspace_region",
help="Azure region to create the workspace in")
parser.addoption(
"--subscription_id",
help="Azure Subscription Id to create resources in",
)
parser.addoption("--resource_group", help="Name of the resource group")
parser.addoption("--workspace_name", help="Name of Azure ML Workspace")
parser.addoption(
"--workspace_region", help="Azure region to create the workspace in"
)
@pytest.fixture
def subscription_id(request):
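The fixture bodies are cut off by this diff; a typical implementation simply reads the corresponding option back from the pytest configuration, roughly like the sketch below (illustrative only).

```python
# Sketch of how such a fixture usually consumes the command-line option
# registered in pytest_addoption above.
@pytest.fixture
def subscription_id(request):
    return request.config.getoption("--subscription_id")
```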

View file

@ -23,3 +23,23 @@ def test_01_notebook_run(similarity_notebooks):
nb_output = sb.read_notebook(OUTPUT_NOTEBOOK)
assert nb_output.scraps["median_rank"].data <= 10
@pytest.mark.notebooks
@pytest.mark.linuxgpu
def test_11_notebook_run(similarity_notebooks, tiny_ic_data_path):
notebook_path = similarity_notebooks["11"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
parameters=dict(
PM_VERSION=pm.__version__,
        # Speed up testing since otherwise this would take ~12 minutes on a V100
DATA_PATHS=[tiny_ic_data_path],
REPS=1,
IM_SIZES=[60, 100],
),
kernel_name=KERNEL_NAME,
)
nb_output = sb.read_notebook(OUTPUT_NOTEBOOK)
assert min(nb_output.scraps["ranks"].data) <= 30

View file

@ -113,9 +113,11 @@ def test_24_notebook_run(
subscription_id,
resource_group,
workspace_name,
workspace_region
workspace_region,
):
notebook_path = classification_notebooks["24_exploring_hyperparameters_on_azureml"]
notebook_path = classification_notebooks[
"24_exploring_hyperparameters_on_azureml"
]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
@ -125,8 +127,9 @@ def test_24_notebook_run(
resource_group=resource_group,
workspace_name=workspace_name,
workspace_region=workspace_region,
epochs=1,
max_total_runs=1
MAX_NODES=2,
MAX_TOTAL_RUNS=1,
IM_SIZES=[30, 40],
),
kernel_name=KERNEL_NAME,
)

View file

@ -47,4 +47,21 @@ def test_01_notebook_run(similarity_notebooks, tiny_ic_data_path):
),
kernel_name=KERNEL_NAME,
)
nb_output = sb.read_notebook(OUTPUT_NOTEBOOK)
@pytest.mark.notebooks
def test_11_notebook_run(similarity_notebooks, tiny_ic_data_path):
notebook_path = similarity_notebooks["11"]
pm.execute_notebook(
notebook_path,
OUTPUT_NOTEBOOK,
parameters=dict(
PM_VERSION=pm.__version__,
DATA_PATHS=[tiny_ic_data_path],
REPS=1,
LEARNING_RATES=[1e-4],
IM_SIZES=[30],
EPOCHS=[1],
),
kernel_name=KERNEL_NAME,
)

View file

@ -1,8 +1,8 @@
# Repository Metrics
[![Build Status](https://dev.azure.com/best-practices/computervision/_apis/build/status/repo-metrics?branchName=master)](https://dev.azure.com/best-practices/computervision/_build/latest?definitionId=27&branchName=master)
[![Build Status](https://dev.azure.com/best-practices/computervision/_apis/build/status/repo-metrics?branchName=staging)](https://dev.azure.com/best-practices/computervision/_build/latest?definitionId=27&branchName=staging)
We developed a script that allows us to track the metrics of the ComputerVisionBestPractices repo. Some of the metrics we can track are listed here:
We developed a script that allows us to track the repo metrics. Some of the metrics we can track are listed here:
* Number of stars
* Number of forks
@ -10,17 +10,27 @@ We developed a script that allows us to track the metrics of the ComputerVisionB
* Number of views
* Number of lines of code
To see the full list of metrics, see [git_stats.py](scripts/repo_metrics/git_stats.py)
To see the full list of metrics, see [git_stats.py](git_stats.py)
The first step is to set up the credentials: copy the configuration file and fill in your GitHub and CosmosDB credentials:
cp scripts/repo_metrics/config_template.py scripts/repo_metrics/config.py
cp tools/repo_metrics/config_template.py tools/repo_metrics/config.py
To track the current state of the repository and save it to CosmosDB:
python scripts/repo_metrics/track_metrics.py --github_repo "https://github.com/Microsoft/ComputerVision" --save_to_database
python tools/repo_metrics/track_metrics.py --github_repo "https://github.com/Microsoft/ComputerVision" --save_to_database
To track an event related to this repository and save it to CosmosDB:
python scripts/repo_metrics/track_metrics.py --event "Today we did our first blog of the project" --event_date 2018-12-01 --save_to_database
python tools/repo_metrics/track_metrics.py --event "Today we did our first blog of the project" --event_date 2018-12-01 --save_to_database
### Setting up Azure CosmosDB
The API used to track the GitHub metrics is the [Mongo API](https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction).
The database name and collection names are defined in the [config file](config_template.py). There are two main collections, `COLLECTION_GITHUB_STATS` and `COLLECTION_EVENTS`, which store the information described in the previous section.
**IMPORTANT NOTE**: If the database and the collections are created directly through the portal, a common partition key should be defined. We recommend using `date` as the partition key.
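To make the collection layout concrete, a minimal pymongo sketch of writing one metrics document could look like the following; the database and collection names come from [config_template.py](config_template.py), while the connection string and the field values are placeholders.

```python
# Sketch: insert one metrics snapshot into the github_stats collection.
from datetime import datetime
from pymongo import MongoClient

client = MongoClient("<CONNECTION_STRING>")       # CosmosDB Mongo API connection string
collection = client["cv_stats"]["github_stats"]   # DATABASE / COLLECTION_GITHUB_STATS
collection.insert_one({"date": datetime.now(), "stars": 1000, "forks": 200})
```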

View file

@ -3,10 +3,12 @@
# Github token
# More info: https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/
GITHUB_TOKEN = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
GITHUB_TOKEN = "<GITHUB_TOKEN>"
# CosmosDB Mongo API
CONNECTION_STRING = "mongodb://XXXXXXXXXXXXXXXXXXXXXXXXX.documents.azure.com:10255/?ssl=true&replicaSet=globaldb"
# * Azure Portal: Settings -> Connection String -> PRIMARY CONNECTION STRING
# * For example, 'mongodb://<USERNAME>:<PRIMARY PASSWORD>@<HOST>:<PORT>/?ssl=true&replicaSet=globaldb'
CONNECTION_STRING = "<CONNECTION_STRING>"
DATABASE = "cv_stats"
COLLECTION_GITHUB_STATS = "github_stats"
COLLECTION_EVENTS = "events"

View file

@ -14,7 +14,6 @@ import logging
from datetime import datetime
from dateutil.parser import isoparse
from pymongo import MongoClient
from datetime import datetime
from tools.repo_metrics.git_stats import Github
from tools.repo_metrics.config import (
GITHUB_TOKEN,
@ -32,6 +31,7 @@ log = logging.getLogger()
def parse_args():
"""Argument parser.
Returns:
obj: Parser.
"""
@ -61,12 +61,14 @@ def parse_args():
def connect(uri="mongodb://localhost"):
"""Mongo connector.
Args:
uri (str): Connection string.
Returns:
obj: Mongo client.
"""
client = MongoClient(uri, serverSelectionTimeoutMS=1000)
client = MongoClient(uri, serverSelectionTimeoutMS=5000)
# Send a query to the server to see if the connection is working.
try:
@ -78,9 +80,11 @@ def connect(uri="mongodb://localhost"):
def event_as_dict(event, date):
"""Encodes an string event input as a dictionary with the date.
Args:
event (str): Details of a event.
date (datetime): Date of the event.
Returns:
dict: Dictionary with the event and the date.
"""
@ -89,8 +93,10 @@ def event_as_dict(event, date):
def github_stats_as_dict(github):
"""Encodes Github statistics as a dictionary with the date.
Args:
obj: Github object.
Returns:
dict: Dictionary with Github details and the date.
"""
@ -125,6 +131,7 @@ def github_stats_as_dict(github):
def tracker(args):
"""Main function to track metrics.
Args:
args (obj): Parsed arguments.
"""

View file

@ -195,7 +195,7 @@ class ParameterSweeper:
one_cycle_policy=True,
)
def __init__(self, **kwargs) -> None:
def __init__(self, metric_name="accuracy", **kwargs) -> None:
"""
Initialize class with default params if kwargs is empty.
Otherwise, initialize params with kwargs.
@ -214,6 +214,8 @@ class ParameterSweeper:
one_cycle_policy=[self.default_params.get("one_cycle_policy")],
)
self.metric_name = metric_name
self.param_order = tuple(self.params.keys())
self.update_parameters(**kwargs)
@ -411,8 +413,8 @@ class ParameterSweeper:
Otherwise overwrite the corresponding self.params key.
"""
for k, v in kwargs.items():
if k not in self.params.keys():
raise Exception("Parameter {k} is invalid.")
if k not in set(self.params.keys()):
raise Exception(f"Parameter {k} is invalid.")
if v is None:
continue
self.params[k] = v
@ -420,7 +422,11 @@ class ParameterSweeper:
return self
def run(
self, datasets: List[Path], reps: int = 3, early_stopping: bool = False
self,
datasets: List[Path],
reps: int = 3,
early_stopping: bool = False,
metric_fct=None,
) -> pd.DataFrame:
""" Performs the experiment.
Iterates through the number of specified <reps>, the list permutations
@ -440,8 +446,8 @@ class ParameterSweeper:
res = dict()
for rep in range(reps):
res[rep] = dict()
for i, permutation in enumerate(self.permutations):
print(
f"Running {i+1} of {len(self.permutations)} permutations. "
@ -462,15 +468,20 @@ class ParameterSweeper:
dataset, permutation, early_stopping
)
_, metric = learn.validate(
learn.data.valid_dl, metrics=[accuracy]
)
if metric_fct is None:
_, metric = learn.validate(
learn.data.valid_dl, metrics=[accuracy]
)
else:
metric = metric_fct(learn)
res[rep][stringified_permutation][data_name][
"duration"
] = duration
res[rep][stringified_permutation][data_name][
"accuracy"
self.metric_name
] = float(metric)
learn.destroy()
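A minimal usage sketch of the new `metric_fct` hook follows; the import path, parameter names, and dataset path are assumptions based on the rest of this repository, not taken from this diff.

```python
# Sketch: sweep hyperparameters while reporting error rate instead of accuracy.
from fastai.metrics import error_rate
from utils_cv.classification.parameter_sweeper import ParameterSweeper  # assumed path

def error_rate_metric(learn):
    # Custom metric hook: evaluate error rate on the validation set.
    _, metric = learn.validate(learn.data.valid_dl, metrics=[error_rate])
    return metric

sweeper = ParameterSweeper(metric_name="error_rate")
sweeper.update_parameters(learning_rate=[1e-4, 1e-3], im_size=[150, 300])  # assumed keys
df = sweeper.run(datasets=["data/fridgeObjects"], reps=1, metric_fct=error_rate_metric)
```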