Staging (#442)

* Removed submodules
* Add back submodules using https://
* Update FAQ.md
* Added object detection readme
* Fixes to ic (#437)
* Fixes to ic
* Matplotlib bug fix
* Matplotlib matrix plot bug fix
* Fix 01 notebook heatmap
* Revert env
* Revert env yml
* Remove matplotlib
@@ -1,6 +1,6 @@
 [submodule "contrib/crowd_counting/crowdcounting/third_party/tf-pose-estimation"]
 	path = contrib/crowd_counting/crowdcounting/third_party/tf-pose-estimation
-	url = git@github.com:lixzhang/tf-pose-estimation.git
+	url = https://github.com/lixzhang/tf-pose-estimation.git
 [submodule "contrib/crowd_counting/crowdcounting/third_party/mcnn"]
 	path = contrib/crowd_counting/crowdcounting/third_party/mcnn
-	url = git@github.com:lixzhang/crowdcount-mcnn.git
+	url = https://github.com/lixzhang/crowdcount-mcnn.git
@@ -34,7 +34,6 @@ dependencies:
 - pre-commit>=1.14.4
 - pyyaml>=5.1.2
 - requests>=2.22.0
 - cython>=0.29.1
 - pip:
   - nvidia-ml-py3
   - nteract-scrapbook
@@ -73,7 +73,7 @@
 ],
 "source": [
 "import sys\n",
-"sys.path.append(\"../../../\")\n",
+"sys.path.append(\"../../\")\n",
 "import io\n",
 "import os\n",
 "import time\n",
@@ -22,7 +22,7 @@
 "source": [
 "In this notebook, we'll cover how to test different hyperparameters for a particular dataset and how to benchmark different parameters across a group of datasets.\n",
 "\n",
-"For an example of how to scale up with remote GPU clusters on Azure Machine Learning, please view [24_exploring_hyperparameters_on_azureml.ipynb](../24_exploring_hyperparameters_on_azureml).\n",
+"For an example of how to scale up with remote GPU clusters on Azure Machine Learning, please view [24_exploring_hyperparameters_on_azureml.ipynb](24_exploring_hyperparameters_on_azureml.ipynb).\n",
 "## Table of Contents\n",
 "\n",
 "* [Testing hyperparameters](#hyperparam)\n",
@@ -53,7 +53,7 @@
 "metadata": {},
 "source": [
 "Ensure edits to libraries are loaded and plotting is shown in the notebook."
 ]
 },
 {
 "cell_type": "code",
[Binary data: new image files added; the diff viewer showed only their widths, heights, and sizes (11 KiB–228 KiB). Named additions include:
scenarios/classification/media/deployment/application_insights_all_charts.jpg
scenarios/classification/media/deployment/experiment_run_recorded_metrics.jpg
scenarios/classification/media/deployment/failures_requests_line_chart.jpg
scenarios/classification/media/deployment/imagedatabunch_batchsize_error.jpg
scenarios/classification/media/deployment/logs_failed_request_details.jpg
scenarios/classification/media/deployment/webservice_performance_metrics.jpg]
@@ -16,6 +16,10 @@ This document tries to answer frequent questions related to object detection. Fo
 * [Intersection-over-Union overlap metric](#intersection-over-union-overlap-metric)
 * [Non-maxima suppression](#non-maxima-suppression)
+* [Mean Average Precision](#mean-average-precision)
+* Training
+  * [How to improve accuracy?](#how-to-improve-accuracy)

 ## General
@@ -85,3 +89,14 @@ Detection results with confidence scores before (left) and after non-maxima suppression

### Mean Average Precision

Once trained, the quality of the model can be measured using different criteria, such as precision, recall, accuracy, area-under-curve, etc. A common metric, used for the Pascal VOC object recognition challenge, is to measure the Average Precision (AP) for each class. Average Precision takes confidence in the detections into account and hence assigns a smaller penalty to false detections with low confidence. For a description of Average Precision, see [Everingham et al.](http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf). The mean Average Precision (mAP) is then computed by taking the average over all APs.
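As a concrete illustration, the AP/mAP computation can be sketched as below. This is a simplified, hypothetical version (rectangle-rule area under the precision-recall curve) that assumes detections have already been matched against ground truth; it is not the exact Pascal VOC evaluation code.

```python
def average_precision(detections, num_gt):
    """AP for one class. detections: list of (confidence, is_true_positive);
    num_gt: number of ground-truth objects of that class."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap = prev_recall = 0.0
    for _confidence, is_tp in detections:
        if is_tp:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / num_gt
        # accumulate area under the precision-recall curve
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap


def mean_average_precision(per_class):
    """per_class: dict mapping class name -> (detections, num_gt)."""
    aps = [average_precision(dets, n) for dets, n in per_class.values()]
    return sum(aps) / len(aps)
```

Note how the low-confidence false detection in `[(0.9, True), (0.8, False), (0.7, True)]` lowers precision only for the tail of the curve, which is the smaller-penalty behavior described above.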
## Training

### How to improve accuracy?

One way to improve accuracy is to optimize the model architecture or the training procedure. The following parameters tend to have the highest influence on accuracy:
- Image resolution: increase, e.g. to 1200 pixels input resolution, by setting `IM_SIZE = 1200`.
- Number of proposals: increase, e.g. to `rpn_pre_nms_top_n_train = rpn_post_nms_top_n_train = 10000` and `rpn_pre_nms_top_n_test = rpn_post_nms_top_n_test = 5000`.
- Learning rate and number of epochs: the respective default values specified e.g. in the 01 notebook should work well in most cases; however, somewhat higher or lower values for the learning rate and the number of epochs are worth trying.
See also the [image classification FAQ](../classification/FAQ.md) for more suggestions on how to improve model accuracy or increase inference/training speed.
@@ -2,17 +2,13 @@

This directory provides examples and best practices for building object detection systems. Our goal is to enable users to bring their own datasets and train a high-accuracy model easily and quickly. To this end, we provide example notebooks with pre-set default parameters shown to work well on a variety of datasets, as well as extensive documentation of common pitfalls, best practices, etc.

-Object Detection is one of the main problems in Computer Vision. Traditionally, this required expert knowledge to identify and implement so-called "features" that highlight the position of objects in the image. Starting in 2012 with the famous AlexNet paper, Deep Neural Networks have been used to automatically find these features. This led to a huge improvement in the field for a large range of problems.
+Object Detection is one of the main problems in Computer Vision. Traditionally, this required expert knowledge to identify and implement so-called "features" that highlight the position of objects in the image. Starting in 2012 with the famous AlexNet and Fast(er) R-CNN papers, Deep Neural Networks have been used to automatically find these features. This led to a huge improvement in the field for a large range of problems.

This repository uses [torchvision's](https://pytorch.org/docs/stable/torchvision/index.html) Faster R-CNN implementation, which has been shown to work well on a wide variety of Computer Vision problems. See the [FAQ](FAQ.md) for an explanation of the underlying data science aspects.
-We recommend running these samples on a machine with a GPU, on either Windows or Linux. While a GPU is technically not required, training gets prohibitively slow even when using only a few dozen images.
+We recommend running these samples on a machine with a GPU, on either Linux or (~20% slower) Windows. While a GPU is technically not required, training gets prohibitively slow even when using only a few dozen images.

-```diff
-+ (August 2019) This is work-in-progress and more functionality and documentation will be added continuously.
-```

## Frequently asked questions
@@ -27,8 +23,10 @@ We provide several notebooks to show how object detection algorithms can be desi
| --- | --- |
| [00_webcam.ipynb](./00_webcam.ipynb) | Quick-start notebook which demonstrates how to build an object detection system using a single image or webcam as input. |
| [01_training_and_evaluation_introduction.ipynb](./01_training_and_evaluation_introduction.ipynb) | Notebook which explains the basic concepts around model training and evaluation. |
| [02_mask_rcnn.ipynb](./02_mask_rcnn.ipynb) | In addition to detecting objects, also finds their precise pixel masks in an image. |
| [11_exploring_hyperparameters_on_azureml.ipynb](./11_exploring_hyperparameters_on_azureml.ipynb) | Performs highly parallel parameter sweeping using AzureML's HyperDrive. |
| [12_hard_negative_sampling.ipynb](./12_hard_negative_sampling.ipynb) | Demonstrates how to sample hard negatives to improve model performance. |
| [20_deployment_on_kubernetes.ipynb](./20_deployment_on_kubernetes.ipynb) | Deploys a trained model using AzureML. |

## Contribution guidelines