Computer Vision

This repository provides examples and best practice guidelines for building computer vision systems. All examples are given as Jupyter notebooks, and use PyTorch as the deep learning library.

Overview

The goal of this repository is to accelerate the development of computer vision applications. Rather than creating implementations from scratch, the focus is on providing examples and links to existing state-of-the-art libraries. In addition, having worked in this space for many years, we aim to answer common questions, point out frequently observed pitfalls, and show how to use the cloud for training and deployment.

Scenarios

The following is a summary of commonly used Computer Vision scenarios that are covered in this repository. For each of these scenarios, we give you the tools to effectively build your own model. This ranges from simple tasks, such as fine-tuning a model on your own data (a minimal fine-tuning sketch follows the scenario list below), to more complex ones, such as hard-negative mining and even model deployment.

Classification: Image Classification is a supervised machine learning technique that allows you to learn and predict the category of a given image.
Similarity: Image Similarity computes a similarity score for a pair of images. Given a query image, it allows you to identify the most similar image in a given dataset.
Detection: Object Detection is a supervised machine learning technique that allows you to detect and localize objects within an image using bounding boxes.
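
As a minimal illustration of the fine-tuning workflow, the sketch below adapts an ImageNet-pretrained ResNet from torchvision to a new image classification dataset. This is a generic PyTorch example rather than the repository's own code, and the folder layout (data/train/<class_name>/*.jpg), batch size, learning rate, and epoch count are placeholder assumptions.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Basic preprocessing; a real project would add augmentation and normalization.
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_ds = datasets.ImageFolder("data/train", transform=transform)  # placeholder path
    train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

    # Load an ImageNet-pretrained backbone and replace the classification head.
    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):  # a few epochs are often enough when fine-tuning
        for images, labels in train_dl:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

Replacing only the final layer and training briefly on the new data is often sufficient to adapt a pretrained network to a new task; the notebooks in this repository cover this workflow in much more depth.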

Getting Started

To get started:

  1. (Optional) Create an Azure Data Science Virtual Machine with e.g. a V100 GPU (instructions, price table).
  2. Install Anaconda with Python >= 3.6; Miniconda is a quick way to get started. This step can be skipped if working on a Data Science Virtual Machine.
  3. Clone the repository:
    git clone https://github.com/Microsoft/ComputerVision
    
  4. Install the conda environment; you'll find the environment.yml file in the root directory. To build the conda environment:

    If you are on Windows, uncomment the line "- git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI" in environment.yml before running the following command. If you are on Linux, uncomment "- pycocotools>=2.0" instead.

    conda env create -f environment.yml
    
  5. Activate the conda environment and register it with Jupyter:
    conda activate cv
    python -m ipykernel install --user --name cv --display-name "Python (cv)"
    
    If you would like to use JupyterLab, install the jupyter-webrtc widget:
    jupyter labextension install jupyter-webrtc
    
  6. Start the Jupyter notebook server:
    jupyter notebook
    
  7. At this point, you should be able to run the notebooks in this repo.
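
Optionally, as a quick sanity check (not part of the official setup steps), the short snippet below, run from the activated cv environment, confirms that PyTorch is installed and that the GPU is visible:

    import sys
    import torch

    print(sys.version)                # should report Python >= 3.6
    print(torch.__version__)          # PyTorch version inside the cv environment
    print(torch.cuda.is_available())  # True on a correctly configured GPU machine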

As an alternative to the steps above, if you want to install only the 'utils_cv' library (without creating a new conda environment), you can do so by running

pip install git+https://github.com/microsoft/ComputerVision.git@master#egg=utils_cv

or by downloading the repo and then running pip install . in the root directory.
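
Either way, a quick way to verify the installation (assuming only that the package installs under the name utils_cv) is:

    import utils_cv
    print(utils_cv.__file__)  # prints the install location if the import succeeded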

Introduction

Note that for certain computer vision problems, you may not need to build your own models. Instead, pre-built or easily customizable solutions exist which do not require any custom coding or machine learning expertise. We strongly recommend evaluating if these can sufficiently solve your problem. If these solutions are not applicable, or the accuracy of these solutions is not sufficient, then resorting to more complex and time-consuming custom approaches may be necessary.

The following Microsoft services offer simple solutions to address common computer vision tasks:

  • Vision Services are a set of pre-trained REST APIs which can be called for image tagging, face recognition, OCR, video analytics, and more. These APIs work out of the box and require minimal expertise in machine learning, but have limited customization capabilities. See the various demos available to get a feel for the functionality (e.g. Computer Vision); a minimal example call is sketched after this list.

  • Custom Vision is a SaaS service to train and deploy a model as a REST API given a user-provided training set. All steps including image upload, annotation, and model deployment can be performed using either the UI or a Python SDK. Training image classification or object detection models can be achieved with minimal machine learning expertise. Custom Vision offers more flexibility than using the pre-trained cognitive services APIs, but requires the user to bring and annotate their own data.
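
The hypothetical snippet below shows what such a pre-trained REST API call can look like, tagging a local image with the Computer Vision analyze endpoint; the region, API version (v2.0 here), subscription key, and file name are placeholders that depend on your own subscription.

    import requests

    # Placeholders: replace with your own region, key, and image.
    endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-subscription-key>",
        "Content-Type": "application/octet-stream",
    }
    params = {"visualFeatures": "Tags,Description"}

    with open("street.jpg", "rb") as image_file:
        response = requests.post(endpoint, headers=headers, params=params, data=image_file)

    print(response.json())  # tags, captions, and confidence scores for the image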

Build Your Own Computer Vision Model

If you need to train your own model, the following services and links provide additional information that is likely useful.

  • Azure Machine Learning service (AzureML) is a service that helps users accelerate the training and deployment of machine learning models. While not specific to computer vision workloads, the AzureML Python SDK can be used for scalable and reliable training and deployment of machine learning solutions to the cloud. We leverage Azure Machine Learning in several of the notebooks within this repository (e.g. deployment to Azure Kubernetes Service); a minimal workspace-setup sketch follows this list.

  • Azure AI Reference architectures provide a set of examples (backed by code) of how to build common AI-oriented workloads that leverage multiple cloud components. While not computer vision specific, these reference architectures cover several machine learning workloads such as model deployment or batch scoring.
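
As a rough sketch of the AzureML workspace setup referenced above (covered step by step in the 20_azure_workspace_setup notebook), a workspace can be created or reused with the azureml-core SDK; the workspace name, subscription id, resource group, and region below are placeholders.

    from azureml.core import Workspace

    # All names, ids, and regions below are placeholders.
    ws = Workspace.create(
        name="cv-workspace",
        subscription_id="<your-subscription-id>",
        resource_group="cv-resource-group",
        create_resource_group=True,
        location="eastus",
        exist_ok=True,      # reuse the workspace if it already exists
    )
    ws.write_config()       # cache the connection details locally

    # Later sessions (and the notebooks) can then reconnect with:
    # ws = Workspace.from_config()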

Computer Vision Domains

Most applications in computer vision (CV) fall into one of these 4 categories:

  • Image classification: Given an input image, predict what object is present in the image. This is typically the easiest CV problem to solve; however, classification requires objects to be reasonably large in the image.

       Image classification visualization

  • Object Detection: Given an input image, identify and locate which objects are present (using rectangular coordinates). Object detection can find small objects in an image. Compared to image classification, both model training and manual image annotation are more time-consuming for object detection, since both the label and the location of each object are required.

       Object detection visualization

  • Image Similarity: Given an input image, find all similar objects in images from a reference dataset. Here, rather than predicting a label and/or rectangle, the task is to sort through a reference dataset to find objects similar to those in the query image (see the embedding-based sketch after this list).

       Image similarity visualization

  • Image Segmentation: Given an input image, assign a label to every pixel (e.g., background, bottle, hand, sky, etc.). In practice, this problem is less common in industry, in large part due to the time required to label the ground-truth segmentations needed to train a solution.

       Image segmentation visualization
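
To make the image similarity description concrete, the sketch below (an illustrative embedding-based approach, not the repository's own implementation) encodes images with a pretrained ResNet and ranks a small reference set by cosine similarity to a query image; all file names are placeholders.

    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Drop the classification head so the network outputs a feature vector.
    resnet = models.resnet18(pretrained=True)
    encoder = torch.nn.Sequential(*list(resnet.children())[:-1])
    encoder.eval()

    def embed(path):
        """Return an L2-normalized embedding for a single image."""
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            v = encoder(x).flatten()
        return v / v.norm()

    query = embed("query.jpg")  # placeholder file names
    reference = {p: embed(p) for p in ["ref1.jpg", "ref2.jpg", "ref3.jpg"]}

    # Rank reference images by cosine similarity to the query (higher = more similar).
    ranked = sorted(reference.items(), key=lambda kv: float(query @ kv[1]), reverse=True)
    print(ranked[0][0])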

Build Status

VM Testing

Build Type Branch Status Branch Status
Linux GPU master Build Status staging Build Status
Linux CPU master Build Status staging Build Status
Windows GPU master Build Status staging Build Status
Windows CPU master Build Status staging Build Status
AzureML Notebooks master Build Status staging Build Status

AzureML Testing

Build Type Branch Status Branch Status
Linux GPU master Build Status staging Build Status
Linux CPU master Build Status staging Build Status
Notebook unit GPU master Build Status staging Build Status
Nightly GPU master Build Status staging Build Status

Contributing

This project welcomes contributions and suggestions. Please see our contribution guidelines.

Data/Telemetry

The Azure Machine Learning image classification notebooks (20_azure_workspace_setup, 21_deployment_on_azure_container_instances, 22_deployment_on_azure_kubernetes_service, 23_aci_aks_web_service_testing, and 24_exploring_hyperparameters_on_azureml) collect browser usage data and send it to Microsoft to help improve our products and services. Read Microsoft's privacy statement to learn more.

To opt out of tracking, please go to the raw .ipynb files and remove the following line of code (the URL will be slightly different depending on the file):

    "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/ComputerVision/classification/notebooks/21_deployment_on_azure_container_instances.png)"