Best Practices, code samples, and documentation for Computer Vision.

Update July: Added support for action recognition and tracking in the new release v1.2.

Computer Vision

In recent years, we've seen extraordinary growth in Computer Vision, with applications in face recognition, image understanding, search, drones, mapping, semi-autonomous and autonomous vehicles. Key to many of these applications are visual recognition tasks such as image classification, object detection and image similarity.

This repository provides examples and best practice guidelines for building computer vision systems. The goal of this repository is to build a comprehensive set of tools and examples that leverage recent advances in Computer Vision algorithms and neural architectures, and that show how to operationalize such systems. Rather than creating implementations from scratch, we draw from existing state-of-the-art libraries and build additional utility around loading image data, optimizing and evaluating models, and scaling up to the cloud. In addition, having worked in this space for many years, we aim to answer common questions, point out frequently observed pitfalls, and show how to use the cloud for training and deployment.

We hope that these examples and utilities can reduce "time to market" by orders of magnitude, simplifying the experience from defining the business problem to developing a solution. In addition, the example notebooks serve as guidelines and showcase best practices and usage of the tools across a wide variety of scenarios.

These examples are provided as Jupyter notebooks and common utility functions. All examples use PyTorch as the underlying deep learning library.
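
To give a flavor of what the notebooks do, below is a minimal sketch of fine-tuning an ImageNet pre-trained model with plain torchvision. The dataset path, model choice, and hyperparameters are illustrative placeholders, not the exact code used in the notebooks:

```python
# Minimal sketch: fine-tune an ImageNet pre-trained ResNet on a custom
# image-folder dataset. Paths and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("path/to/train", transform=transform)  # placeholder path
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))  # new classification head

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```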

Examples

This repository supports various Computer Vision scenarios which either operate on a single image:

[Figure: some supported CV scenarios]

As well as scenarios, such as action recognition, which take a video sequence as input:

[Figure: action recognition on a video sequence]

Target Audience

Our target audience for this repository includes data scientists and machine learning engineers with varying levels of Computer Vision knowledge as our content is source-only and targets custom machine learning modelling. The utilities and examples provided are intended to be solution accelerators for real-world vision problems.

Getting Started

To get started, navigate to the Setup Guide, which lists instructions on how to set up the compute environment and dependencies needed to run the notebooks in this repo. Once your environment is set up, navigate to the Scenarios folder and start exploring the notebooks. We recommend starting with the image classification notebooks, since they introduce concepts which are also used by the other scenarios (e.g. pre-training on ImageNet).

Alternatively, we support Binder, which makes it easy to try one of our notebooks in a web browser simply by following this link. Note, however, that Binder is free and hence only comes with limited CPU compute power and no GPU support. Expect the notebooks to run very slowly (this is somewhat improved by reducing image resolution to e.g. 60 pixels, at the cost of lower accuracy).
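
As an illustration of the resolution trick mentioned above, a transform along the following lines shrinks images before training (a sketch only; the notebooks' actual pipelines may differ):

```python
# Illustrative only: downscale images to speed up CPU-only runs (e.g. on Binder).
from torchvision import transforms

low_res_transform = transforms.Compose([
    transforms.Resize(60),  # shrink the shorter image side to 60 pixels
    transforms.ToTensor(),
])
```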

Scenarios

The following is a summary of commonly used Computer Vision scenarios that are covered in this repository. For each of the main scenarios ("base"), we provide the tools to effectively build your own model. This ranges from simple tasks, such as fine-tuning a model on your own data, to more complex tasks, such as hard-negative mining (sketched at the end of this section) and even model deployment.

| Scenario | Support | Description |
| --- | --- | --- |
| Classification | Base | Image Classification is a supervised machine learning technique to learn and predict the category of a given image. |
| Similarity | Base | Image Similarity computes a similarity score for a pair of images. Given an image, it allows you to identify the most similar image in a given dataset. |
| Detection | Base | Object Detection is a technique that allows you to detect the bounding box of an object within an image. |
| Keypoints | Base | Keypoint detection can be used to detect specific points on an object. A pre-trained model is provided to detect body joints for human pose estimation. |
| Segmentation | Base | Image Segmentation assigns a category to each pixel in an image. |
| Action recognition | Base | Action recognition identifies which actions are performed in video/webcam footage (e.g. "running", "opening a bottle") and at what respective start/end times. An additional I3D implementation of action recognition can be found under [contrib](contrib). |
| Tracking | Base | Tracking allows you to detect and track multiple objects in a video sequence over time. |
| Crowd counting | Contrib | Counting the number of people in low-crowd-density (e.g. fewer than 10 people) and high-crowd-density (e.g. thousands of people) scenarios. |
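
To give a concrete feel for the detection and keypoint scenarios, here is a minimal standalone inference sketch using torchvision's pre-trained Keypoint R-CNN; the input filename is a placeholder, and the notebooks use the repository's own wrappers instead:

```python
# Illustrative only: detect people and their body joints with a
# pre-trained Keypoint R-CNN from torchvision.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import keypointrcnn_resnet50_fpn

model = keypointrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("person.jpg").convert("RGB")  # placeholder input image
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    predictions = model([tensor])[0]  # dict with boxes, labels, scores, keypoints

# Keep confident detections only.
keep = predictions["scores"] > 0.9
print("boxes:", predictions["boxes"][keep])
print("keypoints:", predictions["keypoints"][keep])  # shape (N, 17, 3): x, y, visibility
```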

We separate the supported CV scenarios into two locations: (i) base: code and notebooks within the "utils_cv" and "scenarios" folders which follow strict coding guidelines, are well tested and maintained; (ii) contrib: code and other assets within the "contrib" folder, mainly covering less common CV scenarios using bleeding edge state-of-the-art approaches. Code in "contrib" is not regularly tested or maintained.
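
As an example of the more complex tasks mentioned above, hard-negative mining boils down to scoring a pool of candidate negatives with the current model and adding the highest-scoring (most confusing) ones to the training set. A minimal sketch, assuming a hypothetical binary classifier `model` and a DataLoader `negative_dl` that yields (image batch, file path) pairs:

```python
# Sketch of hard-negative mining: keep the negatives the current model
# is most confident are positive, so the next training round focuses on them.
# `model` and `negative_dl` are assumed to exist; names are illustrative.
import torch

def mine_hard_negatives(model, negative_dl, device, top_k=100):
    model.eval()
    scores, paths = [], []
    with torch.no_grad():
        for images, image_paths in negative_dl:  # loader yields (tensor, path) pairs
            logits = model(images.to(device))
            pos_conf = torch.softmax(logits, dim=1)[:, 1]  # confidence of "positive"
            scores.extend(pos_conf.cpu().tolist())
            paths.extend(image_paths)
    # Highest-scoring negatives are the "hardest" false positives.
    ranked = sorted(zip(scores, paths), reverse=True)
    return [path for _, path in ranked[:top_k]]
```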

Computer Vision on Azure

Note that for certain computer vision problems, you may not need to build your own models. Instead, pre-built or easily customizable solutions exist on Azure which do not require any custom coding or machine learning expertise. We strongly recommend evaluating if these can sufficiently solve your problem. If these solutions are not applicable, or the accuracy of these solutions is not sufficient, then resorting to more complex and time-consuming custom approaches may be necessary.

The following Microsoft services offer simple solutions to address common computer vision tasks:

  • Vision Services are a set of pre-trained REST APIs which can be called for image tagging, face recognition, OCR, video analytics, and more. These APIs work out of the box and require minimal expertise in machine learning, but have limited customization capabilities. See the various demos available to get a feel for the functionality (e.g. Computer Vision). The service can be used through API calls or through SDKs (available in .NET, Python, Java, Node and Go); a sample REST call is sketched after this list.

  • Custom Vision is a SaaS service to train and deploy a model as a REST API given a user-provided training set. All steps, including image upload, annotation, and model deployment, can be performed using an intuitive UI or through SDKs (available in .NET, Python, Java, Node and Go). Training image classification or object detection models can be achieved with minimal machine learning expertise. Custom Vision offers more flexibility than using the pre-trained Cognitive Services APIs, but requires the user to bring and annotate their own data.
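
For illustration, calling the image-analysis REST API from Python might look roughly as follows; the endpoint, key, image URL, and API version are placeholders to adapt to your own resource:

```python
# Illustrative only: call the Azure Computer Vision "analyze" REST API
# to tag an image. Endpoint, key, and image URL are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
subscription_key = "<your-key>"                                   # placeholder

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Tags,Description"},
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    json={"url": "https://example.com/image.jpg"},  # placeholder image URL
)
response.raise_for_status()
for tag in response.json()["tags"]:
    print(tag["name"], tag["confidence"])
```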

If you need to train your own model, the following services and links provide additional information that is likely useful.

  • Azure Machine Learning service (AzureML) is a service that helps users accelerate the training and deployment of machine learning models. While not specific to computer vision workloads, the AzureML Python SDK can be used for scalable and reliable training and deployment of machine learning solutions to the cloud. We leverage Azure Machine Learning in several of the notebooks within this repository (e.g. deployment to Azure Kubernetes Service); a minimal run-submission sketch follows this list.

  • Azure AI Reference architectures provide a set of examples (backed by code) of how to build common AI-oriented workloads that leverage multiple cloud components. While not computer vision specific, these reference architectures cover several machine learning workloads such as model deployment or batch scoring.
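
As a flavor of the AzureML Python SDK mentioned above, submitting a training run might look roughly like the sketch below (azureml-core v1 style; workspace config, script name, and compute target are placeholders):

```python
# Illustrative only: submit a training script to an AzureML compute cluster
# with the azureml-core (v1) SDK. Names below are placeholders.
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()  # reads config.json downloaded from the portal
experiment = Experiment(workspace=ws, name="cv-training")  # placeholder name

run_config = ScriptRunConfig(
    source_directory=".",
    script="train.py",             # placeholder training script
    compute_target="gpu-cluster",  # placeholder AzureML compute target
)

run = experiment.submit(run_config)
run.wait_for_completion(show_output=True)
```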

Build Status

AzureML Testing

| Build Type | Branch | Status | Branch | Status |
| --- | --- | --- | --- | --- |
| Linux GPU | master | Build Status | staging | Build Status |
| Linux CPU | master | Build Status | staging | Build Status |
| Notebook unit GPU | master | Build Status | staging | Build Status |

Contributing

This project welcomes contributions and suggestions. Please see our contribution guidelines.