* getting started restructure

* update ci

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* test
This commit is contained in:
JS 2019-07-08 12:27:14 -04:00 committed by GitHub
Parent bb0fcfeac8
Commit 806fc69ead
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
6 changed files: 36 additions and 41 deletions

View file

@@ -6,7 +6,7 @@ steps:
   displayName: Add Conda to PATH
 - bash: |
-    conda env create -f classification/environment.yml
+    conda env create -f environment.yml
     source activate cv
     conda env list
   displayName: 'Create and activate conda environment'

View file

@@ -18,11 +18,41 @@ The goal of this repository is to help speed up development of Computer Vision a
 Currently the main investment/priority is around image classification and to a lesser extent image segmentation. We are also actively working on providing a basic (but often sufficiently accurate) example for image similarity. Object detection is scheduled to start once image classification is completed. See the [projects](https://github.com/Microsoft/ComputerVision/projects) and [milestones](https://github.com/Microsoft/ComputerVision/milestones) pages in this repository for more details.
-## Getting Started
+## Getting started
-Instructions on how to get started, as well as our example notebooks and discussions, are provided in the [classification](classification/README.md) subfolder.
+To get started on your local machine:
-Note that for certain Computer Vision problems, ready-made or easily customizable solutions exist which do not require any custom coding or machine learning expertise. We strongly recommend evaluating if these can sufficiently solve your problem. If these solutions are not applicable, or the accuracy of these solutions is not sufficient, then resorting to more complex and time-consuming custom approaches may be necessary.
+1. Install Anaconda with Python >= 3.6. [Miniconda](https://conda.io/miniconda.html) is a quick way to get started.
+1. Clone the repository:
+   ```
+   git clone https://github.com/Microsoft/ComputerVision
+   ```
+1. Install the conda environment. You'll find the `environment.yml` file in the root directory. To build the conda environment:
+   ```
+   conda env create -f environment.yml
+   ```
+1. Activate the conda environment and register it with Jupyter:
+   ```
+   conda activate cv
+   python -m ipykernel install --user --name cv --display-name "Python (cv)"
+   ```
+   If you would like to use [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/), install the `jupyter-webrtc` widget:
+   ```
+   jupyter labextension install jupyter-webrtc
+   ```
+1. Start the Jupyter notebook server:
+   ```
+   jupyter notebook
+   ```
+1. At this point, you should be able to run the notebooks in this repo. Explore our notebooks in the following computer vision domains. Make sure to change the kernel to "Python (cv)".
+   - [/classification](classification#notebooks)
+   - [/similarity](similarity#notebooks)
+   - /object detection [coming_soon]
+   - /segmentation [coming_soon]
+## Services
+Note that for certain Computer Vision problems, you may not need to build your own models. Instead, ready-made or easily customizable solutions exist which do not require any custom coding or machine learning expertise. We strongly recommend evaluating whether these can sufficiently solve your problem. If these solutions are not applicable, or their accuracy is not sufficient, then resorting to more complex and time-consuming custom approaches may be necessary.
 The following Microsoft services offer simple solutions to address common Computer Vision tasks:
@@ -69,4 +99,4 @@ To opt out of tracking, please go to the raw `.ipynb` files and remove the following
 ```sh
 "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/ComputerVision/classification/notebooks/21_deployment_on_azure_container_instances.png)"
 ```
-This URL will be slightly different depending on the file.
+This URL will be slightly different depending on the file.
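The opt-out described above can also be scripted rather than done by hand. The sketch below is not part of the repository; the `strip_pixel` helper and the sample notebook content are assumptions, purely illustrative. It removes any source line containing the Impressions pixel marker from a notebook's JSON structure:

```python
import json

def strip_pixel(notebook: dict, marker: str = "PixelServer") -> dict:
    """Hypothetical helper: drop any source line that references the
    telemetry pixel from a Jupyter notebook's JSON structure."""
    for cell in notebook.get("cells", []):
        cell["source"] = [line for line in cell["source"] if marker not in line]
    return notebook

# Minimal in-memory notebook with one markdown cell containing the pixel.
nb = {
    "cells": [
        {
            "cell_type": "markdown",
            "source": [
                "# Deployment notebook\n",
                "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/x.png)\n",
            ],
        }
    ]
}

cleaned = strip_pixel(nb)
print(json.dumps(cleaned["cells"][0]["source"]))
```

For real files, the same idea applies after `json.load`-ing the raw `.ipynb` and `json.dump`-ing the cleaned result back out.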

View file

@@ -32,35 +32,6 @@ We have also found that some browsers do not render Jupyter widgets correctly. I
 | [22_deployment_on_azure_kubernetes_service.ipynb](notebooks/22_deployment_on_azure_kubernetes_service.ipynb)| Deploys a trained model exposed on a REST API using the Azure Kubernetes Service (AKS). |
 | [23_aci_aks_web_service_testing.ipynb](notebooks/23_aci_aks_web_service_testing.ipynb)| Tests the deployed models on either ACI or AKS. |
-## Getting started
-To get started on your local machine:
-1. Install Anaconda with Python >= 3.6. [Miniconda](https://conda.io/miniconda.html) is a quick way to get started.
-1. Clone the repository
-   ```
-   git clone https://github.com/Microsoft/ComputerVision
-   cd ComputerVision/classification
-   ```
-1. Install the conda environment, you'll find the `environment.yml` file in the `classification` subdirectory. From there:
-   ```
-   conda env create -f environment.yml
-   ```
-1. Activate the conda environment and register it with Jupyter:
-   ```
-   conda activate cv
-   python -m ipykernel install --user --name cv --display-name "Python (cv)"
-   ```
-   If you would like to use [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/), install `jupyter-webrtc` widget:
-   ```
-   jupyter labextension install jupyter-webrtc
-   ```
-1. Start the Jupyter notebook server
-   ```
-   cd notebooks
-   jupyter notebook
-   ```
-1. Start with the [00_webcam](notebooks/00_webcam.ipynb) image classification notebook under the `notebooks` folder. Make sure to change the kernel to "Python (cvbp)".
 ## Azure-enhanced notebooks
 Azure products and services are used in certain notebooks to enhance the efficiency of developing classification systems at scale.

View file

View file

@@ -6,7 +6,6 @@ The majority of state-of-the-art systems for image similarity use DNNs to comput
A major difference between modern image similarity approaches is how the DNN is trained. A simple but quite powerful approach is to use a standard image classification loss - this is the approach taken in this repository, and explained in the [classification](../classification/README.md) folder. More accurate similarity measures are based on DNNs which are trained explicitly for image similarity, such as the [FaceNet](https://arxiv.org/pdf/1503.03832.pdf) work which uses a Siamese network architecture. FaceNet-like approaches will be added to this repository at a later point.
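The comparison step described above can be illustrated with a small sketch. This is not code from the repository; the function names and the toy embeddings (standing in for a DNN's penultimate-layer features) are assumptions. Given feature vectors for a query image and a reference set, images are ranked by cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (e.g. DNN embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query, references):
    """Rank reference images by similarity to the query embedding."""
    scored = [(name, cosine_similarity(query, vec)) for name, vec in references.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Toy 3-dimensional embeddings; real embeddings would have hundreds of dimensions.
query = [0.9, 0.1, 0.0]
references = {
    "dog_1.jpg": [0.8, 0.2, 0.1],
    "cat_1.jpg": [0.1, 0.9, 0.2],
}

ranking = rank_by_similarity(query, references)
print(ranking[0][0])  # the reference image most similar to the query
```

A classification-loss-trained network tends to produce embeddings where this ranking already works reasonably well; the FaceNet-style approaches mentioned above train the embedding directly so that similar images land closer together.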
## Notebooks
We provide several notebooks to show how image similarity algorithms can be designed and evaluated.
@@ -16,11 +15,6 @@ We provide several notebooks to show how image similarity algorithms can be desi
| [00_webcam.ipynb](./notebooks/00_webcam.ipynb)| Quick start notebook which demonstrates how to build an image retrieval system using a single image or webcam as input.
| [01_training_and_evaluation_introduction.ipynb](./notebooks/01_training_and_evaluation_introduction.ipynb)| Notebook which explains the basic concepts around model training and evaluation, based on using DNNs trained for image classification.|
## Getting Started
To set up on your local machine, follow the [Getting Started](../classification/#getting-started) section in the image classification folder. Furthermore, basic image classification knowledge, as explained in the notebooks [01_training_introduction.ipynb](../classification/notebooks/01_training_introduction.ipynb) and [03_training_accuracy_vs_speed.ipynb](../classification/notebooks/03_training_accuracy_vs_speed.ipynb), is assumed.
## Coding guidelines
See the [coding guidelines](../classification/#coding-guidelines) in the image classification folder.

View file

@@ -14,7 +14,7 @@ def test_generate_yaml():
     """Tests creation of deployment-specific yaml file
     from existing image_classification/environment.yml"""
     generate_yaml(
-        directory=os.path.join(str(root_path()), "classification"),
+        directory=str(root_path()),
        ref_filename="environment.yml",
        needed_libraries=["fastai", "pytorch"],
        conda_filename="mytestyml.yml",