initial commit
|
@ -0,0 +1,3 @@
|
|||
{
|
||||
"CurrentProjectSetting": null
|
||||
}
|
59
README.md
|
@ -1,31 +1,46 @@
|
|||
# Visual Studio Tools for AI
|
||||
Visual Studio Tools for AI is an extension to build, test, and deploy Deep Learning / AI solutions. It seamlessly integrates with Azure Machine Learning for robust experimentation capabilities, including but not limited to submitting data preparation and model training jobs transparently to different compute targets. Additionally, it provides support for custom metrics and run history tracking, enabling data science reproducibility and auditing. Enterprise-ready collaboration lets you work securely on projects with other people.
|
||||
|
||||
# Contributing
|
||||
Get started with deep learning using [Microsoft Cognitive Toolkit (CNTK)](http://www.microsoft.com/en-us/cognitive-toolkit), [Google TensorFlow](https://www.tensorflow.org), or other deep-learning frameworks today.
|
||||
|
||||
## Develop, debug and deploy deep learning models and AI solutions
|
||||
Use the productivity features of Visual Studio to accelerate AI innovation today. Use built-in code editor features like syntax highlighting, IntelliSense and text auto formatting. You can interactively test your deep learning application in your local environment using step-through debugging on local variables and models.
|
||||
|
||||
This project welcomes contributions and suggestions. Most contributions require you to agree to a
|
||||
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
|
||||
the rights to use your contribution. For details, visit https://cla.microsoft.com.
|
||||
![deep learning ide](/docs/media/ide.png)
|
||||
|
||||
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
|
||||
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
|
||||
provided by the bot. You will only need to do this once across all repos using our CLA.
|
||||
[Learn more about creating deep learning projects in Visual Studio](/docs/quickstart-02-project-from-template.md)
|
||||
|
||||
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
|
||||
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
|
||||
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
|
||||
## Get started quickly with the Azure Machine Learning Sample Gallery
|
||||
Visual Studio Tools for AI is integrated with Azure Machine Learning to make it easy to browse through a gallery of sample experiments using CNTK, TensorFlow, MMLSpark and more.
|
||||
|
||||
# Legal Notices
|
||||
![sample explorer](/docs/media/gallery.png)
|
||||
|
||||
Microsoft and any contributors grant you a license to the Microsoft documentation and other content
|
||||
in this repository under the [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/legalcode),
|
||||
see the [LICENSE](LICENSE) file, and grant you a license to any code in the repository under the [MIT License](https://opensource.org/licenses/MIT), see the
|
||||
[LICENSE-CODE](LICENSE-CODE) file.
|
||||
[Learn more about creating projects from the sample gallery](/docs/quickstart-00-project-from-AzureMachineLearning-gallery.md)
|
||||
|
||||
Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation
|
||||
may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries.
|
||||
The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks.
|
||||
Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.
|
||||
## Scale out deep learning model training and/or inferencing to the cloud
|
||||
This extension makes it easy to train models on your local computer, or you can submit jobs to the cloud through our integration with Azure Machine Learning. You can submit jobs to different compute targets such as Spark clusters, Azure GPU virtual machines, and more.
|
||||
|
||||
![submit job](/docs/media/submitjobs.png)
|
||||
|
||||
Privacy information can be found at https://privacy.microsoft.com/en-us/
|
||||
[Learn more about training models in the cloud](/docs/tensorflow-vm.md)
|
||||
|
||||
Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents,
|
||||
or trademarks, whether by implication, estoppel or otherwise.
|
||||
# Supported Operating Systems
|
||||
Currently, this extension supports 64-bit Windows operating systems.
|
||||
|
||||
# Support
|
||||
Support for this extension is provided on our [GitHub Issue Tracker](http://github.com/Microsoft/vs-tools-for-ai/issues). You can submit a bug report or a feature suggestion, or participate in discussions.
|
||||
|
||||
## Code of Conduct
|
||||
This project has adopted the [Microsoft Open Source Code of Conduct]. For more information see the [Code of Conduct FAQ] or contact [opencode@microsoft.com] with any additional questions or comments.
|
||||
|
||||
## Privacy Statement
|
||||
The [Microsoft Enterprise and Developer Privacy Statement] describes the privacy statement of this software.
|
||||
|
||||
## License
|
||||
This extension is [licensed under the MIT License] and subject to the terms of the [End User License Agreement](/docs/license.txt).
|
||||
|
||||
[Microsoft Enterprise and Developer Privacy Statement]:https://go.microsoft.com/fwlink/?LinkId=786907&lang=en7
|
||||
[licensed under the MIT License]: /LICENSE
|
||||
[Microsoft Open Source Code of Conduct]:https://opensource.microsoft.com/codeofconduct/
|
||||
[Code of Conduct FAQ]:https://opensource.microsoft.com/codeofconduct/faq/
|
||||
[opencode@microsoft.com]:mailto:opencode@microsoft.com
|
||||
|
|
|
@ -0,0 +1,41 @@
|
|||
- name: Visual Studio Tools for AI
|
||||
href: index.yml
|
||||
items:
|
||||
- name: Overview
|
||||
items:
|
||||
- name: About AI Tools
|
||||
href: about-ai-tools.md
|
||||
- name: Installation
|
||||
href: installation.md
|
||||
- name: Quickstarts
|
||||
items:
|
||||
- name: TensorFlow + Python
|
||||
href: tensorflow-local.md
|
||||
- name: Tutorials
|
||||
items:
|
||||
- name: TensorFlow + Azure Deep Learning VM
|
||||
href: tensorflow-vm.md
|
||||
- name: Concepts
|
||||
items:
|
||||
- name: Cloud compute targets
|
||||
href: cloud-targets.md
|
||||
- name: How-to guides
|
||||
items:
|
||||
- name: Train your model
|
||||
href: train-your-model.md
|
||||
- name: Manage projects
|
||||
href: manage-projects.md
|
||||
- name: Manage storage
|
||||
href: manage-storage.md
|
||||
- name: Samples
|
||||
items:
|
||||
- name: GitHub
|
||||
href: https://github.com/Microsoft/samples-for-ai
|
||||
- name: Machine Learning Gallery
|
||||
href: https://gallery.cortanaintelligence.com/projects
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
@ -0,0 +1,12 @@
|
|||
# Monitoring GPU Utilization
|
||||
To monitor GPU utilization of remote Linux machines:
|
||||
|
||||
1. In **Server Explorer**, expand **Remote Machines**
|
||||
2. **Right click** the remote machine you want to monitor
|
||||
|
||||
![gpu heatmap](/media/gpu-heatmap-0.png)
|
||||
|
||||
3. Click **Show Heat Map**
|
||||
|
||||
![gpu heatmap](/media/heatmap.png)
|
||||
|
|
@ -0,0 +1,45 @@
|
|||
### YamlMime:YamlDocument
|
||||
documentType: LandingData
|
||||
title: AI Tools for Visual Studio
|
||||
metadata:
|
||||
document_id: 83ceabb5-fb4f-a709-3fd2-37d024409f31
|
||||
title: Visual Studio Tools for AI | Microsoft Docs
|
||||
meta.description:
|
||||
services: ai-tools-visual-studio
|
||||
author: lisawong19
|
||||
manager: routlaw
|
||||
ms.service: visual studio
|
||||
ms.tgt_pltfrm: na
|
||||
ms.devlang: na
|
||||
ms.topic: landing-page
|
||||
ms.date: 10-25-2017
|
||||
ms.author: liwong
|
||||
abstract:
|
||||
description: AI Tools for Visual Studio is a cross-platform integrated development environment to build, test, and deploy deep learning solutions. Learn how to use AI Tools with our quickstarts, tutorials, and samples.
|
||||
sections:
|
||||
- title: 5-Minute Quickstarts
|
||||
items:
|
||||
- type: paragraph
|
||||
text: 'Learn how to run a deep learning solution with:'
|
||||
- type: list
|
||||
style: icon48
|
||||
items:
|
||||
- image:
|
||||
src: media/Tensorflow_logo.png
|
||||
text: TensorFlow and Python
|
||||
href: tensorflow-local.md
|
||||
- title: Step-by-Step Tutorials
|
||||
items:
|
||||
- type: paragraph
|
||||
text: 'Learn how to build a deep learning solution:'
|
||||
- type: list
|
||||
style: ordered
|
||||
items:
|
||||
- html: <a href="/visualstudio/ai/tensorflow-vm">Run a TensorFlow model in the cloud</a>
|
||||
- title: Samples
|
||||
items:
|
||||
- type: list
|
||||
style: unordered
|
||||
className: spaced noBullet
|
||||
items:
|
||||
- html: <a href="https://github.com/Microsoft/samples-for-ai">Samples Repository</a>
|
|
@ -0,0 +1,27 @@
|
|||
---
|
||||
title: AI Tools for Visual Studio
|
||||
description: Installation of AI Tools for Visual Studio
|
||||
keywords: ai, visual studio
|
||||
author: lisawong19
|
||||
ms.author: liwong
|
||||
manager: routlaw
|
||||
ms.date: 11/13/2017
|
||||
ms.topic: article
|
||||
ms.technology: visual studio
|
||||
ms.devlang: multiple
|
||||
ms.service: multiple
|
||||
---
|
||||
|
||||
# Installing Visual Studio Tools for AI
|
||||
|
||||
This extension works with [Visual Studio](https://docs.microsoft.com/en-us/visualstudio/) Community edition or higher.
|
||||
|
||||
To install, download the extension from the [Visual Studio Marketplace](http://aka.ms/vstoolsforai) or install it from within Visual Studio:
|
||||
|
||||
1. **Tools** > **Extensions and Updates**
|
||||
|
||||
![extensions](media/installation/extensions.png)
|
||||
|
||||
1. **Search** in the upper right-hand corner for "Tools for AI"
|
||||
2. Select **Visual Studio Tools for AI**
|
||||
3. Click **Download**
|
|
@ -0,0 +1,11 @@
|
|||
# View recent job performance and details
|
||||
Once the jobs are submitted, you can view the list of jobs to see their status, duration and more.
|
||||
|
||||
1. In the **Server Explorer** expand the specific compute context
|
||||
1. Double-click **Jobs**
|
||||
1. You will see the list of jobs submitted to that compute context.
|
||||
1. Select a specific **Job** in the list to view details
|
||||
|
||||
![monitor jobs](/media/monitor-jobs.png)
|
||||
|
||||
> Job history for jobs submitted to Linux VMs is stored on the VM in the /tmp directory, so it is cleared whenever the VM is rebooted. For a permanent record of your job history, configure your VM as a compute context in Azure Machine Learning and then submit the job to Azure Machine Learning (selecting your VM as the compute context).
|
|
@ -0,0 +1,110 @@
|
|||
# Manage projects
|
||||
|
||||
## Project templates
|
||||
|
||||
OpenMind Studio provides the following project templates:
|
||||
|
||||
1. "From Existing Python code" template for creating applications by importing existing Python files in a folder.
|
||||
|
||||
1. "Caffe2 Python Application" template for creating Caffe2 applications with Python language.
|
||||
|
||||
1. "CNTK BrainScript Application" template for creating CNTK applications with BrainScript language.
|
||||
|
||||
1. "CNTK Python Application" template for creating CNTK applications with Python language.
|
||||
|
||||
1. "Keras Application (CNTK backend)" template for creating Keras applications with Python language using CNTK backend.
|
||||
|
||||
1. "Keras Application (TensorFlow backend)" template for creating Keras applications with Python language using TensorFlow backend.
|
||||
|
||||
1. "Keras Application (Theano backend)" template for creating Keras applications with Python language using Theano backend.
|
||||
|
||||
1. "TensorFlow Application" template for creating TensorFlow applications with Python language.
|
||||
|
||||
1. "Theano Application" template for creating Theano applications with Python language.
|
||||
|
||||
## Create New Projects from Templates <a id="create_new_project"></a>
|
||||
|
||||
To create a new OpenMind project, go to ***File > New > Project***.
|
||||
In the left pane, OpenMind projects can be found under ***Installed > Templates > OpenMind***.
|
||||
|
||||
<center>![](images/image18.png)</center>
|
||||
<center>Figure: Create a new OpenMind project.</center>
|
||||
|
||||
## Create New Projects by Importing Existing Code <a id="import_new_project"></a>
|
||||
|
||||
OpenMind Studio provides a wizard to create a new deep learning project by importing existing source files while preserving their folder hierarchy.
|
||||
|
||||
<center>![](images/image19.png)</center>
|
||||
<center>Figure: Create a new project by importing existing code.</center>
|
||||
|
||||
<center>![](images/image20.png)</center>
|
||||
<center>Figure: Import wizard.</center>
|
||||
|
||||
## Setup Projects <a id="setup_project"></a>
|
||||
|
||||
### Project Properties <a id="project_property"></a>
|
||||
|
||||
Right-click the project node and select ***Properties*** from the context menu.
Project-scoped settings are then shown on the following pages:
|
||||
|
||||
1. "General" property page:
|
||||
|
||||
<center>![](images/image21.png)</center>
|
||||
<center>Figure: "General" property page.</center>
|
||||
|
||||
a. "Startup File": the name of the file to start when launching your application.
|
||||
|
||||
b. "Working Directory": working directory for debugging and executing.
|
||||
|
||||
c. "Windows Application": is Windows Application?
|
||||
|
||||
d. "Interpreter:"
|
||||
|
||||
1. "Debug" property page:
|
||||
|
||||
<center>![](images/image22.png)</center>
|
||||
<center>Figure: "Debug" property page.</center>
|
||||
|
||||
a. "Search Paths": users could specify additional directories which are added to Python sys.path for making libraries available for importing.
|
||||
|
||||
b. "Script Arguments": command line arguments to be passed into the application on project start.
|
||||
|
||||
c. "Interpreter Arguments": command line arguments to be passed to the interpreter.
|
||||
|
||||
d. "Interpreter Path": the interpreter which is used to start the project.
|
||||
|
||||
e. "Environment Variables": this Specifies environment variables to be set in the spawned process in the form:
|
||||
|
||||
NAME1=value1
|
||||
NAME2=value2
|
||||
…
|
||||
|
||||
f. "Enable native code debugging": users could debug into native code written by C/C++.
|
||||
|
||||
1. "Publish" property page:
|
||||
|
||||
<center>![](images/image23.png)</center>
|
||||
<center>Figure: "Publish" property page.</center>
|
||||
|
||||
a. "Publish Location": location where the project could be published to.
|
||||
|
||||
1. "Misc" property page:
|
||||
|
||||
Keras projects have an additional "Misc" property page for users to change backends.
|
||||
|
||||
<center>![](images/image24.png)</center>
|
||||
<center>Figure: "Misc" property page for Keras project.</center>
|
||||
|
||||
a. "Backend": users could switch to another backend.
|
||||
|
||||
### Include multiple projects in a Solution <a id="include_multiple_projects"></a>
|
||||
|
||||
To add a new project to the solution, right-click on the solution in the ***Solution Explorer*** and select ***Add***.
|
||||
|
||||
A solution has a startup project; when the solution is run, that project's startup script is executed.
|
||||
|
||||
### Include multiple scripts in one project <a id="include_multiple_scripts"></a>
|
||||
|
||||
To create a new script within a project, right-click on the project in Solution Explorer and select ***Add > New Item***. To add an existing script to an OpenMind project, right-click on the project and select ***Add > Existing Item***.
|
||||
|
||||
Set one of the scripts as the startup script by right-clicking it and selecting "**Set as Startup File**". The startup file serves as the main entry point of the project; for example, when you press F5 to run the project, the startup script is executed. The startup file node is shown in bold.
|
|
@ -0,0 +1,18 @@
|
|||
# Browse storage to upload data or download models and logs
|
||||
|
||||
You can browse all storage on the remote machine or Azure file share to upload data or download models and logs. If you want to access the logs and job outputs for a specific job, you can do that in the job browser as well.
|
||||
|
||||
## To access all data on the remote machine or file share
|
||||
1. Open the **Server Explorer**
|
||||
2. Expand the remote machine or Batch AI compute context
|
||||
3. Right-click **Storage**, then click **Browse**
|
||||
|
||||
![storage](/media/browse-storage.png)
|
||||
|
||||
## To access job specific data on the remote machine or file share
|
||||
1. Open the [Job History](job-history.md)
|
||||
2. Select the job
|
||||
3. Click **Working Folder**, or click **StdOut** / **StdErr** for quick access to these important log files
|
||||
|
||||
![storage](/media/job-workingfolder.png)
|
||||
|
|
@ -0,0 +1,216 @@
|
|||
# Preparing your environment
|
||||
|
||||
Before training deep learning models on your local computer, make sure you have the applicable prerequisites installed. This includes having the latest drivers and libraries for your NVIDIA GPU (if you have one). You should also ensure you have installed Python, Python libraries such as NumPy and SciPy, and the appropriate deep learning frameworks, such as Microsoft Cognitive Toolkit (CNTK), TensorFlow, Caffe2, MXNet, Keras, Theano, PyTorch, and/or Chainer.
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> The software introductions in the following subsections are excerpted from the respective homepages.
|
||||
|
||||
## NVIDIA GPU driver, CUDA and cuDNN
|
||||
|
||||
### NVIDIA GPU driver
|
||||
|
||||
Deep learning frameworks take advantage of NVIDIA GPUs to train models with greater speed, accuracy, and scale. If your computer has an NVIDIA GPU, visit [here](http://www.nvidia.com/Download/index.aspx) or use your operating system's update mechanism to install the latest driver.
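If a driver is already installed, a quick way to confirm it (and to see which GPUs are detected) is the `nvidia-smi` utility that ships with the driver; this is just a sanity check, not part of the official setup:

```bash
# prints the installed driver version and the GPUs it manages
nvidia-smi
```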
|
||||
|
||||
### CUDA
|
||||
|
||||
[CUDA](https://developer.nvidia.com/cuda-zone) is a parallel computing platform and programming model invented by NVIDIA.
|
||||
It enables dramatic increases in computing performance by harnessing the power of the GPU.
|
||||
Currently, CUDA Toolkit 8.0 is required by deep learning frameworks.
|
||||
|
||||
To install CUDA
|
||||
|
||||
- Visit this [site](https://developer.nvidia.com/cuda-80-ga2-download-archive), download CUDA and install it.
|
||||
- Make sure to install the CUDA runtime libraries, and then add the CUDA binary path to the %PATH% or $PATH environment variable.
|
||||
- On Windows, this path is "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin" by default.
|
||||
|
||||
![install CUDA on Windows](media/prepare-local-machine/install_cuda_win.png)
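As a quick sanity check (a sketch assuming the default install location above), you can confirm that the toolkit is installed and that its bin directory is on %PATH%:

```cmd
:: reports the installed CUDA compiler version
nvcc --version

:: confirms the CUDA 8.0 runtime DLL can be resolved through %PATH%
where cudart64_80.dll
```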
|
||||
|
||||
### cuDNN
|
||||
|
||||
[cuDNN](https://developer.nvidia.com/cudnn) (CUDA Deep Neural Network library) is a GPU-accelerated library of primitives for deep neural networks by NVIDIA. cuDNN v6 is required by the latest deep learning frameworks.
|
||||
|
||||
To install cuDNN
|
||||
- Visit [here](https://developer.nvidia.com/rdp/cudnn-download) to download and install the latest package.
|
||||
- Make sure to add the directory containing the cuDNN binary to the %PATH% or $PATH environment variable.
|
||||
- On Windows, you can copy cudnn64_6.dll to "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin".
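For example, a minimal sketch of that copy step, assuming the downloaded archive was extracted to a folder named `cudnn` (the folder name is hypothetical):

```cmd
:: the cuDNN archive contains a cuda\bin subfolder holding the DLL
copy cudnn\cuda\bin\cudnn64_6.dll "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin"
```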
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> Previous deep learning frameworks such as CNTK 2.0 and TensorFlow 1.2.1 need cuDNN v5.1.
|
||||
> However, you can install multiple cuDNN versions together.
|
||||
|
||||
|
||||
## Python
|
||||
|
||||
Python has been the primary programming language for deep learning applications.
|
||||
A **64-bit** Python distribution is required, and [Python 3.5.4](https://www.python.org/downloads/release/python-354/) is recommended for the best compatibility.
|
||||
|
||||
### To install Python on Windows
|
||||
- We suggest installing the Python launcher for yourself only and adding Python to the %PATH% environment variable.
|
||||
- Make sure to install pip, the package management system used to install and manage software packages written in Python.
|
||||
|
||||
Deep learning frameworks rely on pip for their own installation.
|
||||
|
||||
![install Python on Windows](media/prepare-local-machine/install_python_win.png)
|
||||
|
||||
Then, we need to verify whether Python 3.5 is installed correctly, and upgrade pip to the latest version by executing the following commands in a terminal:
|
||||
|
||||
- **Windows**
|
||||
```cmd
|
||||
C:\Users\test>python -V
|
||||
Python 3.5.4
|
||||
|
||||
C:\Users\test>pip3.5 -V
|
||||
pip 9.0.1 from c:\users\test\appdata\local\programs\python\python35\lib\site-packages (python 3.5)
|
||||
|
||||
C:\Users\test>python -m pip install -U pip
|
||||
```
|
||||
|
||||
- **macOS**
|
||||
```bash
|
||||
MyMac:~ test$ python3.5 -V
|
||||
Python 3.5.4
|
||||
|
||||
MyMac:~ test$ pip3.5 -V
|
||||
pip 9.0.1 from /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages (python 3.5)
|
||||
|
||||
MyMac:~ test$ python3.5 -m pip install -U pip
|
||||
```
|
||||
|
||||
## Python on Visual Studio
|
||||
|
||||
Python is fully supported in Visual Studio through extensions.
|
||||
See [Python for Visual Studio Tools](https://docs.microsoft.com/en-us/visualstudio/python/installation) for installation details.
|
||||
|
||||
## NumPy and SciPy
|
||||
|
||||
- **NumPy** is a general-purpose array-processing package designed to efficiently manipulate large multi-dimensional arrays of arbitrary records without sacrificing too much speed for small multi-dimensional arrays.
|
||||
|
||||
- **SciPy** (pronounced "Sigh Pie") is open-source software for mathematics, science, and engineering, depending on NumPy.
|
||||
Starting from version 1.0.0, SciPy now has official prebuilt wheel package for Windows.
|
||||
|
||||
To install NumPy and SciPy, run the following command in a terminal:
|
||||
```bash
|
||||
pip3.5 install -U numpy scipy
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> The above command will upgrade existing old or unofficial (e.g. third party packages from http://www.lfd.uci.edu/~gohlke/pythonlibs/ for Windows) NumPy and SciPy to the latest official ones.
|
||||
|
||||
## Microsoft Cognitive Toolkit (CNTK)
|
||||
|
||||
The [Microsoft Cognitive Toolkit](https://cntk.ai) is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. CNTK supports both Python and BrainScript programming languages.
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> CNTK currently does not support macOS.
|
||||
|
||||
To install the CNTK Python package, see [how to install CNTK](https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine).
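For recent CNTK 2.x releases, the CPU-only package can usually be installed directly from PyPI (an assumption for this setup; the linked page lists the exact wheel URLs, including GPU builds):

```bash
pip3.5 install cntk
```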
|
||||
|
||||
## TensorFlow
|
||||
|
||||
[TensorFlow](https://www.tensorflow.org/) is an open source software library for numerical computation using data flow graphs.
|
||||
Refer to the [installation guide](https://www.tensorflow.org/install/) for detailed instructions.
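For a typical setup, installation via pip looks like the following (a sketch; see the linked guide for version pinning and platform-specific notes):

```bash
# CPU-only build
pip3.5 install tensorflow

# GPU build (requires CUDA 8.0 and cuDNN 6 as described above)
pip3.5 install tensorflow-gpu
```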
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> As of version 1.2, TensorFlow no longer provides GPU support for macOS.
|
||||
|
||||
## Caffe2
|
||||
|
||||
[Caffe2](https://caffe2.ai/) is a lightweight, modular, and scalable deep learning framework.
|
||||
Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.
|
||||
|
||||
Currently, there's no prebuilt Caffe2 python wheel package available.
|
||||
|
||||
Please visit [here](https://caffe2.ai/docs/getting-started.html) to build from source code.
|
||||
|
||||
## MXNet
|
||||
|
||||
[Apache MXNet (incubating)](https://mxnet.incubator.apache.org/) is a deep learning framework designed for both efficiency and flexibility.
|
||||
It allows you to **mix** [symbolic and imperative programming](http://mxnet.io/architecture/index.html#deep-learning-system-design-concepts) to maximize efficiency and productivity.
|
||||
|
||||
To install MXNet, run the following command in a terminal:
|
||||
- With GPU
|
||||
```bash
|
||||
pip3.5 install mxnet-cu80==0.12.0
|
||||
```
|
||||
- Without GPU
|
||||
```bash
|
||||
pip3.5 install mxnet==0.12.0
|
||||
```
|
||||
|
||||
## Keras
|
||||
|
||||
[Keras](https://keras.io/) is a high-level neural networks API, written in Python and capable of running on top of CNTK, TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
|
||||
|
||||
To install Keras, please run the following command in a terminal:
|
||||
```bash
|
||||
pip3.5 install Keras==2.0.9
|
||||
```
|
||||
|
||||
## Theano
|
||||
|
||||
[Theano](http://deeplearning.net/software/theano/) is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
|
||||
|
||||
To install Theano, please run the following command in a terminal:
|
||||
```bash
|
||||
pip3.5 install Theano==0.9.0
|
||||
```
|
||||
|
||||
## PyTorch
|
||||
|
||||
[PyTorch](http://pytorch.org/) is a python package that provides two high-level features:
|
||||
- Tensor computation (like numpy) with strong GPU acceleration
|
||||
- Deep Neural Networks built on a tape-based autograd system
|
||||
|
||||
To install PyTorch, please run the following command in a terminal:
|
||||
|
||||
- **Windows**
|
||||
- There is no official wheel package yet. You may download a third-party [Anaconda PyTorch package](https://anaconda.org/peterjc123/pytorch/0.2.1/download/win-64/pytorch-0.2.1-py35h24644ff_0.2.1cu80.tar.bz2).
|
||||
- Decompress it to your home directory, e.g. "C:\Users\test\pytorch".
|
||||
- Add "C:\Users\test\pytorch\Lib\site-packages" to the %PYTHONPATH% environment variable.
|
||||
|
||||
- **macOS**
|
||||
```bash
|
||||
pip3.5 install http://download.pytorch.org/whl/torch-0.2.0.post3-cp35-cp35m-macosx_10_7_x86_64.whl
|
||||
```
|
||||
> [!NOTE]
|
||||
>
|
||||
> macOS binaries don't support CUDA; install from source if CUDA is needed.
|
||||
|
||||
- **Linux**
|
||||
```bash
|
||||
pip3.5 install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl
|
||||
```
|
||||
> [!NOTE]
|
||||
>
|
||||
> This single package supports both GPU and CPU.
|
||||
|
||||
Finally, install torchvision on non-Windows:
|
||||
```bash
|
||||
pip3.5 install torchvision
|
||||
```
|
||||
|
||||
## Chainer
|
||||
|
||||
[Chainer](https://chainer.org/) is a Python-based deep learning framework aiming at flexibility.
|
||||
It provides automatic differentiation APIs based on the **define-by-run approach** (a.k.a. dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks.
|
||||
|
||||
To enable CUDA support, install [CuPy](https://github.com/cupy/cupy):
|
||||
```bash
|
||||
pip3.5 install cupy
|
||||
```
|
||||
|
||||
> [!NOTE]
|
||||
>
|
||||
> On Windows, you need the **2015** version of [Microsoft Visual Studio](https://www.visualstudio.com/)
|
||||
or [Microsoft Visual C++ Build Tools](http://landinghub.visualstudio.com/visual-cpp-build-tools)
|
||||
to compile CuPy with CUDA 8.0.
|
||||
|
||||
To install Chainer, please run the following command in a terminal:
|
||||
```bash
|
||||
pip3.5 install chainer==3.0.0
|
||||
```
|
|
@ -0,0 +1,62 @@
|
|||
---
|
||||
title: "Quickstart: Create a Python project from a template in Visual Studio | Microsoft Docs"
|
||||
ms.custom: ""
|
||||
ms.date: 09/25/2017
|
||||
ms.reviewer: ""
|
||||
ms.suite: ""
|
||||
ms.technology:
|
||||
- "devlang-python"
|
||||
ms.devlang: python
|
||||
ms.tgt_pltfrm: ""
|
||||
ms.topic: "article"
|
||||
ms.assetid: 3f4b66c5-3ad8-4067-90cd-0100205700a7
|
||||
caps.latest.revision: 1
|
||||
author: "kraigb"
|
||||
ms.author: "kraigb"
|
||||
manager: ghogen
|
||||
---
|
||||
|
||||
# Quickstart: create an AI project from the Azure Machine Learning Gallery in Visual Studio
|
||||
|
||||
Azure Machine Learning is integrated with Visual Studio Tools for AI. You can use it to submit machine learning jobs to remote compute targets like Azure virtual machines, Spark clusters, and more. Learn more about [Azure Machine Learning Experimentation](https://docs.microsoft.com/en-us/azure/machine-learning/preview/experimentation-service-configuration)
|
||||
|
||||
Once you've [installed Visual Studio Tools for AI](installation.md), it's easy to create a new Python project using pre-made recipes in the Azure Machine Learning Sample Gallery.
|
||||
|
||||
> [!NOTE]
> Azure Machine Learning Workbench must be installed. To install it, see the [Azure Machine Learning installation quickstart](https://docs.microsoft.com/en-us/azure/machine-learning/preview/quickstart-installation).
|
||||
|
||||
1. Launch Visual Studio. Open the **Server Explorer** by opening the **AI Tools** menu and choosing **Select Cluster**
|
||||
|
||||
![Cluster chooser](media/select-cluster.png)
|
||||
|
||||
1. Sign in to your Azure Machine Learning subscription by right-clicking the **Azure Machine Learning** node in the Server Explorer, then select **Login** and follow the directions.
|
||||
|
||||
![login](media/azureml-login.png)
|
||||
|
||||
2. Select **AI Tools > Azure Machine Learning Sample Gallery**.
|
||||
|
||||
![Sample gallery](media/gallery.png)
|
||||
|
||||
1. For this Quickstart, select the "**MNIST using TensorFlow**" sample and click **Install**. Provide the
|
||||
2.
|
||||
- **Resource Group**: Azure resource group where your metadata will be stored
|
||||
- **Account**: Azure Machine Learning experimentation Account
|
||||
- **Workspace**: Azure Machine Learning workspace
|
||||
- **Project Type**: The machine learning framework. In this case choose **TensorFlow**
|
||||
    - **Add to Solution**: determines whether to add the project to your current Visual Studio solution or to create and open a new solution
|
||||
- **Project Path**: Location to save the code
|
||||
- **Project Name**: Type **TensorFlowMNIST**
|
||||
|
||||
|
||||
![Resulting project when using the Python Application template](media/new-AzureSampleProject.png)
|
||||
|
||||
1. Visual Studio creates the project file (a `.pyproj` file on disk) along with other files defined in the sample. With the "MNIST" template, the project contains several files.
|
||||
|
||||
![mnist](media/azml-mnist.png)
|
||||
|
||||
1. Submit the job to Azure Machine Learning.
|
||||
|
||||
![mnist](media/submit-azml.png)
|
||||
|
||||
1. Run in a Docker container or on your local machine
|
||||
|
||||
![mnist](media/azml-local.png)
|
|
@ -0,0 +1,32 @@
|
|||
# Quickstart: Create an AI project from existing code
|
||||
|
||||
Once you've [installed Visual Studio Tools for AI](installation.md), it's easy to bring existing Python code into a Visual Studio project.
|
||||
|
||||
> [!Important]
|
||||
> The process described here does not move or copy the original source files. If you want to work with a copy, duplicate the folder first.
|
||||
|
||||
1. Launch Visual Studio and select **File > New > Project**.
|
||||
|
||||
1. In the **New Project** dialog, search for "**AI Tools**", select the "**From Existing Python code**" template, give the project a name and location, and select **OK**.
|
||||
|
||||
![New Project from Existing Code, step 1](../media/new-ai-project.png)
|
||||
|
||||
1. In the wizard that appears, set the path to your existing code, set a filter for file types, and specify any search paths that your project requires, then select **OK**. If you don't know what search paths are, leave that field blank.
|
||||
|
||||
|
||||
![New Project from Existing Code, step 2](../media/azurebatch-newproject.png)
|
||||
|
||||
> If your existing code is part of an Azure Machine Learning project, check "**Is Azure Machine Learning folder**" so that important Azure Machine Learning configuration details are preserved, such as which Experimentation account and Workspace to use, the compute contexts, and more.
|
||||
|
||||
1. To set a startup file, locate the file in Solution Explorer, right-click, and select **Set as Startup File**.
|
||||
|
||||
1. If desired, run the program by pressing Ctrl+F5 or selecting **Debug > Start Without Debugging**.
|
||||
|
||||
## Next Steps
|
||||
|
||||
> [!div class="nextstepaction"]
|
||||
> [Tutorial: Working with Python in Visual Studio](https://docs.microsoft.com/en-us/visualstudio/python/vs-tutorial-01-00)
|
||||
|
||||
## See Also
|
||||
|
||||
- [Creating an environment for an existing Python interpreter](https://docs.microsoft.com/en-us/visualstudio/python/python-environments#creating-an-environment-for-an-existing-interpreter)
|
|
@ -0,0 +1,45 @@
|
|||
---
|
||||
title: "Quickstart: Create a Python project from a template in Visual Studio | Microsoft Docs"
|
||||
ms.custom: ""
|
||||
ms.date: 09/25/2017
|
||||
ms.reviewer: ""
|
||||
ms.suite: ""
|
||||
ms.technology:
|
||||
- "devlang-python"
|
||||
ms.devlang: python
|
||||
ms.tgt_pltfrm: ""
|
||||
ms.topic: "article"
|
||||
ms.assetid: 3f4b66c5-3ad8-4067-90cd-0100205700a7
|
||||
caps.latest.revision: 1
|
||||
author: "kraigb"
|
||||
ms.author: "kraigb"
|
||||
manager: ghogen
|
||||
---
|
||||
|
||||
# Quickstart: create an AI project from a template in Visual Studio
|
||||
|
||||
Once you've [installed Visual Studio Tools for AI](installation.md), it's easy to create a new Python project using a variety of templates.
|
||||
|
||||
1. Launch Visual Studio.
|
||||
|
||||
1. Select **File > New > Project** (Ctrl+Shift+N). In the **New Project** dialog, search for "**AI Tools**", and select the template you want. Note that selecting a template displays a short description of what the template provides.
|
||||
|
||||
![VS2017 New Project dialog with Python template](media/new-ai-project.png)
|
||||
|
||||
1. For this Quickstart, select the "**TensorFlow Application**" template, give the project a name (such as "MNIST") and location, and select **OK**.
|
||||
|
||||
1. Visual Studio creates the project file (a `.pyproj` file on disk) along with any other files as described by the template. With the "TensorFlow Application" template, the project contains one file named the same as your project. The file is open in the Visual Studio editor by default.
|
||||
|
||||
![Resulting project when using the Python Application template](media/new-tensorflowapp.png)
|
||||
|
||||
1. Notice that the code already imports several libraries, including TensorFlow, numpy, sys, and os. It also sets up the application with some input arguments so you can easily switch the location of the input training data, output models, and log files. These parameters are useful when you submit jobs to multiple compute contexts (for example, a different directory on your local dev box than on an Azure file share).
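    For illustration only, a run with such arguments might look like the following; the actual argument names are defined by the generated script, so check its code before relying on these hypothetical ones:

    ```cmd
    python MNIST.py --input_dir .\data --output_dir .\outputs --log_dir .\logs
    ```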
|
||||
|
||||
1. Your project also has some properties set up to make it easy to debug your app by automatically passing command-line arguments to these input parameters. **Right-click** your project, then select **Properties**.
|
||||
|
||||
![Properties](media/project-properties.png)
|
||||
|
||||
1. Click the **Debug** tab to see the Script Arguments that were added automatically. You may change them as needed to point to where your input data is located and where you would like your output stored.
|
||||
|
||||
![Properties](media/project-properties_1.png)
|
||||
|
||||
1. Run the program by pressing Ctrl+F5 or selecting **Debug > Start Without Debugging** on the menu. The results are displayed in a console window.
|
|
@ -0,0 +1,37 @@
|
|||
# Quickstart: Clone a repository of Python code in Visual Studio
|
||||
|
||||
Once you've [installed Visual Studio Tools for AI](installation.md), you can easily clone a repository of Python code and create a project from it.
|
||||
|
||||
1. To connect to GitHub repositories, run the Visual Studio installer, select **Modify**, and select the **Individual components** tab. Scroll down to the **Code tools** section, select **GitHub extension for Visual Studio**, and select **Modify**.
|
||||
|
||||
![Selecting the GitHub extension in the Visual Studio installer](/media/installation-github-extension.png)
|
||||
|
||||
2. Launch Visual Studio.
|
||||
|
||||
3. Select **View > Team Explorer...** to open the **Team Explorer** window in which you can connect to GitHub or Visual Studio Team Services, or clone a repository.
|
||||
|
||||
![Team explorer window showing Visual Studio Team Services, GitHub, and cloning a repository](media/team-explorer.png)
|
||||
|
||||
4. In the URL field under **Local Git Repositories**, enter `https://github.com/Microsoft/samples-for-ai`, enter a folder for the cloned files, and select **Clone**.
|
||||
|
||||
> [!Tip]
|
||||
> The folder you specify in Team Explorer is the specific folder to receive the cloned files. Unlike the `git clone` command, creating a clone in Team Explorer does not automatically create a subfolder with the name of the repository.
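If you prefer the command line, the equivalent clone (which, unlike Team Explorer, does create a `samples-for-ai` subfolder) is:

```bash
git clone https://github.com/Microsoft/samples-for-ai
```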
|
||||
|
||||
5. When cloning is complete, double-click the repository folder at the bottom of Team Explorer to navigate to the repository dashboard. Under **Solutions**, select **New...**.
|
||||
|
||||
![Team explorer window, creating a new project from a clone](media/team-explorer-new-project.png)
|
||||
|
||||
6. In the **New Project** dialog that appears, select "**From Existing Python Code**", specify a name for the project, set **Location** to the same folder as the repository, and select **OK**. In the wizard that appears, select **Finish**.
|
||||
|
||||
7. Select **View > Solution Explorer** from the menu.
|
||||
|
||||
8. In Solution Explorer, expand the `TensorFlow Examples> MNIST` node, right-click `convolutional.py`, and select **Set as Startup File**. This step tells Visual Studio which file it should use when running the project.
|
||||
|
||||
10. Press Ctrl+F5 or select **Debug > Start Without Debugging** to run the program. If you see an error, re-check the working directory setting in the previous step.
|
||||
|
||||
|
||||
11. When the program runs successfully, you'll see it start to download the training and test datasets, then train the model and output the error rate. The error rate should decrease over time.
|
||||
|
||||
![First output from the Python MNIST program](media/TensorFlow-MNIST-Running.png)
|
||||
|
||||
> If you are using Anaconda and get an error about missing numpy, you may need to [change your Python environment to use Anaconda](https://docs.microsoft.com/en-us/visualstudio/python/python-environments).
|
|
@ -0,0 +1,61 @@
|
|||
---
|
||||
title: "Quickstart: Create a Python project from a template in Visual Studio | Microsoft Docs"
|
||||
ms.custom: ""
|
||||
ms.date: 09/25/2017
|
||||
ms.reviewer: ""
|
||||
ms.suite: ""
|
||||
ms.technology:
|
||||
- "devlang-python"
|
||||
ms.devlang: python
|
||||
ms.tgt_pltfrm: ""
|
||||
ms.topic: "article"
|
||||
ms.assetid: 3f4b66c5-3ad8-4067-90cd-0100205700a7
|
||||
caps.latest.revision: 1
|
||||
author: "kraigb"
|
||||
ms.author: "kraigb"
|
||||
manager: ghogen
|
||||
---
|
||||
|
||||
# Quickstart: train AI models in Azure Batch AI
|
||||
|
||||
Batch AI is a managed service that enables data scientists and AI researchers to train AI and other machine learning models on clusters of Azure virtual machines, including VMs with GPU support. You describe the requirements of your job, where to find the inputs and store the outputs, and Batch AI handles the rest. [Learn more about Azure Batch AI](https://docs.microsoft.com/en-us/azure/batch-ai/overview)
|
||||
|
||||
It's integrated with Visual Studio Tools for AI so you can dynamically scale out training models in Azure. Once you've [installed Visual Studio Tools for AI](installation.md), it's easy to create a new Python project using pre-made recipes in the Azure Machine Learning Sample Gallery.
|
||||
|
||||
1. Launch Visual Studio. Open the **Server Explorer** by opening the **AI Tools** menu and choosing **Select Cluster**
|
||||
|
||||
![Cluster chooser](media/select-cluster.png)
|
||||
|
||||
|
||||
2. Expand **AI Tools**. Any Batch AI resources you have will be auto-detected and appear in the Server Explorer.
|
||||
|
||||
![Sample gallery](media/batchai.png)
|
||||
|
||||
3. Select **View > Team Explorer...** to open the **Team Explorer** window in which you can connect to GitHub or Visual Studio Team Services, or clone a repository.
|
||||
|
||||
![Team explorer window showing Visual Studio Team Services, GitHub, and cloning a repository](media/team-explorer.png)
|
||||
|
||||
4. In the URL field under **Local Git Repositories**, enter `https://github.com/Microsoft/samples-for-ai`, enter a folder for the cloned files, and select **Clone**.
|
||||
|
||||
> [!Tip]
|
||||
> The folder you specify in Team Explorer is the specific folder to receive the cloned files. Unlike the `git clone` command, creating a clone in Team Explorer does not automatically create a subfolder with the name of the repository.
|
||||
|
||||
5. When cloning is complete, click **File > Open > Project/Solution**
|
||||
|
||||
![Sample gallery](media/open-solution.png)
|
||||
|
||||
6. Open **samples-for-ai\TensorFlowExamples\TensorFlowExamples.sln** in the directory where you cloned the repository.
|
||||
|
||||
![Sample gallery](media/tensorflowexamples.png)
|
||||
|
||||
7. Set the MNIST project as the **Startup Project**.
|
||||
|
||||
![Sample gallery](media/mnist-startup.png)
|
||||
|
||||
8. **Right-click** the MNIST project, then select **Submit Job**.
|
||||
|
||||
![Sample gallery](media/submit-job.png)
|
||||
|
||||
9. Select your **Azure Batch AI** cluster, then click **Import**. Select the `AzureBatchAI_TF_MNIST.json` file to quickly populate some default values, such as which Docker image to use. Then click **Submit**.
|
||||
|
||||
![Sample gallery](media/submit-batch.png)
|
|
@ -0,0 +1,43 @@
|
|||
|
||||
# Run a TensorFlow model locally
|
||||
|
||||
In this quickstart, we will run a TensorFlow model with the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset locally in AI Tools.
|
||||
The MNIST database has a training set of 60,000 examples, and a test set of 10,000 examples of handwritten digits.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before you begin, ensure you have the following installed:
|
||||
|
||||
### Google TensorFlow
|
||||
|
||||
Run the following command in a terminal.
|
||||
```cmd
|
||||
C:\>pip.exe install tensorflow==1.2.1
|
||||
```
|
||||
|
||||
### NumPy and SciPy
|
||||
Install [NumPy](https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy) and [SciPy](https://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy).
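Alternatively, since SciPy 1.0+ ships official Windows wheels, installing through pip generally works as well (a sketch, matching the command style above):

```cmd
C:\>pip.exe install numpy scipy
```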
|
||||
|
||||
### Download sample code
|
||||
Download this [GitHub repository](https://github.com/Microsoft/samples-for-ai) containing samples for getting started with deep learning across TensorFlow, CNTK, Theano and more.
|
||||
|
||||
## Load and run model
|
||||
|
||||
- Launch Visual Studio and select **File > Open > Project/Solution**.
|
||||
|
||||
- Select the **Tensorflow Examples** folder from the samples repository you downloaded and open the **TensorflowExamples.sln** file.
|
||||
|
||||
![Open project](media/tensorflow-local/open-project.png)
|
||||
|
||||
![Open solution](media/tensorflow-local/open-solution.png)
|
||||
|
||||
- Find the MNIST Project in the **Solution Explorer**, right click and select **Set as StartUp Project**.
|
||||
|
||||
- Click **Start**.
|
||||
|
||||
- The output will be printed in the console.
|
||||
|
||||
![Sample output from console](media/tensorflow-local/console-output.png)
|
||||
|
||||
> [!div class="nextstepaction"]
|
||||
> [Run a TensorFlow model in the cloud](tensorflow-vm.md)
|
|
@ -0,0 +1,87 @@
|
|||
# Run a TensorFlow model in the cloud
|
||||
In this tutorial, we will run a TensorFlow model using the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) in an Azure [Deep Learning](https://docs.microsoft.com/azure/machine-learning/data-science-virtual-machine/deep-learning-dsvm-overview) virtual machine.
|
||||
|
||||
The MNIST database has a training set of 60,000 examples, and a test set of 10,000 examples of handwritten digits.
|
||||
|
||||
## Prerequisites
|
||||
Before you begin, ensure you have the following installed and configured:
|
||||
|
||||
### Setup Azure Deep Learning Virtual Machine
|
||||
|
||||
> [!NOTE]
|
||||
> Set **Location** to US West 2 and **OS type** as Linux.
|
||||
|
||||
Instructions for setting up the Deep Learning Virtual Machine can be found [here](https://docs.microsoft.com/azure/machine-learning/data-science-virtual-machine/provision-deep-learning-dsvm).
|
||||
|
||||
### Install cuDNN
|
||||
Connect to the Deep Learning Virtual Machine and install cuDNN.
|
||||
|
||||
```bash
|
||||
wget http://developer.download.nvidia.com/compute/redist/cudnn/v6.0/cudnn-8.0-linux-x64-v6.0.tgz
|
||||
tar -xzvf ./cudnn-8.0-linux-x64-v6.0.tgz
|
||||
sudo mkdir /usr/local/cudnn-6.0
|
||||
sudo cp -r cuda /usr/local/cudnn-6.0
|
||||
```
|
||||
|
||||
### Edit .bashrc to support non-interactive sessions (comment out the case statement)
|
||||
|
||||
```bash
|
||||
# If not running interactively, don't do anything
|
||||
#case $- in
|
||||
# *i*) ;;
|
||||
# *) return;;
|
||||
#esac
|
||||
```
|
||||
|
||||
### Add Path Variables
|
||||
```bash
|
||||
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
|
||||
export PATH=$PATH:/usr/local/cuda/bin
|
||||
export LD_LIBRARY_PATH=/usr/local/cudnn-6.0/cuda/lib64:$LD_LIBRARY_PATH
|
||||
export PATH=/anaconda/envs/py35/bin:$PATH
|
||||
```
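To make these variables persist for the non-interactive sessions used when jobs are submitted (the reason the case statement above was commented out), one option is to append them to `~/.bashrc`, for example:

```bash
cat >> ~/.bashrc <<'EOF'
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
export LD_LIBRARY_PATH=/usr/local/cudnn-6.0/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/anaconda/envs/py35/bin:$PATH
EOF
```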
|
||||
|
||||
### Download sample code
|
||||
Download this [GitHub repository](https://github.com/Microsoft/samples-for-ai) containing samples for getting started with deep learning across TensorFlow, CNTK, Theano and more.
|
||||
|
||||
## Open project
|
||||
|
||||
- Launch Visual Studio and select **File > Open > Project/Solution**.
|
||||
|
||||
- Select the **Tensorflow Examples** folder from the samples repository you downloaded and open the **TensorflowExamples.sln** file.
|
||||
|
||||
![Open project](media/tensorflow-local/open-project.png)
|
||||
|
||||
![Open solution](media/tensorflow-local/open-solution.png)
|
||||
|
||||
## Add Azure Remote VM
|
||||
|
||||
In Server Explorer, right-click the **Remote Machines** node under the AI Tools node and select "Add…". Enter the remote machine's display name, host IP address, SSH port, user name, and password/key file.
|
||||
|
||||
![Add a new remote machine](media/tensorflow-vm/add-remote-vm.png)
|
||||
|
||||
## Submit job to Azure VM
|
||||
Right click on MNIST project in **Solution Explorer** and select **Submit Job**.
|
||||
|
||||
![Job submission to a remote machine](media/tensorflow-vm/job-submission.png)
|
||||
|
||||
In the submission window:
|
||||
|
||||
- In the list of **Cluster to use**, select the remote machine (with "rm:" prefix) to submit the job to.
|
||||
|
||||
- Enter a **Job name**.
|
||||
|
||||
- Click **Submit**.
|
||||
|
||||
## Check status of job
|
||||
To see the status and details of jobs, expand the virtual machine you submitted the job to in the **Server Explorer**, then double-click **Jobs**.
|
||||
|
||||
![Job browser](media/tensorflow-vm/job-browser.png)
|
||||
|
||||
## Clean up resources
|
||||
|
||||
Stop the VM if you plan on using it in the near future. If you are finished with this tutorial, run the following command to clean up your resources:
|
||||
|
||||
```azure-interactive
|
||||
az group delete --name myResourceGroup
|
||||
```
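If you want to keep the VM for later but stop paying for compute, you can deallocate it instead of deleting the resource group (the VM name below is a placeholder):

```azure-interactive
az vm deallocate --resource-group myResourceGroup --name myVM
```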
|