Merge branch 'dev' of https://msasg.visualstudio.com/OpenMind%20Studio/OpenMind%20Studio%20Team/_git/vs-tools-for-ai into dev
@ -30,7 +30,7 @@ Get started with deep learning using [Microsoft Cognitive Toolkit (CNTK)](http:/
- [TensorFlow + Azure Deep Learning VM](/docs/tensorflow-vm.md)
- [Infuse Apps, Websites and Bots with Microsoft Cognitive Services](/docs/cognitive-services.md)
- [Build Intelligent Apps with Pre-trained AI Models](/docs/model-inference.md)
- [Convert AI Models Between Frameworks](/docs/model-converter.md)
- [Convert trained models to ONNX](/docs/model-converter.md)
- [View Network Architecture and Parameters of AI Models](/docs/model-viewer.md)
@ -10,7 +10,7 @@ Once you've [installed Visual Studio Tools for AI](installation.md), it will cre
- <a id="list-services">Discover (list) all your subscribed cognitive services</a>. Refreshing (double-clicking, or right-clicking and selecting **Refresh**) will list all the cognitive services subscribed in your account.

*Note: If there are so many subscriptions (or resource groups) in your account that the refreshing goes too slow, please filter the subscriptions (and resource groups) by right clicking **Azure Cognitive Services** and selecting **Select Subscription**.*

*Note: If you have many subscriptions (or resource groups) in your account, you can filter the subscriptions (and resource groups) by right-clicking **Azure Cognitive Services** and selecting **Select Subscription**.* This can make it easier to find the service you are looking for.

- <a id="service-properties">Query the basic information of a cognitive service</a>. After listing your subscribed cognitive services, you can get further information about a service of interest. Right-click the cognitive service and select **Documentation**, **Properties** or **Subscription Keys** to retrieve the information you want. The subscription keys and **Endpoint Location** (under **Properties**) are necessary to authenticate your applications.
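For illustration, here is a minimal sketch of how an application could authenticate a request with these values, using the Cognitive Services REST API from Python. The service (Text Analytics), region, and key below are hypothetical placeholders; substitute the values shown in Server Explorer:

```python
import requests

# Hypothetical values: copy the real ones from Subscription Keys and
# Properties (Endpoint Location) in Server Explorer.
endpoint = "https://westus.api.cognitive.microsoft.com"
subscription_key = "<your-subscription-key>"

# Example: calling the Text Analytics sentiment endpoint.
response = requests.post(
    endpoint + "/text/analytics/v2.0/sentiment",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    json={"documents": [{"id": "1", "language": "en", "text": "Great product!"}]},
)
print(response.json())
```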
@ -0,0 +1,20 @@
# How to add models to your application via NuGet

After [generating code from your trained model](model-inference.md), you can use Visual Studio to add a project reference to the model inferencing library project directly. Alternatively, you can publish it as a NuGet package to easily include it in multiple applications or share it with others to use in their applications, such as web services and desktop programs. [Learn more about NuGet here](https://docs.microsoft.com/en-us/nuget/what-is-nuget).

## To create a NuGet package

Right-click the Model Inference Library project node in Solution Explorer and select the menu item named “Export to NuGet Package”.

![Click context menu item to package project](/media/model-inference/create_nupkg.png)

A dialog pops up in which you need to provide the metadata of the package to create. Please refer to [Required Metadata Elements](https://docs.microsoft.com/en-us/nuget/schema/nuspec#required-metadata-elements) in the NuGet documentation.

Visual Studio Tools for AI sets all the other NuGet package configuration items automatically for you.

![Fill in the meta information of the NuGet package to create](/media/model-inference/package_dialog.png)

After you click the OK button, Visual Studio Tools for AI first builds your project. If the build succeeds, it generates the nuspec file and the corresponding build.targets file in the background. Then the built-in nuget.exe is called to create the NuGet package for the Model Inference Library project.

Finally, a File Explorer window pops up showing the directory that contains the built NuGet package. The output path is determined by your MSBuild configuration for the project.

![Where to find the created NuGet package](/media/model-inference/output_folder.png)
@ -1,8 +1,23 @@
# Model Converter

You can convert machine learning and deep learning framework models to [ONNX](https://onnx.ai/). Currently, we support Core ML/TensorFlow/Scikit-Learn/XGBoost/LIBSVM.

# Converting models to ONNX

[ONNX](https://onnx.ai/) is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners. To learn more about ONNX, see [http://onnx.ai/](https://onnx.ai/).

Converting models to ONNX makes it easy to use them in a wide variety of optimized applications. Visual Studio Tools for AI will [generate code from ONNX models and TensorFlow models](model-inference.md) to make it easy to include models in your applications. However, only ONNX models are supported by [WindowsML](https://docs.microsoft.com/en-us/windows/uwp/machine-learning/).

Visual Studio Tools for AI makes it easy to convert trained models to ONNX by leveraging existing model converters. You can learn more about the [model converter utilities here](https://github.com/onnx/onnx), or simply use the wizard in Visual Studio to create your ONNX model.

Currently, Visual Studio Tools for AI supports converting machine learning and deep learning framework models to ONNX from the following frameworks:

- Core ML
- TensorFlow
- Scikit-Learn
- XGBoost
- LIBSVM

## Prerequisites

Before converting models, go to the third-party web site to install unofficial [XGBoost](https://www.lfd.uci.edu/~gohlke/pythonlibs/#xgboost) and [LIBSVM](https://www.lfd.uci.edu/~gohlke/pythonlibs/#libsvm) 64-bit Windows packages, and then run the following command in a terminal:

To install the prerequisites for converting XGBoost and LIBSVM models:

- Install the [XGBoost](https://www.lfd.uci.edu/~gohlke/pythonlibs/#xgboost) 64-bit Windows package
- Install the [LIBSVM](https://www.lfd.uci.edu/~gohlke/pythonlibs/#libsvm) 64-bit Windows package

Then run the following from your command line:

```cmd
pip3 install tensorflow==1.5.0 scikit-learn onnx "git+https://github.com/apple/coremltools@v0.8" onnxmltools winmltools "git+https://github.com/onnx/tensorflow-onnx.git@r0.1"
```
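The wizard drives these converters for you, but they can also be invoked directly from Python. As a rough sketch (assuming the packages above are installed), converting a scikit-learn model with onnxmltools might look like this; the model and file names are hypothetical:

```python
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType
from sklearn.linear_model import LogisticRegression

# Train a trivial scikit-learn model on hypothetical example data.
clf = LogisticRegression()
clf.fit([[0.0, 0.0], [1.0, 1.0]], [0, 1])

# Convert to ONNX, declaring the input as a float tensor with two features,
# then save the result to disk.
onnx_model = onnxmltools.convert_sklearn(
    clf, initial_types=[("input", FloatTensorType([1, 2]))])
onnxmltools.utils.save_model(onnx_model, "logistic_regression.onnx")
```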
@ -15,12 +30,15 @@ pip3 install tensorflow==1.5.0 scikit-learn onnx "git+https://github.com/apple/c
### Convert Core ML model

- Enter a graph name for the ONNX model.

![Convert CoreML](./media/model-converter/coreml.png)
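For reference, the same conversion can be sketched in Python with the WinMLTools package installed above; the `name` argument sets the graph name of the resulting ONNX model (the file and model names here are hypothetical):

```python
from coremltools.models.utils import load_spec
from winmltools import convert_coreml
from winmltools.utils import save_model

# Load the Core ML model specification (hypothetical file name).
model_coreml = load_spec("example.mlmodel")

# Convert to ONNX; 'name' becomes the graph name of the converted model.
model_onnx = convert_coreml(model_coreml, name="ExampleModel")
save_model(model_onnx, "example.onnx")
```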
### Convert TensorFlow model

- We support two kinds of TensorFlow models: frozen protobuf model and MetaGraphDef model. Select checkpoint directory if model format is MetaGraphDef.

You can convert two types of TensorFlow models to ONNX:

- Frozen protobuf model (file extension is *.pb)
- Checkpoint MetaGraphDef model (file extension is *.meta)

![Open folder](./media/model-converter/tensorflow-checkpoint.png)

- Add input nodes and output nodes. The node names must exist in the graph.

![Convert TensorFlow](./media/model-converter/tensorflow.png)
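If you only have a checkpoint but want a frozen protobuf model, one way is to freeze the graph yourself with TensorFlow 1.x before converting; a minimal sketch, where the checkpoint paths and the output node name are hypothetical:

```python
import tensorflow as tf

# Rebuild the graph from the MetaGraphDef and restore the weights
# (hypothetical checkpoint paths).
saver = tf.train.import_meta_graph("model/model.ckpt.meta")
with tf.Session() as sess:
    saver.restore(sess, "model/model.ckpt")
    # Replace variables with constants so the graph is self-contained.
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names=["output"])

with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen_graph.SerializeToString())
```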
@ -34,6 +52,6 @@ pip3 install tensorflow==1.5.0 scikit-learn onnx "git+https://github.com/apple/c
### Start Converting

- Click the OK button; dependencies will be checked first.
- If all dependencies are installed, a converter task will be added to the task list explorer.
- When converter task succeds, it will open a file explorer with target model selected.
- When the converter task succeeds, it will open a file explorer with the target model selected.

![Convert Tasklist](./media/model-converter/tasklist.png)
@ -1,10 +1,12 @@
## Overview

# Generate code from trained models

Building intelligent applications in Visual Studio is as easy as adding your pre-trained model to your app, just like any other library or resource. Visual Studio Tools for AI includes an ML scoring library that offers simplified consistent APIs across TensorFlow and ONNX models.

Building intelligent applications in Visual Studio is as easy as adding your pre-trained model to your app, just like any other library or resource. Visual Studio Tools for AI generates code from your trained model to make it easy to get started, and includes the [Microsoft.ML.Scoring](https://www.nuget.org/packages/Microsoft.ML.Scoring/) library that offers simplified, consistent APIs across TensorFlow and ONNX models.

The library allows users to automatically optimize their models for model serving, reducing the model size for use in inferencing applications. Moreover, VS Tools for AI generates a C# stub class to simplify interaction with models in your app. These Model Inference Library projects can be further deployed as NuGet packages for convenient distribution.

## Supported model framework versions

VS Tools for AI supports building apps using TensorFlow and ONNX models. Currently, the following versions are supported:

- ONNX
    - Version: 1.0.1
    - CPU (Intel MKL enabled) only
@ -15,9 +17,7 @@ VS Tools for AI supports building apps using Tensorflow and ONNX models. Current
- For TensorFlow Checkpoint - all files including a checkpoint file, a meta file, and data files should be stored under the same folder. If your model contains TensorFlow lookup operations, please copy your vocabulary file to this folder as well.
- For TensorFlow SavedModel - all files including a pb file, data files and asset files should be stored under the same folder. Please do not import SavedModel files that were previously optimized by the library - this can result in unexpected errors.

> [!NOTE]
>
> [Intel MKL](https://software.intel.com/en-us/mkl) is licensed under the [Intel Simplified Software License](https://software.intel.com/en-us/license/intel-simplified-software-license).

## How to Create a Model Inference Library Project
@ -63,28 +63,16 @@ In the interface editor dialog, please provide required data fields:
+ Inputs: This is the collection of nodes in the inference graph that hold input data such as text, images, or audio. The Name column is the friendly name used in C# code, and Description is optional. The most important field is Internal Name, which is the full name of the input tensor or operation set by your training toolkit, such as TensorFlow or CNTK.
+ Outputs: This is the collection of nodes in the inference graph that produce results such as a category or value. The columns share the same semantic definition as those of Inputs.
![Fill in information of model interface to call](/docs/media/model-inference/interface_dialog.png)

![Fill in information of model interface to call](/media/model-inference/interface_dialog.png)
The Interface Editor provides an auto-completion feature to help you find the proper internal names. After you type a few characters of the target TensorFlow operation name, a dropdown list pops up and you can select the right one.
![Auto completion list for internal names](/docs/media/model-inference/auto_completion.png)

![Auto completion list for internal names](/media/model-inference/auto_completion.png)
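If auto-completion does not surface the node you are looking for, you can also list the operation names in a frozen TensorFlow graph directly; a small sketch, with a hypothetical file name:

```python
import tensorflow as tf

# Read the frozen graph and print every node name and type,
# e.g. "input_1 Placeholder" or "dense/Softmax Softmax".
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name, node.op)
```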
After filling in all required fields, click the OK button in the wizard. Visual Studio then begins to create the project, add the ML scoring NuGet reference, prepare content files, and generate a code template that wraps the interfaces of the imported model.
## How to Include your Model Inference Library Project in apps
Once you finish writing and refining your code, you can add a project reference to the library to build and run applications directly. Alternatively, you can publish it as a NuGet package for others to consume in their applications, such as web services and desktop programs.
Right-click the Model Inference Library project node in Solution Explorer and select the menu item named “Export to NuGet Package”.
![Click context menu item to package project](/docs/media/model-inference/create_nupkg.png)
A dialog pops up in which you need to provide the metadata of the package to create. Please refer to [Required Metadata Elements](https://docs.microsoft.com/en-us/nuget/schema/nuspec#required-metadata-elements) in the NuGet documentation. Visual Studio Tools for AI sets all the other NuGet package configuration items automatically for you.
![Fill in the meta information of the NuGet package to create](/docs/media/model-inference/package_dialog.png)
After you click the OK button, Visual Studio Tools for AI first builds your project. If the build succeeds, it generates the nuspec file and the corresponding build.targets file in the background. Then the built-in nuget.exe is called to create the NuGet package for the Model Inference Library project.
Finally, a File Explorer window pops up showing the directory that contains the built NuGet package. The output path is determined by your MSBuild configuration for the project.
![Where to find the created NuGet package](/docs/media/model-inference/output_folder.png)
> [!NOTE]
>
> [Intel MKL](https://software.intel.com/en-us/mkl) is licensed under the [Intel Simplified Software License](https://software.intel.com/en-us/license/intel-simplified-software-license).