Samples and Tools for Windows ML.

Sample/Tool Status
  • All Samples: Build status
  • WinMLRunner: Build status

Windows ML

Welcome to the Windows ML repo! Windows ML allows you to use trained machine learning models in your Windows apps (C#, C++, JavaScript). The Windows ML inference engine evaluates trained models locally on Windows devices, and hardware optimizations for CPU and GPU enable high performance and fast evaluation results.
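
To give a rough sense of the API, here is a minimal C# (UWP) sketch of loading an ONNX model and running one evaluation with the Windows.AI.MachineLearning APIs. The asset path (Assets/SqueezeNet.onnx), the input shape, and the feature names data_0 and softmaxout_1 are placeholders for illustration; the samples in this repo show complete, model-specific versions.

    using System;
    using System.Threading.Tasks;
    using Windows.AI.MachineLearning;
    using Windows.Storage;

    public static class WinMLSketch
    {
        // Loads an ONNX model and runs a single evaluation on the default device.
        // The asset path and the "data_0"/"softmaxout_1" feature names are placeholders;
        // use the names your model actually declares.
        public static async Task<TensorFloat> EvaluateOnceAsync()
        {
            StorageFile modelFile = await StorageFile.GetFileFromApplicationUriAsync(
                new Uri("ms-appx:///Assets/SqueezeNet.onnx"));
            LearningModel model = await LearningModel.LoadFromStorageFileAsync(modelFile);

            // A session targets a device; Default lets Windows ML pick CPU or GPU.
            var session = new LearningModelSession(
                model, new LearningModelDevice(LearningModelDeviceKind.Default));

            // Bind an input tensor shaped for the model (here: NCHW 1x3x224x224, zero-filled).
            var binding = new LearningModelBinding(session);
            TensorFloat input = TensorFloat.CreateFromArray(
                new long[] { 1, 3, 224, 224 }, new float[1 * 3 * 224 * 224]);
            binding.Bind("data_0", input);

            // Evaluate and read the output feature by its model-defined name.
            LearningModelEvaluationResult result = await session.EvaluateAsync(binding, "run-1");
            return result.Outputs["softmaxout_1"] as TensorFloat;
        }
    }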

In this repo, you will find sample apps that demonstrate how to use Windows ML to build machine learning applications, and tools that help verify models and troubleshoot issues during development on Windows 10.

For additional information on Windows ML, including step-by-step tutorials and how-to guides, please visit the Windows ML documentation.

Requirements

Sample apps

These generic examples show how to use various models and input feeds with Windows ML. The repo includes native C++ desktop apps as well as C# and JavaScript UWP samples.

Developer Tools

  • WinMLRunner: a command-line tool that can run .onnx or .pb models whose input and output variables are tensors or images. It is a handy way to quickly validate an ONNX model: it attempts to load, bind, and evaluate the model, prints helpful diagnostic messages, and captures performance measurements. A sample invocation is shown below.
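
For example, an invocation might point the tool at a model and request performance measurements (the model path here is illustrative; the full set of flags is documented under Tools/WinMLRunner):

    WinMLRunner.exe -model C:\models\SqueezeNet.onnx -perf -iterations 10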

Using the samples

The easiest way to use these samples without using Git is to download the zip file containing the current version (using the following link or by clicking the "Download ZIP" button on the repo page). You can then unzip the entire archive and use the samples in Visual Studio 2017.

Download the samples ZIP

Notes:
  • Before you unzip the archive, right-click it, select Properties, and then select Unblock.
  • Be sure to unzip the entire archive, not just individual samples. The samples all depend on the SharedContent folder in the archive.
  • In Visual Studio 2017, the platform target defaults to ARM, so be sure to change it to x64 or x86 if you want to test on a non-ARM device.

Reminder: If you unzip individual samples, they will not build due to references to other portions of the ZIP file that were not unzipped. You must unzip the entire archive if you intend to build the samples.

Feedback

Release Notes

Build 17723

  • Requires ONNX v1.2 or higher.
  • Supports FP16 datatypes for GPU-based model inference, for better performance and a reduced model footprint. You can use WinMLTools to convert your models from FP32 to FP16.
  • Allows desktop apps to consume Windows.AI.MachineLearning APIs with WinRT/C++.

Contributing

We're always looking for your help to fix bugs and improve the samples. Create a pull request, and we'll be happy to take a look.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ, or contact opencode@microsoft.com with any additional questions or comments.