# Real-Time Style Transfer Sample

This UWP application uses the Microsoft.AI.MachineLearning NuGet package to perform style transfer on user-provided input images or web camera streams. The VideoEffect/StyleTransferEffectCpp project implements the [IBasicVideoEffect](https://docs.microsoft.com/en-us/uwp/api/windows.media.effects.ibasicvideoeffect?view=winrt-19041) interface in order to create a video effect that can be plugged into the media streaming pipeline.
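
The sketch below is illustrative only; it is not the code in the StyleTransferEffectCpp project, which is written in C++/WinRT. It shows a minimal C# `IBasicVideoEffect` that loads a style-transfer ONNX model with the Windows ML APIs and evaluates it on every frame the pipeline hands it. The model path and the input/output tensor names are placeholders, and a real effect class must live in a Windows Runtime Component so the media pipeline can activate it.

```csharp
// Minimal sketch only: a C# IBasicVideoEffect that runs a style-transfer model
// with Windows ML. The sample's real effect is the C++/WinRT
// StyleTransferEffectCpp project; the model path and tensor names below are
// placeholders, not the sample's actual values.
using System;
using System.Collections.Generic;
using Microsoft.AI.MachineLearning;          // from the Microsoft.AI.MachineLearning NuGet package
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Media.Effects;
using Windows.Media.MediaProperties;
using Windows.Storage;

public sealed class StyleTransferEffectSketch : IBasicVideoEffect
{
    private LearningModelSession session;
    private LearningModelBinding binding;

    // Receives configuration from the VideoEffectDefinition that added the effect.
    public void SetProperties(IPropertySet configuration)
    {
        // Hypothetical model path; load the model once and reuse the session per frame.
        var modelFile = StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Assets/mosaic.onnx")).GetAwaiter().GetResult();
        var model = LearningModel.LoadFromStorageFileAsync(modelFile).GetAwaiter().GetResult();
        session = new LearningModelSession(model, new LearningModelDevice(LearningModelDeviceKind.Default));
        binding = new LearningModelBinding(session);
    }

    // Called by the media pipeline for every frame.
    public void ProcessFrame(ProcessVideoFrameContext context)
    {
        // Bind the incoming frame as the model input and the pipeline's output
        // frame as the model output, then evaluate synchronously.
        binding.Bind("inputImage", ImageFeatureValue.CreateFromVideoFrame(context.InputFrame));   // assumed tensor name
        binding.Bind("outputImage", ImageFeatureValue.CreateFromVideoFrame(context.OutputFrame)); // assumed tensor name
        session.Evaluate(binding, "StyleTransferFrame");
    }

    public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }
    public void DiscardQueuedFrames() { }
    public void Close(MediaEffectClosedReason reason) { }

    public bool IsReadOnly => false;
    public bool TimeIndependent => false;
    public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.GpuAndCpu;
    public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties =>
        new List<VideoEncodingProperties> { VideoEncodingProperties.CreateUncompressed("ARGB32", 640, 360) };
}
```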

For how-tos, tutorials and additional information, see the Windows ML documentation. To learn more about creating custom video effects, see this walkthrough.

## Building the Sample

  1. If you download the samples ZIP, be sure to unzip the entire archive, not just the folder with the sample you want to build.
  2. Start Microsoft Visual Studio 2017 and select File > Open > Project/Solution.

### Build the Video Effect

  1. Starting in the folder where you unzipped the samples, go to the Samples subfolder, then the subfolder for this specific sample (e.g., StyleTransfer). Navigate to VideoEffect/StyleTransferEffectCpp and open the .sln file.
  2. Confirm that the correct configuration and platform are selected (e.g., Debug, x64).
  3. Build the solution (Ctrl+Shift+B).
  4. Take note of where the .winmd file is built (e.g., StyleTransferEffectCpp/x64/Debug/StyleTransferEffectCpp.winmd).

### Build the Sample App

  1. Open a new Visual Studio window and select File > Open > Project/Solution.
  2. Starting in the folder where you unzipped the samples, go to the Samples subfolder, then the subfolder for this specific sample (e.g., StyleTransfer). Open the Visual Studio (.sln) file.
  3. In the Solution Explorer, right-click on the References tab and select "Add Reference..."
  4. Click Browse and navigate to the .winmd file you built in the previous section (e.g., StyleTransferEffectCpp/x64/Debug/StyleTransferEffectCpp.winmd). Click OK.
  5. In the Solution Explorer, select the Package.appxmanifest file. Select the Visual Assets tab and choose Assets\logo.png. Set the destination to Assets, select all asset and scale options, and click Generate.
  6. You should now be able to run the sample app!
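
At run time the app attaches the built effect to the camera stream through MediaCapture. The snippet below is a rough sketch of that wiring under a few assumptions: the activatable class ID ("StyleTransferEffectComponent.StyleTransferEffect") and the "ModelName" configuration key are placeholders for whatever the component you built actually exports, and the caller is expected to hook the returned MediaCapture up to a CaptureElement before starting the preview.

```csharp
// Sketch of how a UWP app could attach the built video effect to the camera
// preview stream. The activatable class ID and the "ModelName" configuration
// key are assumptions; use whatever the StyleTransferEffectCpp component you
// built actually exports.
using System.Threading.Tasks;
using Windows.Foundation.Collections;
using Windows.Media.Capture;
using Windows.Media.Effects;

public static class EffectWiringSketch
{
    public static async Task<MediaCapture> CreateCaptureWithEffectAsync()
    {
        var mediaCapture = new MediaCapture();
        await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            StreamingCaptureMode = StreamingCaptureMode.Video
        });

        // Optional configuration handed to the effect's SetProperties call.
        var configuration = new PropertySet();
        configuration.Add("ModelName", "mosaic.onnx"); // hypothetical key/value

        // The class ID must match the runtime class exported by the referenced .winmd.
        var definition = new VideoEffectDefinition(
            "StyleTransferEffectComponent.StyleTransferEffect", configuration);
        await mediaCapture.AddVideoEffectAsync(definition, MediaStreamType.VideoPreview);

        // Assign the returned MediaCapture to a CaptureElement.Source and call
        // StartPreviewAsync() to see the styled preview.
        return mediaCapture;
    }
}
```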

## Requirements

## Contributing

We're always looking for your help to fix bugs and improve the samples. Create a pull request, and we'll be happy to take a look.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

## License

MIT. See LICENSE file.

The machine learning models in this sample are used with permission from Justin Johnson. For additional information on these models, refer to: