# Custom Tensorization Sample

## Overview
This sample shows how to tensorize an input image with the WinML public APIs, on both the CPU and the GPU.
## Background
Typically, when binding an input image, we would:
- Load the image from disk.
- Convert the input to a SoftwareBitmap (a CPU resource) or an IDXGISurface (a GPU resource).
- Convert to a VideoFrame.
- Convert to an ImageFeatureValue.
- Use the VideoFrame or ImageFeatureValue as the bind object.
Along this path, WinML uses internal APIs to perform the CPU/GPU tensorization.
However, in some cases:
- Applications do not need a VideoFrame; they can place the input image into a CPU/GPU resource directly.
- Applications want to preprocess the input data during tensorization, for example normalizing pixel values from the range 0-255 to the range 0-1.

This sample therefore demonstrates how to manually tensorize input image data with the WinML public APIs.
## Assumptions
- This sample assumes a model (fns-candy.onnx) that takes its input in BGR format.
- This sample assumes the pixel values are in the range 0-255.
- d3dx12.h needs to be downloaded from Microsoft/DirectX-Graphics-Samples and is included by helper.cpp. The Windows SDK contains the Direct3D headers but not the d3dx12.h helper header.
## Steps to run the sample
- Load `CustomTensorization.sln` into Visual Studio.
- Build and run the solution.
- Check the output images, named `output_cpu.png` and `output_gpu.png`, in the same folder.
To better understand how to tensorize manually on the CPU or GPU, read through the comments and code in `helper.*`.
## Code Understanding
This sample consists of two main chunks of code: tensorization on the CPU and tensorization on the GPU. To make it runnable, we leveraged the tutorial on how to write a WinML desktop application.
- `main.cpp` follows the tutorial on the public documentation to create a Windows Machine Learning desktop application.

  ```cpp
  void BindModel(VideoFrame imageFrame, string deviceName)
  ```

  Inside the function `BindModel`, we can specify the device on which to tensorize:

  ```cpp
  if ("GPU" == deviceName)
  {
      deviceKind = LearningModelDeviceKind::DirectX;
      inputTensor = TensorizationHelper::LoadInputImageFromGPU(imageFrame.SoftwareBitmap());
  }
  else
  {
      deviceKind = LearningModelDeviceKind::Default;
      inputTensor = TensorizationHelper::LoadInputImageFromCPU(imageFrame.SoftwareBitmap());
  }
  ```
  The outputs of the model are saved to disk for inspection.
- `TensorConvertor.cpp` implements tensorization on both the CPU and the GPU:

  ```cpp
  winrt::Windows::AI::MachineLearning::TensorFloat SoftwareBitmapToSoftwareTensor(
      winrt::Windows::Graphics::Imaging::SoftwareBitmap softwareBitmap);
  winrt::Windows::AI::MachineLearning::TensorFloat SoftwareBitmapToDX12Tensor(
      winrt::Windows::Graphics::Imaging::SoftwareBitmap softwareBitmap);
  ```