Update DML Nuget version and DML EP Doc (#3945)

Update DML Nuget version and DML EP Doc
Jeff Bloomfield 2020-05-14 17:33:46 -07:00 committed by GitHub
Parent 782c6c24b2
Commit e6da5946d1
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
4 changed files with 11 additions and 10 deletions

cmake/external/dml.cmake vendored

@@ -20,7 +20,7 @@ if (NOT onnxruntime_USE_CUSTOM_DIRECTML)
set(NUGET_CONFIG ${PROJECT_SOURCE_DIR}/../NuGet.config)
set(PACKAGES_CONFIG ${PROJECT_SOURCE_DIR}/../packages.config)
get_filename_component(PACKAGES_DIR ${CMAKE_CURRENT_BINARY_DIR}/../packages ABSOLUTE)
-set(DML_PACKAGE_DIR ${PACKAGES_DIR}/DirectML.0.0.4)
+set(DML_PACKAGE_DIR ${PACKAGES_DIR}/DirectML.2.1.0)
# Restore nuget packages, which will pull down the DirectML redist package
add_custom_command(


@@ -1,4 +1,4 @@
-# DirectML Execution Provider (Preview)
+# DirectML Execution Provider
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.
@@ -6,11 +6,11 @@ When used standalone, the DirectML API is a low-level DirectX 12 library and is
The *DirectML Execution Provider* is an optional component of ONNX Runtime that uses DirectML to accelerate inference of ONNX models. The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.
-The DirectML Execution Provider is currently in preview.
+The DirectML Execution Provider currently uses DirectML version 2.1.0.
## Table of contents
-- [DirectML Execution Provider (Preview)](#directml-execution-provider-preview)
+- [DirectML Execution Provider](#directml-execution-provider)
- [Table of contents](#table-of-contents)
- [Minimum requirements](#minimum-requirements)
- [Building from source](#building-from-source)
@@ -48,7 +48,7 @@ To build onnxruntime with the DML EP included, supply the `--use_dml` parameter
The DirectML execution provider supports building for both x64 (default) and x86 architectures.
-Note that building onnxruntime with the DirectML execution provider enabled causes the DirectML redistributable package to be automatically downloaded as part of the build. This package contains a pre-release version of DirectML, and its use is governed by a license whose text may be found as part of the NuGet package.
+Note that building onnxruntime with the DirectML execution provider enabled causes the DirectML redistributable package to be automatically downloaded as part of the build. Its use is governed by a license whose text may be found as part of the NuGet package.
@@ -83,7 +83,7 @@ Creates a DirectML Execution Provider using the given DirectML device, and which
### ONNX opset support
-The DirectML execution provider currently supports ONNX opset 9 ([ONNX v1.4](https://github.com/onnx/onnx/releases/tag/v1.4.0)). Evaluating models which require a higher opset version is not supported, and may produce unexpected results.
+The DirectML execution provider currently supports ONNX opset 11 ([ONNX v1.6](https://github.com/onnx/onnx/releases/tag/v1.6.0)). Evaluating models which require a higher opset version is not supported, and may produce unexpected results.
### Multi-threading and supported session options
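
Not part of this commit's diff, but for orientation: a minimal sketch of registering the DirectML execution provider on an inference session through the C++ API. The `OrtSessionOptionsAppendExecutionProvider_DML` factory (declared in `dml_provider_factory.h`) and the `model.onnx` path are assumptions for illustration; the session option calls reflect the DML EP's stated requirement that memory pattern optimization be disabled and execution be sequential.

```cpp
// Sketch only, not taken from this commit. Assumes the DML EP's C factory
// function OrtSessionOptionsAppendExecutionProvider_DML and a placeholder model path.
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-example");
  Ort::SessionOptions options;

  // The DML EP requires memory pattern optimization to be disabled and
  // sequential (non-parallel) execution.
  options.DisableMemPattern();
  options.SetExecutionMode(ORT_SEQUENTIAL);

  // Append the DirectML execution provider; device 0 is the default DirectX 12 adapter.
  Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(options, 0));

  Ort::Session session(env, L"model.onnx", options);  // placeholder model path
  return 0;
}
```

A richer factory that accepts an explicit DirectML device and command queue (the one referenced in the hunk above) also exists; the integer-overload shown here simply selects an adapter by index.
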
@@ -114,8 +114,9 @@ The DirectML execution provider works most efficiently when tensor shapes are kn
Normally when the shapes of model inputs are known during session creation, the shapes for the rest of the model are inferred by OnnxRuntime when a session is created. However if a model input contains a free dimension (such as for batch size), steps must be taken to retain the above performance benefits.
-In this case, there are two options:
-- Edit the model to replace an input's free dimension (specified through ONNX using "dim_param") with a fixed size.
+In this case, there are three options:
+- Edit the model to replace an input's free dimension (specified through ONNX using "dim_param") with a fixed size (specified through ONNX using "dim_value").
+- Specify values of named dimensions within model inputs when creating the session using the OnnxRuntime *AddFreeDimensionOverrideByName* ABI.
- Edit the model to ensure that an input's free dimension has a [denotation](https://github.com/onnx/onnx/blob/master/docs/DimensionDenotation.md) (such as "DATA_BATCH," or a custom denotation). Then when creating the session, specify the dimension size for each denotation. This can be done using the OnnxRuntime *AddFreeDimensionOverride* ABI.
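
Also not part of this diff: a minimal sketch of the second and third options listed above, using the C++ `SessionOptions` wrappers around the *AddFreeDimensionOverrideByName* and *AddFreeDimensionOverride* ABIs. The dim_param name `batch_size` is hypothetical; `DATA_BATCH` is the denotation named above.

```cpp
// Sketch only, not taken from this commit. "batch_size" is a hypothetical
// dim_param name; DATA_BATCH is the denotation mentioned in the list above.
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "free-dims-example");
  Ort::SessionOptions options;

  // Option 2: pin a named free dimension (ONNX "dim_param") to a fixed size.
  options.AddFreeDimensionOverrideByName("batch_size", 1);

  // Option 3: pin every input dimension carrying the DATA_BATCH denotation.
  options.AddFreeDimensionOverride("DATA_BATCH", 1);

  Ort::Session session(env, L"model.onnx", options);  // placeholder model path
  return 0;
}
```
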


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="DirectML" version="0.0.4" targetFramework="native" />
<package id="DirectML" version="2.1.0" targetFramework="native" />
<package id="GoogleTestAdapter" version="0.17.1" targetFramework="net46" />
</packages>


@@ -196,7 +196,7 @@ def generate_files(list, args):
'" target="runtimes\\win-' + args.target_architecture + '\\native" />')
files_list.append('<file src=' + '"' + os.path.join(args.native_build_path, 'DirectML.pdb') +
'" target="runtimes\\win-' + args.target_architecture + '\\native" />')
-files_list.append('<file src=' + '"' + os.path.join(args.packages_path, 'DirectML.0.0.4\\LICENSE.txt') +
+files_list.append('<file src=' + '"' + os.path.join(args.packages_path, 'DirectML.2.1.0\\LICENSE.txt') +
'" target="DirectML_LICENSE.txt" />')
if includes_winml: