Commit graph

54 commits

Author SHA1 Message Date
Changming Sun 88676e62b9
Remove nsync (#20413)
### Description
1. Remove the onnxruntime::OrtMutex class and replace it with
~absl::Mutex~ std::mutex.
2. After this change, most source files will not include <Windows.h>
indirectly.


### Motivation and Context
To reduce the number of deps we have, and to address some GitHub issues
related to building ONNX Runtime from source.
In PR #3000, I added a custom implementation of std::mutex, mainly
because at that time std::mutex's default constructor was not trivial on
Windows: if you had such a mutex as a global var, it could not be
initialized at compile time. The VC++ team has since fixed this issue,
so we don't need the custom implementation anymore.
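As a minimal sketch (not code from this PR), the point is that std::mutex now has a constexpr default constructor on all supported compilers, so a global mutex is constant-initialized and no custom wrapper is needed:

```cpp
// Minimal sketch, not taken from the PR: a namespace-scope std::mutex is
// constant-initialized (constexpr default constructor), so there is no
// static-initialization-order problem and no need for a custom OrtMutex.
#include <iostream>
#include <mutex>

std::mutex g_counter_mutex;  // initialized at compile time
int g_counter = 0;

int main() {
  {
    std::lock_guard<std::mutex> lock(g_counter_mutex);
    ++g_counter;
  }
  std::cout << "counter = " << g_counter << '\n';
  return 0;
}
```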

This PR also removes nsync. I ran several model tests on Linux and
didn't see any perf difference.
This PR also reverts PR #21005, which is no longer needed since conda
has updated its MSVC runtime DLL.

This PR unblocks #22173 and resolves #22092. We have a lot of open
issues with nsync; this PR can resolve all of them.
2024-10-21 15:32:14 -07:00
mindest 5b9369e93c
Fix typos according to reviewdog report. (#21335)
### Description
Fix typos based on reviewdog report but with some
exceptions/corrections.
2024-07-22 13:37:32 -07:00
Andrew Grigorev a6611409cc
Fix HalideIR title in third party notices reference (#20190) 2024-04-05 11:12:43 -07:00
Changming Sun 04afe77305
Update ThirdPartyNotices.txt: Add Intel neural-speed (#19332)
Add Intel neural-speed to ThirdPartyNotices.txt because it will be
shipped in the default build in most of our packages.
2024-01-30 12:40:30 -08:00
aciddelgado 44101e8771
Flash Attention v2 MHA (#17227)
### Description
Integrate Flash Attention V2 into the PackedMultiHeadAttention,
MultiHeadAttention and Attention operators.

Flash Attention v2 source code is from
https://github.com/Dao-AILab/flash-attention/tree/main/csrc/flash_attn/src.
We made some changes to remove the dependency on Torch, then removed
backward and bfloat16 related code.

Add a benchmark script (see benchmark_mha.sh) to compare different
attention kernels for the MultiHeadAttention operator.

Current limitations for Flash Attention in PackedMultiHeadAttention,
MultiHeadAttention and Attention operators:
* Relative Position Bias is not supported
* Different hidden size for Q and V is not supported
* Only float16 is supported
* Padding/attention mask is not supported
* For MultiHeadAttention, when there is past or present input, bias
shall be provided to activate flash attention
* For Attention, past or present inputs will deactivate flash attention
* Causal is not supported

Some limitations (like attention mask and causal) might be removed
later.

Currently, Flash Attention v2 only works on Linux. For Windows, we will
enable it later with Cutlass 3.2.

Two environment variables can be used for testing purposes:
(1) `ORT_DISABLE_FLASH_ATTENTION` to disable flash attention. The default
value is 0 (enabled). Set it to "1" to disable it.
(2) `ORT_MIN_SEQ_LEN_FLASH_ATTENTION_PACKED_QKV`. The default value is
"513", which means we only enable flash attention when the sequence
length is larger than 512 for the packed QKV format. Set it to "0" if you
want to use flash attention v2 whenever possible.
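A hedged sketch (hypothetical helper names, not the actual ORT code) of how these two environment variables could gate the flash attention v2 path:

```cpp
// Sketch only: reads the two environment variables described above and
// decides whether flash attention v2 should be used for packed QKV.
#include <cstdlib>
#include <string>

static bool FlashAttentionDisabled() {
  const char* v = std::getenv("ORT_DISABLE_FLASH_ATTENTION");
  return v != nullptr && std::string(v) == "1";  // default: 0 (enabled)
}

static int MinSeqLenForPackedQkv() {
  const char* v = std::getenv("ORT_MIN_SEQ_LEN_FLASH_ATTENTION_PACKED_QKV");
  return v != nullptr ? std::atoi(v) : 513;  // default: 513, i.e. seq_len > 512
}

// Hypothetical helper: would be called by the packed-QKV attention path.
bool UseFlashAttentionForPackedQkv(int sequence_length) {
  return !FlashAttentionDisabled() && sequence_length >= MinSeqLenForPackedQkv();
}
```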

### Speedup

The following results are from a Standard_ND96amsr_A100_v4 VM
(A100-SXM4-80GB GPU) using benchmark_mha.sh. The metric is TFLOPs per
second for the MultiHeadAttention operator.

There are 3 input formats (see the small sketch after this list):
* `Q,K,V` means separate query, key and value inputs of shape BxSxNH
* `Q,KV` means packed KV, where the key input is 5D: BxSxNx2xH
* `QKV` means packed QKV, where the query input is 5D: BxSxNx3xH
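To make the three formats concrete, here is a small sketch (shapes taken from the descriptions above; B = batch, S = sequence length, N = heads, H = head dim):

```cpp
// Prints the tensor dimensions implied by each input format for an example
// configuration. Illustration only; not part of the benchmark script.
#include <cstdio>

int main() {
  const int B = 4, S = 4096, N = 32, H = 128;
  std::printf("Q,K,V: three tensors, each B x S x N*H = %d x %d x %d\n", B, S, N * H);
  std::printf("Q,KV : K and V packed into one 5D tensor B x S x N x 2 x H = %d x %d x %d x 2 x %d\n", B, S, N, H);
  std::printf("QKV  : Q, K and V packed into one 5D tensor B x S x N x 3 x H = %d x %d x %d x 3 x %d\n", B, S, N, H);
  return 0;
}
```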

Note that flash attention cannot use the packed QKV format, so an extra
Transpose is needed. We found that the TensorRT kernel is faster for
sequence lengths <= 512 with packed QKV, probably because no transpose is
needed for the TensorRT kernel in this format.

We also notice that the TensorRT kernel is faster for the stable
diffusion 512x512 image (see seq_len=4096, heads=8, head_dim=40 below),
while flash attention v2 is faster for the 1024x1024 image (see
seq_len=16384, heads=8, head_dim=40 below).

input format | batch size | sequence length | heads | head dim | flash_v2 (TFLOPs/s) | TensorRT (TFLOPs/s) | Memory Efficient Attention (TFLOPs/s)
-- | -- | -- | -- | -- | -- | -- | --
Q,K,V | 32 | 512 | 64 | 32 | 78.1 | 60.0 | 39.3
Q,K,V | 32 | 512 | 128 | 16 | 46.8 | 44.1 | 21.7
Q,K,V | 16 | 1024 | 64 | 32 | 99.0 | 72.8 | 44.3
Q,K,V | 16 | 1024 | 128 | 16 | 54.7 | 49.2 | 23.4
Q,K,V | 8 | 2048 | 64 | 32 | 113.8 | 81.2 | 47.8
Q,K,V | 8 | 2048 | 128 | 16 | 59.7 | 51.9 | 24.7
Q,K,V | 4 | 4096 | 64 | 32 | 122.5 | 85.6 | 49.7
Q,K,V | 4 | 4096 | 128 | 16 | 62.5 | 53.3 | 25.3
Q,K,V | 2 | 8192 | 64 | 32 | 127.4 | 87.5 | 50.7
Q,K,V | 2 | 8192 | 128 | 16 | 64.0 | 54.2 | 25.6
Q,K,V | 1 | 16384 | 64 | 32 | 129.5 | 91.0 | 51.2
Q,K,V | 1 | 16384 | 128 | 16 | 64.7 | 54.5 | 25.8
Q,K,V | 1 | 4096 | 8 | 40 | 51.0 | 43.6 | 36.8
Q,K,V | 1 | 4096 | 8 | 80 | 97.7 | 77.0 | 55.5
Q,K,V | 1 | 4096 | 8 | 160 | 120.0 | 39.7 | 57.8
Q,K,V | 4 | 4096 | 8 | 40 | 89.0 | 84.4 | 49.2
Q,K,V | 4 | 4096 | 8 | 80 | 133.0 | 92.2 | 63.2
Q,K,V | 4 | 4096 | 8 | 160 | 164.8 | 42.7 | 63.8
Q,K,V | 1 | 16384 | 8 | 40 | 96.9 | 91.3 | 52.1
Q,K,V | 1 | 16384 | 8 | 80 | 142.9 | 101.5 | 65.6
Q,K,V | 1 | 16384 | 8 | 160 | 177.4 | 44.2 | 65.7
Q,K,V | 128 | 128 | 12 | 64 | 29.0 | 26.9 | 25.7
Q,K,V | 64 | 128 | 12 | 64 | 23.1 | 10.8 | 21.3
Q,K,V | 128 | 384 | 12 | 64 | 83.5 | 60.8 | 55.7
Q,K,V | 64 | 384 | 12 | 64 | 72.6 | 40.5 | 52.8
Q,K,V | 128 | 512 | 12 | 64 | 98.9 | 77.9 | 62.1
Q,K,V | 64 | 512 | 12 | 64 | 94.7 | 75.6 | 60.4
Q,KV | 32 | 512 | 64 | 32 | 85.9 | 41.1 | 41.1
Q,KV | 32 | 512 | 128 | 16 | 47.1 | 21.6 | 21.6
Q,KV | 16 | 1024 | 64 | 32 | 104.4 | 45.8 | 45.8
Q,KV | 16 | 1024 | 128 | 16 | 54.7 | 23.6 | 23.6
Q,KV | 8 | 2048 | 64 | 32 | 116.8 | 48.5 | 48.5
Q,KV | 8 | 2048 | 128 | 16 | 59.8 | 24.7 | 24.7
Q,KV | 4 | 4096 | 64 | 32 | 124.2 | 50.1 | 50.1
Q,KV | 4 | 4096 | 128 | 16 | 62.6 | 25.3 | 25.3
Q,KV | 2 | 8192 | 64 | 32 | 128.5 | 50.8 | 50.9
Q,KV | 2 | 8192 | 128 | 16 | 64.1 | 25.6 | 25.6
Q,KV | 1 | 16384 | 64 | 32 | 129.4 | 51.2 | 51.2
Q,KV | 1 | 16384 | 128 | 16 | 64.8 | 25.8 | 25.8
Q,KV | 1 | 4096 | 8 | 40 | 67.5 | 37.7 | 37.5
Q,KV | 1 | 4096 | 8 | 80 | 101.3 | 56.7 | 56.6
Q,KV | 1 | 4096 | 8 | 160 | 124.0 | 58.6 | 58.6
Q,KV | 4 | 4096 | 8 | 40 | 90.8 | 49.8 | 49.8
Q,KV | 4 | 4096 | 8 | 80 | 135.6 | 63.8 | 63.8
Q,KV | 4 | 4096 | 8 | 160 | 166.3 | 64.5 | 64.5
Q,KV | 1 | 16384 | 8 | 40 | 97.5 | 52.3 | 52.3
Q,KV | 1 | 16384 | 8 | 80 | 143.5 | 65.9 | 65.8
Q,KV | 1 | 16384 | 8 | 160 | 178.4 | 65.9 | 65.8
Q,KV | 128 | 128 | 12 | 64 | 26.8 | 48.1 | 30.9
Q,KV | 64 | 128 | 12 | 64 | 28.0 | 38.9 | 25.0
Q,KV | 128 | 384 | 12 | 64 | 97.7 | 61.1 | 61.0
Q,KV | 64 | 384 | 12 | 64 | 89.5 | 57.8 | 57.9
Q,KV | 128 | 512 | 12 | 64 | 111.9 | 66.7 | 66.9
Q,KV | 64 | 512 | 12 | 64 | 107.2 | 64.9 | 64.8
QKV | 32 | 512 | 64 | 32 | 77.2 | 84.7 | 39.3
QKV | 32 | 512 | 128 | 16 | 43.4 | 53.1 | 20.9
QKV | 16 | 1024 | 64 | 32 | 98.8 | 87.4 | 44.6
QKV | 16 | 1024 | 128 | 16 | 52.0 | 54.1 | 23.2
QKV | 8 | 2048 | 64 | 32 | 113.1 | 89.0 | 47.9
QKV | 8 | 2048 | 128 | 16 | 58.2 | 54.6 | 24.5
QKV | 4 | 4096 | 64 | 32 | 120.6 | 89.7 | 49.7
QKV | 4 | 4096 | 128 | 16 | 61.7 | 54.6 | 25.2
QKV | 2 | 8192 | 64 | 32 | 125.9 | 89.5 | 50.7
QKV | 2 | 8192 | 128 | 16 | 63.6 | 54.8 | 25.5
QKV | 1 | 16384 | 64 | 32 | 128.5 | 92.0 | 51.2
QKV | 1 | 16384 | 128 | 16 | 64.6 | 54.8 | 25.7
QKV | 1 | 4096 | 8 | 40 | 60.2 | **69.8** | 38.1
QKV | 1 | 4096 | 8 | 80 | 101.6 | 75.2 | 56.7
QKV | 1 | 4096 | 8 | 160 | 130.2 | 41.2 | 58.4
QKV | 4 | 4096 | 8 | 40 | 90.6 | **91.0** | 49.5
QKV | 4 | 4096 | 8 | 80 | 133.6 | 98.1 | 62.8
QKV | 4 | 4096 | 8 | 160 | 165.3 | 43.7 | 63.9
QKV | 1 | 16384 | 8 | 40 | 97.2 | 92.8 | 52.1
QKV | 1 | 16384 | 8 | 80 | 143.0 | 103.1 | 65.6
QKV | 1 | 16384 | 8 | 160 | 177.6 | 44.5 | 65.7
QKV | 128 | 128 | 12 | 64 | 31.1 | 65.9 | 27.6
QKV | 64 | 128 | 12 | 64 | 26.1 | 49.8 | 23.5
QKV | 128 | 384 | 12 | 64 | 84.6 | 88.5 | 56.1
QKV | 64 | 384 | 12 | 64 | 79.1 | 80.3 | 53.5
QKV | 128 | 512 | 12 | 64 | 97.3 | 114.2 | 62.2
QKV | 64 | 512 | 12 | 64 | 95.9 | 110.7 | 60.6
QKV | 4 | 2048 | 32 | 128 | 125.26 | 44.72 | 78.15
QKV | 4 | 4096 | 32 | 128 | 141.62 | 46.29 | 85.84
QKV | 8 | 2048 | 32 | 128 | 127.40 | 45.49 | 78.75
QKV | 8 | 4096 | 32 | 128 | 144.24 | 46.60 | 86.95

### Known Issues

NVCC uses a huge amount of memory while compiling the flash attention
CUDA kernels. A Linux build with CUDA might fail when the machine has
limited memory but a large number of CPUs. The workaround is to use a
build machine with more memory, or to pass an argument like
`--nvcc_threads 1` to limit the number of nvcc threads in the build.

### Motivation and Context
Increases speed and efficiency of MHA or Packed MHA.

---------

Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: tlwu@microsoft.com <tlwu@a100.crj0ad2y1kku1j4yxl4sj10o4e.gx.internal.cloudapp.net>
2023-08-31 13:52:21 -07:00
cloudhan bf8b1681f9
Build nuget pkg for ROCm (#16791)
Add nuget pkg building and publishing for ROCm EP

---------
Co-authored-by: Yi Zhang <zhanyi@microsoft.com>
2023-08-28 13:35:08 +08:00
Wang, Mengni fe463d4957
Support SmoothQuant for ORT static quantization (#16288)
### Description

Support SmoothQuant for ORT static quantization via Intel Neural
Compressor.

> Note:
Please use neural-compressor==2.2 to try the SmoothQuant feature.

### Motivation and Context
For large language models (LLMs) with a gigantic number of parameters,
systematic outliers make quantization of activations difficult. As a
training-free post-training quantization (PTQ) solution, SmoothQuant
migrates this difficulty offline from activations to weights with a
mathematically equivalent transformation. Integrating SmoothQuant into
ORT quantization can benefit the accuracy of INT8 LLMs.
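For reference, the core transformation from the SmoothQuant paper (not code in this PR) can be written as:

```latex
% SmoothQuant: migrate activation outliers into the weights with a per-channel
% scale s, chosen by migration strength alpha, leaving the output unchanged.
\[
Y = XW = \bigl(X\,\operatorname{diag}(s)^{-1}\bigr)\bigl(\operatorname{diag}(s)\,W\bigr)
       = \hat{X}\hat{W},
\qquad
s_j = \frac{\max\bigl(\lvert X_j \rvert\bigr)^{\alpha}}{\max\bigl(\lvert W_j \rvert\bigr)^{1-\alpha}}.
\]
```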

---------

Signed-off-by: Mengni Wang <mengni.wang@intel.com>
2023-07-26 18:56:45 -07:00
RandySheriffH 7c4e8267e7
Implement openAI endpoint invoker for nuget (#15797)
Implement the OpenAI audio endpoint, and enable nuget packaging.

---------

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
2023-05-11 22:04:02 -07:00
Yulong Wang 14cc02c65c
[js/web] WebGPU backend via JSEP (#14579)
### Description
This change introduced the following new components into ONNX Runtime
Web:
- JavaScript Execution Provider (JSEP)
  - Asynchronous inference execution powered by Emscripten's Asyncify
- WebGPU backend implemented in TypeScript
  - initial implementation of kernels:
    - elementwise operators (22)
    - binary operators (5)
    - tensor: Shape, Reshape, Transpose, Gemm
    - nn: Conv, {Global}Maxpool, {Global}AveragePool


Code needs to be polished; still working on it.

## Q&A
What is JSEP?
> JSEP, aka JavaScript Execution Provider, is a new ONNX Runtime
execution provider that specifically targets web environments
(browsers). JSEP allows JavaScript code to kick in from various places
when ONNX Runtime runs inference on a model.

Why JSEP?
> JSEP is a hybrid-mode EP that contains both a C/C++ and a
TypeScript/JavaScript implementation. There are 2 strong reasons why we
introduce JSEP:
> 1. The C/C++ part helps JSEP leverage ONNX Runtime's capabilities as
much as possible, including the graph transformers, optimizers, and the
ability to fall back to the CPU EP. TypeScript/JavaScript makes the
kernel implementation much easier to develop and debug in the browser.
> 2. The requirement for asynchronous execution from the JavaScript API
(e.g. `buffer.mapAsync()`) makes it impossible to run `OrtRun()` in a
synchronous context (see the "async problem" section below). This is
handled by using Emscripten's Asyncify.

What is WebGPU?
> WebGPU is the new GPU API available in browsers. It's one of only 2
APIs currently available for accessing the GPU from the browser (the
other is WebGL).
> WebGPU is designed with more advanced and stronger features compared
to WebGL and is potentially the solution that offers the best GPU
performance currently available for model inferencing.

What is the async problem, and why do we have it?
> The "async problem" is that you cannot call an async function from a
synchronous context. Think about the following C++ code:
> ```c
> // C-style declarations (API)
> typedef void (*ON_COMPLETE)(PVOID state, DATA *data);
> void read_data_from_file(FILEHANDLE file, ON_COMPLETE on_complete);
> 
> // implementation
> DATA * my_impl_read_data_from_file_sync(FILEHANDLE file) {
>   // how to implement?
> }
> ```
> The answer is, it's impossible to implement this function. Usually we
either find a sync version of the API, or launch a thread to call the
async function and sync-wait on the main thread. Unfortunately, in a
browser environment, neither is possible.
>
> WebGPU does not offer any synchronous API for data downloading (GPU
to CPU). This is the only operation that MUST be async. Since `OrtRun()`
will eventually call into DataTransfer to copy data from GPU to CPU,
and `OrtRun()` is a synchronous function, this cannot be done in the
normal way.

What is Emscripten? How does the Asyncify feature resolve the problem?
> Emscripten is the C/C++ compiler for WebAssembly. It's what we use to
compile ORT and generate the WebAssembly artifacts that run in browsers.
>
> Asyncify is a [compiler
feature](https://emscripten.org/docs/porting/asyncify.html) that allows
calling async functions from a synchronous context. In short, it
generates code to unwind and rewind the call stack to emulate async
execution. With this feature, we are able to call async functions inside
the `OrtRun()` call.
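As a hedged illustration (not code from this PR), the canonical Asyncify example is `emscripten_sleep`: synchronous-looking C++ that suspends to the browser event loop and resumes, when the module is built with `-sASYNCIFY`:

```cpp
// Build with: emcc asyncify_demo.cc -sASYNCIFY -o demo.html
#include <emscripten.h>
#include <cstdio>

int main() {
  std::puts("before sleep (synchronous C++ code)");
  // Asyncify unwinds the call stack here, returns control to the browser
  // event loop, and rewinds the stack to resume 100 ms later.
  emscripten_sleep(100);
  std::puts("after sleep");
  return 0;
}
```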

## Design Overview

**Inter-op**

JSEP does pretty much the same thing as any other EP. It exposes an
interface for inter-op with JavaScript, which is defined in
onnxruntime/wasm/js_internal_api.js:
```js
// init JSEP
Module["jsepInit"] = function (backend, alloc, free, copy, copyAsync, createKernel, releaseKernel, run) {
    Module.jsepBackend = backend;
    Module.jsepAlloc = alloc;
    Module.jsepFree = free;
    Module.jsepCopy = copy;
    Module.jsepCopyAsync = copyAsync;
    Module.jsepCreateKernel = createKernel;
    Module.jsepReleaseKernel = releaseKernel;
    Module.jsepRun = run;
};
```
This simple JavaScript snippet defines all the language-boundary
functions required by JSEP to implement kernels and data transfers in
JavaScript inside ONNX Runtime:
- `jsepBackend`: assigns the singleton backend object to the WebAssembly module
- `jsepAlloc` and `jsepFree`: implementations of the data transfer's Alloc()
and Free()
- `jsepCopy`: synchronous copy (GPU to GPU, CPU to GPU)
- `jsepCopyAsync`: asynchronous copy (GPU to CPU)
- `jsepCreateKernel` and `jsepReleaseKernel`: a corresponding object
maintained in JS to match the lifecycle of a Kernel in ORT
- `jsepRun`: OpKernel::Compute() should call into this

The abstraction above keeps the connections and dependencies between
C/C++ and TypeScript/JavaScript to a minimum.

**Resource Management**

The lifecycle of tensor data and kernels is managed by ORT (C/C++), but
the implementation is left to JavaScript. The JavaScript code is
responsible for implementing the callbacks correctly.

For WebGPU, the GPU data is managed by JavaScript using a singleton map
(tensor_data_id => GPUBuffer). The GPU pipeline is managed as a singleton.
Shaders are managed using a singleton map (shader_key => gpu_program),
where shader_key is generated from the cache_key (OP specific, including
attributes) and the input shapes.

**about data transfer**
`js::DataTransfer::CopyTensor` is implemented to call either the
synchronous or the asynchronous copy callback, depending on whether the
destination is GPU or not. Emscripten's macro `EM_ASYNC_JS` is used to
wrap the async function so it can be called in a synchronous context.
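A hedged sketch (hypothetical names and wiring, not the actual ORT implementation) of how `EM_ASYNC_JS` can expose an async JavaScript copy as a call that looks synchronous from C++ under Asyncify:

```cpp
#include <emscripten.h>
#include <cstddef>

// The JS body may await; the C++ caller sees an ordinary blocking function
// when the module is built with -sASYNCIFY. Module.jsepCopyAsync stands in
// for the GPU-to-CPU copy callback registered via jsepInit above.
EM_ASYNC_JS(void, jsep_copy_gpu_to_cpu, (size_t src_id, void* dst, size_t bytes), {
  await Module.jsepCopyAsync(src_id, dst, bytes);
});

// Hypothetical helper mirroring the async branch of a CopyTensor implementation.
void CopyGpuTensorToCpu(size_t gpu_data_id, void* cpu_dst, size_t bytes) {
  jsep_copy_gpu_to_cpu(gpu_data_id, cpu_dst, bytes);  // looks synchronous here
}
```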

**run kernel in JS**

The kernel class constructor calls `jsepCreateKernel()` once, with an
optional per-kernel-specific serialization to pass attributes into
JavaScript.

`Compute()` is implemented in such a way that metadata serialization is
performed in a base class, and JavaScript code can access the data using
the Emscripten-specific builtin macro `EM_ASM_*`.
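A hedged sketch (assumed names, not the actual ORT code) of how a base-class `Compute()` can hand off to the JavaScript kernel via `EM_ASM_INT`:

```cpp
#include <emscripten.h>

// Returns the status code produced by the JavaScript kernel implementation.
// Module.jsepRun stands in for the callback registered via jsepInit above;
// the metadata pointer refers to serialized attributes/inputs in the wasm
// heap that the JS side knows how to decode.
int RunJsKernel(void* kernel_handle, void* serialized_metadata) {
  return EM_ASM_INT({
    return Module.jsepRun($0, $1);  // $0 = kernel handle, $1 = metadata pointer
  }, kernel_handle, serialized_metadata);
}
```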

**disabled features**
Memory pattern is force-disabled, because WebGPU data is not backed by a
general memory model (where a buffer can be represented by offset +
size).
Concurrent run support is disabled. WebGPU is stateful and it also has
async function calls. Supporting concurrent runs would significantly
increase the complexity, and we don't get any real benefit from it.

**prefer channels last**
JSEP prefers channels-last and returns `DataLayout::NHWC` from the method
`GetPreferredLayout()`. This lets the graph transformers preprocess the
graph into a channels-last form so that more optimized WebGPU shaders can
be used.
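A tiny, self-contained mock (ORT's real types differ) of what advertising a channels-last preference through `GetPreferredLayout()` looks like:

```cpp
#include <iostream>

enum class DataLayout { NCHW, NHWC };  // stand-in for onnxruntime's enum

struct JsepLikeProvider {
  // The real EP overrides IExecutionProvider::GetPreferredLayout(); shown
  // here as a plain member function for illustration only.
  DataLayout GetPreferredLayout() const { return DataLayout::NHWC; }
};

int main() {
  JsepLikeProvider ep;
  std::cout << (ep.GetPreferredLayout() == DataLayout::NHWC ? "NHWC preferred\n"
                                                            : "NCHW preferred\n");
  return 0;
}
```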

**Testing code**
It's impossible to test JSEP directly because JSEP itself does not
contain any kernel implementation. However, it has the kernel
registrations, which need to work together with the corresponding
JavaScript code. There are unit tests that run ONNX models from the
JavaScript API.

---------

Co-authored-by: Scott McKay <skottmckay@gmail.com>
2023-04-24 15:21:18 -07:00
RandySheriffH b6bec54341
Revert mimalloc from v2.0.9 to v2.0.3 (#14603)
Revert mimalloc from v2.0.9 to v2.0.3 to silence build error in
[post-merge
](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=273075&view=logs&j=f019f681-ae8f-5ee4-d119-02530df66a84&t=6c90c65c-2ab2-56af-633f-b5631256a8e1&l=351)
pipeline.
New dependency version was generated
[here](https://aiinfra.visualstudio.com/Lotus/_artifacts/feed/Lotus/UPack/onnxruntime_build_dependencies/overview/1.0.29).

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
Co-authored-by: rui-ren <ruiren1225@gmail.com>
2023-02-07 09:58:25 -08:00
RandySheriffH 01cafe89f0
Specify deps in deps.txt and manifest (#14530)
Specify new deps and update cgmanifest.json.

---------

Co-authored-by: Randy Shuai <rashuai@microsoft.com>
2023-02-02 09:44:57 -08:00
Tianlei Wu 414b012f42
Add memory efficient attention from CUTLASS (#14343)
### Description
Add memory efficient attention from CUTLASS.

TODO (in next pull request): 
(1) Need performance tests on different GPUs, then add a sequence length
threshold (only activate it for long sequence length).
(2) Merge changes from https://github.com/NVIDIA/cutlass/pull/773 when
it is in cutlass master.
2023-01-20 12:33:01 -08:00
Rui Ren c4e693c4b7
update gsl-lite license (#14318)
### Description
- Update gsl-lite license with MS GSL's License




### Motivation and Context
- Work Item:
https://aiinfra.visualstudio.com/ONNX%20Runtime/_workitems/edit/10175
- Release ORT 1.14.0
2023-01-18 15:49:13 -08:00
JiCheng 4f309f05ca
[CPU] Resize of Opset 18 (#13890)
### Description

Implement Resize for opset 18.
This PR depends on https://github.com/microsoft/onnxruntime/pull/13765.

### Motivation and Context
2023-01-14 08:57:23 +10:00
Scott McKay b9ecd428c1
Add ability to register custom ops by specifying a function name (#14177)
### Description
Use dlsym/GetProcAddress to look up a custom ops registration function
by name and call it.

This works better on mobile platforms, where the custom ops library is
linked against the application and there isn't necessarily a filesystem
from which a library path can be loaded.

The alternative is to wire up passing in the address of the function, but
that has multiple complications which differ by platform.
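A hedged sketch (hypothetical function type and names, not the ORT implementation) of the lookup-by-name idea using dlsym; GetProcAddress would play the same role on Windows:

```cpp
// Looks up a custom-ops registration function by name in the already-linked
// binary and calls it, so no library path / filesystem access is required.
#include <dlfcn.h>
#include <stdexcept>

using RegisterCustomOpsFn = int (*)(void* session_options);  // hypothetical signature

void CallRegistrationFunctionByName(const char* fn_name, void* session_options) {
  void* self = dlopen(nullptr, RTLD_LAZY);  // handle to the main program and its linked libs
  if (self == nullptr) throw std::runtime_error("dlopen failed");
  void* sym = dlsym(self, fn_name);
  if (sym == nullptr) throw std::runtime_error("registration function not found");
  reinterpret_cast<RegisterCustomOpsFn>(sym)(session_options);
}
```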

### Motivation and Context
Enable using ort and ort-ext packages on mobile platforms.

Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
2023-01-12 15:11:34 +10:00
Guenther Schmuelling 6b8c72cfa6
pin ort-ext to 81e7799c69044c745239202085eb0a98f102937b (#14044)
Pin onnxruntime-extensions to 81e7799c69044c745239202085eb0a98f102937b in
preparation for enabling extensions in the wasm build.
2023-01-10 10:10:17 -08:00
Edward Chen 87eef1fe21
Use updated ONNX license in ThirdPartyNotices.txt. (#13919)
Use updated ONNX license in ThirdPartyNotices.txt. It got changed to the Apache license.
Copied LICENSE file content from onnx submodule at cmake/external/onnx.
2022-12-09 17:46:37 -08:00
Changming Sun b1e1b25e04
Delete CUB (#13534)
### Description
Delete CUB

### Motivation and Context
Because it is already included in the CUDA SDK.
2022-11-02 13:06:22 -07:00
Yufeng Li 0c72f1cd5a
add copyright (#9943) (#9970) 2021-12-08 14:34:53 -08:00
Changming Sun f92b8e2ac8
Clean up optional-lite references (#9534) 2021-10-25 21:05:45 -07:00
Suffian Khan 9fa0d8392a
Extend node debugging utilities to push tensors and node placement to SQL database (#8672)
* adding support for tracing to sqldb instead of files

* use compiled statements

* script to pull tensors from db

* link sqlite3

* remove node info redundant with onnx graph

* addressing PR comments

* address PR comments and include program counter

* third party notice

* use find_package

* add to cgmanifests.json

* address thread safety and add pid suffix

* build fix

* python script to select on devicetype

* remove unpopulated and redundant Shape and Type fields

* comment

* comment

* PR comments

* add graph execution counter to session state

* move increment to inference session

* std::endl to \n

* ifdef on graph execution counter

* add ifdef to inference session

* move DEBUG_NODE_INPUTS_OUTPUTS to CMakeLists.txt
2021-08-21 00:40:12 -07:00
Chen Fu df4cb6f301
Adding pytorch cpuinfo as dependency (#8178)
The pytorch cpuinfo library allows us to query current CPU features,
micro-architecture, cache size, etc. This information is needed for
targeted performance optimizations.

Unfortunately it does not work under Windows/ARM. We will need to develop
our own later.
2021-07-12 14:21:12 -07:00
Sunghoon 88d2fc8f1e
ONNX Runtime React Native Library (#7564)
* onnxruntime react native binding

* add react native backend

* fix lint comments

* fix react native backend for ios

* remove unnecessary files to check in

* move onnxruntime-common to devDependency

* create two podspec files for iphoneos and iphonesimulator

* revise README.md and add third party notices for react native

* rename a package

* rename a package and revise README

* add a license into package.json

* revise README and comments

* fix typo

* fix lint errors

* fix lint errors

* add a prepack script. touch index.tsx and App.tsx to resolve CI issue

* remove an unsupported tsx format from clang-format

* fix a typo and add steps to publish a react native npm package

* resolve comments

* fix clang format

* remove promise wrap. change prepack to typescript
2021-05-11 10:34:40 -07:00
Guoyu Wang 370f9b88c2
Enable CoreML EP for minimal extended mode (#7266)
* Enable CoreML EP for minimal extended mode

* minor code formatting

* Fix CI run failure

* Address CR comments

* remove redundant ifdef
2021-04-08 17:45:22 -07:00
Yulong Wang 405ca49012
build ONNXRuntime into WebAssembly (#6478)
* Simplified version of WebAssembly support to keep most of existing data structures and add cmake using Ninja and emcmake

* Clean up CMakeLists.txt and add an example to create and compute a kernel

* Load a model from bytes and remove graph building steps

* Add all cpu and contrib ops with mlas library

* WebAssembly build with Onnxruntime C/CXX API

* Use protobuf cmakefile directory instead of adding every necessary source file

* Fix invalid output at example

* add missing files

* Change an example to use Teams model and support ort mobile format

* add API for javascript

* fix input releasing in _ort_run()

* update API

* Let onnxruntime cmake build WebAssembly with option '--wasm'

* allow one-step building for wasm

* Make build script working on Linux and MacOS

* Fix broken build from Windows command

* Enable unit test on building WebAssembly

* Resolve comments

* update build flags

* wasm conv improvement from: 1) GemmV; 2) Depthwise direct convolution 3x3; 3) Direct convolution 3x3

* Cleaned mlas unittest.

* use glob

* update comments

* Update baseline due to loss scale fix (#6948)

* fix stream sync issue (#6954)

* Enable type reduction in EyeLike, Mod, random.cc CPU kernels. (#6960)

* Update EyeLike CPU kernel.

* Update Mod CPU kernel.

* Update Multinomial CPU kernel.

* Slight improvement to Pad CPU kernel binary size.

* Update RandomNormal[Like], RandomUniform[Like] CPU kernels.

* Fix warning from setting multiple MSVC warning level options. (#6917)

Fix warning from setting multiple MSVC warning level options. Replace an existing /Wn flag instead of always appending a new one.

* MLAS: quantized GEMM update (#6916)

Various updates to the int8_t GEMMs:

1) Add ARM64 udot kernel to take advantage of dot product instructions available in newer cores. Some models run 4x faster than the stock implementation we used before.
2) Refactor the x64 kernels to share common code for AVX2(u8u8/u8s8/avxvnni) vs AVX512(u8u8/u8s8/avx512vnni) to reduce binary size.
3) Extend kernels to support per-column zero points for matrix B. This is not currently wired to an operator.

* Implement QLinearAveragePool with unit tests. (#6896)

Implement QLinearAveragePool with unit tests.

* Attention fusion detect num_heads and hidden_size automatically (#6920)

* fixed type to experimental session constructor (#6950)

* fixed type to experimental session constructor

Co-authored-by: David Medine <david.medine@brainproducts.com>

* Update onnxruntime_perf_test.exe to accept free dimension overrides (#6962)

Co-authored-by: Ori Levari <orlevari@microsoft.com>

* Fix possible fd leak in NNAPI (#6966)

* Release buffers for prepacked tensors (#6820)

Unsolved problems:

1. One test failure was caused by a bug in the Cudnn rnn kernels: they can allocate a buffer and partially initialize it, and the garbage data near the tail of the buffer caused problems on some of the hardware. To attack this problem in a broader sense, should we add code to our allocators so that, during a memory fuzzing test, an allocated buffer is filled with garbage before being returned to the caller?


2. Prepacking is used more widely than we know. For instance, the Cudnn rnn kernels also cache their weights. They mix several weight tensors together into a single buffer and never touch the original weight tensors anymore. This is the same idea as pre-pack, but they didn't override the virtual function, and they never tried to release those weight tensors, leading to memory waste. It also seems to me that some other kernels have similar behavior. I wonder how much memory we can save if we try to clean those up too.

3. Turning off memory pattern planning does increase memory fragmentation, leading to out of memory error in some training test cases. Perhaps we can revisit the idea of pushing kernels-creation stage earlier, and then during initializer deserialization, we only avoid tracing those that will be prepacked.

* Enable type reduction for Range, ReverseSequence, ScatterND, Split, and Unique CPU kernels. (#6963)

* add CI

* fix test in ci

* fix flags for nsync in wasm build

* add copyright banner

* fix wasm source glob

* add missing exports

* resolve comments

* Perf gain from making packb width 4 instead of 16 in GEMM for WASM.
Remove the no-longer-needed direct conv from the previous perf tuning.

* fix buildbreak introduced from latest master merge

* fix buildbreak in mlasi.h

* resolve all comments except MLAS

* rewrite the 3 packb-related functions for WASM_SCALAR separately rather than using #ifdef in each,
and other changes according to PR feedback in mlas.

* More complete scalar path in sgemm from Tracy.

* Fix edge case handling in depthwise conv2d kernel 3x3, where:
  *) support input W==1 and H==1
  *) recalculate accurate pad_right and pad_bottom
  *) support hidden pad_right == 2 or pad_bottom == 2 when W == 1 or H==1 and no pad left/top

* Add more test coverage for conv depthwise from Tracy.
Fix one typo according to PR.

* resolve comments

* replace typedef by using

* do not use throw in OrtRun()

* output error message

Co-authored-by: Sunghoon <35605090+hanbitmyths@users.noreply.github.com>
Co-authored-by: Lei Zhang <zhang.huanning@hotmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: David Medine <david.eric.medine@gmail.com>
Co-authored-by: David Medine <david.medine@brainproducts.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Ori Levari <orlevari@microsoft.com>
Co-authored-by: Guoyu Wang <62914304+gwang-msft@users.noreply.github.com>
Co-authored-by: Chen Fu <chenfucs@gmail.com>
2021-04-06 16:18:10 -07:00
Thiago Crepaldi 3348b8485f Post merge update for ORTModule
Changes include:
* Revert Event Pool changes
* Add copyright and revert unrelated changes
* Add DLPack as submodule and remove to_dlpack and from_dlpack from public API
* Update golden numbers for DHP Parallel tests
* Update ORTTrainer unit test numbers
* Rollback to DLPack v0.3
* Disable flaky test
* Update third party notices and CG manifest file
* Minor refactoring of ORTValue API
2021-03-16 20:11:59 -07:00
Changming Sun aa31ba5774
Merge CPU packaging pipelines (#6480)
1. Merge Nuget CPU pipeline, Java CPU pipeline, C-API pipeline into a single one.
2. Enable compile warnings for cuda files(*.cu) on Windows.
3. Enable static code analyze for the Windows builds in these jobs. For example, this is our first time scanning the JNI code.
4. Fix some warnings in the training code.
5. Enable code sign for Java. Previously we forgot it.
6. Update TPN.txt to remove Jemalloc.
2021-02-04 08:38:56 -08:00
Tracy Sharpe fcd9fc9b6d
remove gemmlowp submodule (#6341) 2021-01-13 15:54:37 -08:00
S. Manohar Karlapalem ff58f621fa
Remove nGraph Execution Provider (#5858)
* Remove nGraph Execution Provider

Pursuant to nGraph deprecation notice: https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/nGraph-ExecutionProvider.md#deprecation-notice

**Deprecation Notice**

| Milestone | Date |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |

Starting with the OpenVINO™ toolkit 2020.2 release, all of the features
previously available through nGraph have been merged into the OpenVINO™
toolkit. As a result, all the features previously available through
ONNX RT Execution Provider for nGraph have been merged with ONNX RT
Execution Provider for OpenVINO™ toolkit.

Therefore, ONNX RT Execution Provider for **nGraph** will be deprecated
starting June 1, 2020 and will be completely removed on December 1,
2020. Users are recommended to migrate to the ONNX RT Execution Provider
for OpenVINO™ toolkit as the unified solution for all AI inferencing on
Intel® hardware.

* Remove nGraph Licence info from ThirdPartyNotices.txt

* Use simple Test.Run() for tests without EP exclusions

To be consistent with rest of test code.

* Remove nGraph EP functions from Java code
2020-11-19 16:47:55 -08:00
edgchen1 07bd4ef470
Upgrade optional implementation to https://github.com/martinmoene/optional-lite. (#5563) 2020-11-03 15:27:47 -08:00
Wenbing Li ed102e9d88
Add iOS test pipeline and a sample app. (#5298)
* Add iOS test pipeline and a sample app.

* clean up the unused code.

* clean up.

* revert the unknown change

* disable the shared library for iOS.

* add open source notice text.

* ignore the skipped test.

* extract the common ortenv setup
2020-09-29 13:53:11 -07:00
Scott McKay b5c2932ae8
Last major set of ORT format model changes (#5056)
* Add minimal build option to build.py
* Group some of the build settings so binary size reduction options are all together
* Make some cmake variable naming more consistent
* Replace usage of std::hash with murmurhash3 for kernel hashes, as std::hash is implementation-dependent and can't be used
* Add initial doco and an ONNX-to-ORT model conversion script
* Misc cleanups of minimal build breaks
2020-09-05 07:59:01 +10:00
Thiago Crepaldi ef20efe015
Register cerberus license into ThirdPartyNotices.txt (#4828)
Governance Compliance component shows cerberus is ok:
https://dev.azure.com/onnxruntime/onnxruntime/_componentGovernance/112016/53457366

As this is installed by pip, I am assuming we don't need to update the
cgmanifest.json file either.
2020-08-17 15:03:54 -07:00
Sergii Dymchenko de2685261b
Install AzureML support and commonly used packages in the training image. (#4790) 2020-08-13 16:48:48 -07:00
suffiank 005fa5c3ae
Add initial Dockerfile for distributed training targets (#4578)
* add training dockerfile tested for examples repo

* forgot pytorch patch for build from source

* make apt-get update -y adjacent to apt-get install -y due to Docker caching rules

* comment for mellanox libraries

* mpi4py comment as I forgot where it came from

* apparently curl not included anymore

* grr.. nvidia change nccl location

* dont need findnccl.patch after nvidia changed nccl location

* pr comment /opt/ompi4 => /opt/openmpi-xxx

* switch to pip install pytorch

* use Release instead of RelWithDebInfo

* comment wording

* wording

* missed RelWithDebInfo => Release

* replace Mellanox with libibverbs

* stale comment

* ordering

* no more ninja

* add / at end of copy

* update cgmanifest.json

* pr comments

Co-authored-by: suffian khan <sukha@OrtTrainingDev1.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2020-08-05 18:54:54 -07:00
Pranav Sharma 1f1384f8a9
Update dependency introduced by fuzzing change. (#4438) 2020-07-06 21:56:40 -07:00
gwang-msft 9e0f5fc7af
The initial PR for NNAPI EP (#4287)
* Move nnapi dnnlib to subfolder

* dnnlib compile settings

* add nnapi buildin build.py

* add onnxruntime_USE_NNAPI_BUILTIN

* compile using onnxruntime_USE_NNAPI_BUILTIN

* remove dnnlib from built in code

* Group onnxruntime_USE_NNAPI_BUILTIN sources

* add file stubs

* java 32bit compile error

* built in nnapi support 5-26

* init working version

* initializer support

* fix crash on free execution

* add dynamic input support

* bug fixes for dynamic input shape, add mul support, working on conv and batchnorm

* Add batchnormalization, add overflow check for int64 attributes

* add global average/max pool and reshape

* minor changes

* minor changes

* add skip relu and options to use different type of memory

* small bug fix for in operator relu

* bug fix for nnapi

* add transpose support, minor bug fix

* Add transpose support

* minor bug fixes, depthwise conv weight fix

* fixed the bug where the onnx model inputs were in a different order than the nnapi model inputs

* add helper to add scalar operand

* add separated opbuilder to handle single operator

* add cast operator

* fixed reshape, moved some logs to verbose

* Add softmax and identity support, change shaper calling signature, and add support for int32 output

* changed the way to execute the NNAPI

* move NNMemory and InputOutputInfo into Model class

* add limited support for input dynamic shape

* add gemm support, fixed crash when allocating big array on stack

* add abs/exp/floor/log/sigmoid/neg/sin/sqrt/tanh support

* better dynamic input shape support;

* add more check for IsOpSupportedImpl, refactored some code

* some code style fix, switch to safeint

* Move opbuilders to a map with single instance, minor bug fixes

* add GetUniqueName for new temp tensors

* change from throw std to ort_throw

* build settings change and 3rd party notice update

* add readme for nnapi_lib, move to ort log, add comments to public functions, clean the code

* add android log sink and more logging changes, add new string for NnApiErrorDescription

* add nnapi execution options/fp16 relax

* fix a dnnlibrary build break

* addressed review comments

* address review comments, changed adding output for subgraph in NnapiExecutionProvider::GetCapability, minor issue fixes

* formatting in build.py

* more formatting fix in build.py, return fail status instead of throw in compute_func

* moved android_log_sink to platform folder, minor coding style changes

* addressed review comments
2020-06-26 00:02:39 -07:00
manashgoswami cab21223b3
Updated TPN for OpenMPI and cleanup (#3932)
* Update README.md

* Update ReleaseManagement.md

* Updated Third Party Notice for training feature

Added Open MPI license
2020-05-14 11:42:44 -07:00
Scott McKay 932ecaea34
Some documentation updates. (#3060) 2020-02-21 20:07:39 +10:00
Changming Sun ec07fdc5b0
Update ThirdPartyNotices.txt for eigen (#2985) 2020-02-06 16:09:10 -08:00
Hariharan Seshadri 5c2e474751
Add provision in ORT for session options to be parsed when available via model file (#2449)
* Initial commit

* Fix gitmodules

* Nits

* Nits

* Updates

* Update

* More changes

* Updates

* Update

* Some updates

* More changes

* Update

* Update

* Merge

* Update

* Updates

* More changes

* Update

* Fix nits

* Updates

* Fix warning

* Fix build

* Add comment

* PR feedback

* PR feedback

* Updates

* Updates

* Update

* More changes

* Fix build break

* Comment test for now

* Updates

* Updates

* PR feedback

* Updates

* Nits

* Add tests

* Fix build

* Fix build

* Fix build

* Fix build break

* Fix build

* Nits

* PR feedback

* More change

* Expose GetSessionOptions in pybind logic and add unit test for python

* Fix build

* PR feedback

* PR feedback
2019-12-03 16:56:07 -08:00
Sreekanth Yalachigere 31ea11a696 Renaming MKL-DNN as DNNL (#2515)
* DNNL: Moving Files to rename file names

* DNNL name change

* azure pipeline updated

* disable ceil/dilation and enable Opset10

* disable ceil/dilation tests in Python

* mlperf_ssd_resnet34_1200 disabled
2019-12-03 07:34:23 -08:00
Adrian Tsai 4090d0d0de
Add DirectML Execution Provider (#2057)
This change adds a new execution provider powered by [DirectML](https://aka.ms/DirectML).

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers.

The DirectML execution provider is capable of greatly improving evaluation time of models using commodity GPU hardware, without sacrificing broad hardware support or requiring vendor-specific extensions to be installed.

**Note** that the DML EP code was moved verbatim from the existing WindowsAI project, which is why it doesn't yet conform to the onnxruntime coding style. This is something that can be fixed later; we would like to keep formatting/whitespace changes to a minimum for the time being to make it easier to port fixes from WindowsAI to ORT during this transition.

Summary of changes:
* Initial commit of DML EP files under onnxruntime/core/providers/dml
* Add cmake entries for building the DML EP and for pulling down the DirectML redist using nuget
* Add a submodule dependency on the Windows Implementation Library (WIL)
* Add docs under docs/execution_providers/DirectML-ExecutionProvider.md
* Add support for DML EP to provider tests and perf tests
* Add support for DML EP to fns_candy_style_transfer sample
* Add entries to the C ABI for instantiating the DML EP
2019-10-15 06:13:07 -07:00
Yulong Wang e6ce384402
add dependency 'cub' as submodule (#1924) 2019-09-26 16:10:39 +08:00
Pranav Sharma 0b609d3e68
Add make_unique implementation for use with C++11. (#1793)
* Mention OrtCreateSessionFromArray in C API doc

* Add make_unique implementation for use with C++11

* Add cgmanifest and TPN files as well

* Add annotation to cgmanifest to identify the component that uses the dependency
2019-09-09 23:55:44 -07:00
daquexian bbf64c2c45 Update cgmanifest.json and ThirdPartyNotices.txt for DNNLibrary (#1431) 2019-07-18 14:58:43 -07:00
Pranav Sharma 71560843f8
Add Boost dependency for onnxruntime server. (#1112) 2019-05-28 11:37:49 -07:00
nivas-x86 a4d7052aeb Add nGraph Execution Provider (#832)
* Add nGraph Execution Provider

* feedback changes 1

* feedback2

* Feedback and upgrade nGraph

* Feedback 4

* Fix CI

* Disable new ops
2019-04-20 17:02:35 -07:00
manashgoswami bdc2bbb207 Build details for TensorRT execution provider. (#632)
* Fixed typos in docs for 'onnx_test_runner'

* TensorRT Execution Provider (preview) release

Updated build instructions, component governance, and third party notices for the TensorRT execution provider release.

* test runner option for tensorrt

updated to add option for tensorrt.

* Introduction to TensorRT Execution Provider

Intro README for TensorRT Execution Provider.

* Update BUILD.md

* Update TensorRT-ExecutionProvicer.md

* corrected typo in the filename

* corrected typos

* updated with corrections.

* removed conflicting edits.

* Update BUILD.md
2019-03-15 11:29:00 -07:00
Faith Xu 3a2f6c6964 Update cgmanifest and TPN (#529)
* Update cgmanifest 

...to include all custom registered CG components

* Update cgmanifest.json

* Updates based on feedback

* Remove .NET (auto-detected)

* Add registrations

* Update TPN

* Remove google/protobuf 

dup of protocolbuffers/protobuf

* Update TPN with google/re2

* Remove libpng
2019-03-05 17:22:39 -08:00