Commit Graph

12,054 commits

Author SHA1 Message Date
Yulong Wang d3bc3180d8
[js/node] fix CUDA artifact installation script for Linux/x64 (#22984)
### Description

This PR updates the installation script to fix it for CUDA v12. Doing the
same for CUDA v11 is harder because the steps are quite complicated to
automate, so a few lines of manual instructions were added instead.

fixes #22877
2024-12-03 16:07:43 -08:00
Prathik Rao 5c644d3747
[WebGPU EP] Flatten implementation (#22964)
Implements the Flatten operator for the native WebGPU EP.
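For reference, ONNX Flatten collapses the input into a 2-D tensor at a given axis; a minimal numpy sketch of the semantics (illustrative only, not the WebGPU shader):

```python
import numpy as np

# ONNX Flatten semantics: dims before `axis` collapse into the first
# output dim, the remaining dims collapse into the second.
x = np.random.rand(2, 3, 4, 5)
axis = 2
y = x.reshape(int(np.prod(x.shape[:axis])), -1)
assert y.shape == (6, 20)
```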
2024-12-03 14:40:57 -08:00
Jian Chen 9ed0c7fe26
Redo "Update Gradle version 8.7 and java version 17 within onnxruntime/java" (#22923)
2024-12-02 18:34:25 -08:00
Edward Chen e2356a0403
Use UTF8 string encoding in ORTSaveCodeAndDescriptionToError(). (#22982)
Update from ASCII to UTF8 string encoding when creating the `NSString` description.
2024-12-02 17:41:52 -08:00
Kee 8c52fa3924
[VSINPU]Split/Pad and some element-wise OPs support (#22916)
### Description
- Add split/pad/neg/not/ceil/round/min/max op support
- Fix the conv2d op default pads value issue
- Expose the VSINPU EP in the Python bindings

### Motivation and Context
- New op support for the VSINPU EP

---------

Signed-off-by: Kee <xuke537@hotmail.com>
2024-12-02 13:57:30 -08:00
Satya Kumar Jandhyala e8bf46a70e
[WebGPU EP] Support GroupQueryAttention (#22658)
### Description
Support the GroupQueryAttention operator for the native WebGPU EP.


### Motivation and Context
This is required for running inference on some LLMs.
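For context, grouped-query attention shares each key/value head across a group of query heads. A minimal numpy sketch of the math (shapes and the function name are illustrative; masking and rotary details are omitted, and this is not the WebGPU kernel):

```python
import numpy as np

def gqa(q, k, v):
    # q: (num_q_heads, seq, d); k, v: (num_kv_heads, seq, d)
    group = q.shape[0] // k.shape[0]
    k = np.repeat(k, group, axis=0)      # share each kv head across a group
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # softmax over keys
    return w @ v                         # (num_q_heads, seq, d)

out = gqa(np.random.rand(8, 16, 32), np.random.rand(2, 16, 32), np.random.rand(2, 16, 32))
assert out.shape == (8, 16, 32)
```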
2024-12-02 12:40:03 -08:00
Jian Chen 6c2ff5fc55
Refactor emulator start and stop functions for clarity and efficiency (#22861)
### Description
This pull request introduces several enhancements and new
functionalities to the `tools/python/util/android/android.py` file,
focusing on improving the management of Android emulators. The most
important changes include adding a timeout parameter to the
`start_emulator` function, adding checks to prevent multiple emulators
from running simultaneously, and introducing new utility functions to
manage emulator processes more effectively.

Enhancements to `start_emulator` function:

* Added a `timeout_minutes` parameter to the `start_emulator` function
to make the startup timeout configurable.
* Added a check to prevent starting a new emulator if one with the same
AVD name is already running.
* Included the additional emulator argument `-verbose` for better control
and debugging.
* Added a final verification step to ensure the emulator has started
successfully.

New utility functions for managing emulator processes:

* Introduced `check_emulator_running_using_avd_name`,
`check_emulator_running_using_process`, and
`check_emulator_running_using_pid` to check whether an emulator is running
based on AVD name, process instance, or PID, respectively.
* Added `stop_emulator_by_proc` and `stop_emulator_by_pid` functions to
stop the emulator process using a `subprocess.Popen` instance or PID,
with a configurable timeout.
* Updated the `stop_emulator` function to use the new utility functions
for stopping the emulator process.

These changes enhance the robustness and flexibility of the emulator
management utilities, making it easier to handle different scenarios in
CI environments and development workflows.
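As a rough illustration, utilities like these can be built on `adb` (a hypothetical sketch, not the actual tools/python/util/android/android.py code; assumes `adb` is on PATH):

```python
import subprocess

def check_emulator_running_using_avd_name(avd_name: str) -> bool:
    # List serials of running emulators, then ask each for its AVD name.
    lines = subprocess.check_output(["adb", "devices"], text=True).splitlines()[1:]
    serials = [ln.split()[0] for ln in lines if ln.startswith("emulator-")]
    for serial in serials:
        out = subprocess.check_output(
            ["adb", "-s", serial, "emu", "avd", "name"], text=True
        )
        if out.splitlines()[0].strip() == avd_name:
            return True
    return False

def stop_emulator_by_proc(proc: subprocess.Popen, timeout_seconds: int = 120) -> None:
    # Ask the emulator process to exit; escalate to kill on timeout.
    proc.terminate()
    try:
        proc.wait(timeout=timeout_seconds)
    except subprocess.TimeoutExpired:
        proc.kill()
```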




---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
2024-12-02 09:29:17 -08:00
Chi Lo e234023d11
[TensorRT EP] Fix wrong input order when generating IndexedSubGraph (#22857)
The input order of the generated IndexedSubGraph needs to be consistent with
the input order of the original graph.

This PR also fixes the GitHub issue
https://github.com/microsoft/onnxruntime/issues/22729
2024-12-02 01:45:29 -08:00
Chi Lo 49a80df77f
Keep the model metadata on the generated EP context model (use bridge api) (#22860)
In addition to the
[PR](https://github.com/microsoft/onnxruntime/pull/22825) that directly
uses the internal graph API, this PR updates the bridge API for the
TRT EP and OpenVINO EP.
2024-12-01 21:57:45 -08:00
Vincent Wang 1128882bfd
Quantize Bias for Conv/Gemm on Quantized Model (#22889)
Some quantized models don't have the Conv/Gemm nodes' bias quantized but
still leave it in float. This PR creates a sub-graph to quantize the bias
for Conv/Gemm nodes with scale = scale_input_0 * scale_input_1 and zp = 0.
We only do this for bias initializers so that ConstantFolding will fold
the sub-graph into a real quantized int32 bias initializer during the next
round of graph optimization.
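Numerically, the quantization amounts to the following (a minimal sketch under the stated scale/zero-point convention; the function name is illustrative):

```python
import numpy as np

def quantize_bias(bias_fp32: np.ndarray, scale_input: float, scale_weight: float) -> np.ndarray:
    bias_scale = scale_input * scale_weight  # scale = scale_input_0 * scale_input_1
    # zero point is 0, so quantization is a plain scale-and-round to int32
    return np.round(bias_fp32 / bias_scale).astype(np.int32)

q = quantize_bias(np.array([0.5, -1.25]), scale_input=0.05, scale_weight=0.1)
assert q.tolist() == [100, -250]
```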
2024-11-28 10:10:24 +08:00
Vincent Wang 42ecb05080
[QNN] ReduceL2 Support (#22636)
Add ReduceL2 support to the QNN EP. Some QNN AI Hub models, such as
openai_clip_CLIPTextEncoder and openai_clip_CLIPImageEncoder, contain
ReduceL2. Without this PR, ReduceL2 is assigned to the CPU and the graph
is split into two QNN graphs; with this PR, all nodes stay in the QNN EP.
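For reference, ReduceL2 computes the L2 norm along the given axes; a minimal numpy sketch of the semantics (illustrative, not the QNN decomposition):

```python
import numpy as np

def reduce_l2(x: np.ndarray, axes, keepdims: bool = True) -> np.ndarray:
    # L2 norm along `axes`: sqrt of the sum of squares
    return np.sqrt(np.sum(np.square(x), axis=axes, keepdims=keepdims))

assert np.isclose(reduce_l2(np.array([3.0, 4.0]), axes=0, keepdims=False), 5.0)
```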
2024-11-28 10:09:13 +08:00
Jing Fang 08abab0b14
[CPU] Fix MatMulNBits accuracy level (#22963)
### Description
Fix the MatMulNBits accuracy level.



2024-11-27 17:40:04 -08:00
wejoncy a24723df16
[CoreML ] ML Program more operators support [3/N] (#22710)
### Description
- Erf
- Round
- Max
- ReduceMax
- ReduceMean
- ReduceSum
- Unsqueeze
- Squeeze
- Softmax




---------

Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-11-28 09:21:02 +08:00
Yi Zhang b930b4ab5b
Limit PipAuthenticate in Private Project now (#22954)
### Description
Fixes a regression in the post-merge pipeline caused by #22612



### Motivation and Context
So far, there are no artifactFeeds in the public project.
2024-11-27 13:32:35 +08:00
Wanming Lin fe749a88a5
[WebNN EP] Fixed bug in usage of Array.reduce() (#22944)
In JS, calling reduce on an empty array with no initial value throws an
error. Fix it by checking the array length first.
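Python's functools.reduce has the same pitfall, which makes for a compact illustration of the guard (Python is used here for consistency with the other sketches; the JS fix is analogous):

```python
from functools import reduce

values: list[int] = []

# reduce(...) raises TypeError on an empty sequence without an initializer,
# so check the length (or pass an initial value) first.
total = reduce(lambda a, b: a + b, values) if values else 0
assert total == 0
```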
2024-11-26 19:03:44 -08:00
wejoncy c284a686f2
[CoreML] Create EP by AppendExecutionProvider (#22675)
### Description
AppendExecutionProvider("CoreML", {{"MLComputeUnits","MLProgram"}})




---------

Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-11-27 09:26:31 +08:00
Chen Feiyue 487184fa42
[VSINPU] update crosscompiling patch (#22937)
### Description
Update this patch because the original file has changed.


2024-11-26 14:35:16 -08:00
amancini-N 8826e39a81
#22890 Fix profiling on empty Optional (#22891)
### Description
Fix sequential_executor.cc to avoid a segfault when profiling is used on a
model with an empty Optional.



### Motivation and Context
Fixes #22890
2024-11-26 11:18:47 -08:00
shiyi afbb53937c
[WebNN] Support negative steps for slice (#22871)
Slice with negative steps can be emulated by reverse+slice.
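A small numpy check of the equivalence (the concrete numbers are illustrative):

```python
import numpy as np

x = np.arange(10)
a = x[8:2:-2]      # negative-step slice -> [8, 6, 4]
# emulate with reverse + positive-step slice:
# start/stop map to n-1-start / n-1-stop, step flips sign
b = x[::-1][1:7:2]
assert np.array_equal(a, b)
```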
2024-11-25 23:06:23 -08:00
Bin Miao 558ae8621c
[WebNN EP] Fix an issue of CumSum operator (#22936)
This PR limits the axis of the CumSum operator to be a constant when
using WebNN EP.
2024-11-25 21:05:53 -08:00
sheetalarkadam f80afeb9a1
Override android qnn sdk version with pipeline param (#22895)
We need to be able to control/override the exact version of the QNN SDK used
for the Android build, as qnn-runtime (Maven package) releases lag behind
QNN SDK releases.
2024-11-25 21:01:05 -08:00
Tianlei Wu 09d2ee6274
Update pipeline status (#22924)
### Description
Update pipeline status:
(1) replace dead link of cuda pipeline
(2) remove dead link of training distributed pipeline
(3) add webgpu pipeline

Before:
https://github.com/microsoft/onnxruntime/blob/main/README.md#builtin-pipeline-status
After:
8ec473d013/README.md (builtin-pipeline-status)

### Motivation and Context
Some pipelines were removed and needed to be replaced with new ones.
2024-11-24 21:26:27 -08:00
Yi Zhang 85751e7276
Build DML in Windows GPU CI pipeline (#22869)
### Description
Add a new stage to build CUDA and DML in the Windows GPU CI pipeline (PR
checks) to prevent regressions introduced by new CUDA tests.
Update the name prefix of all tests in cuda/testcases to CudaEp so they
can be skipped easily.

### Motivation and Context
1. CudaNhwcEP is added by default when using the CUDA EP.
2. If onnxruntime_ENABLE_CUDA_EP_INTERNAL_TESTS is enabled, the tests in
tests/provider/cuda/testcases are added too.

### To do
Add enable_pybind in the new stage.
Currently, --enable_pybind triggers some Python tests, like
onnxruntime_test_python.py, which uses the get_available_providers() API.
More discussion is needed to decide how to make it work.
2024-11-25 10:50:52 +08:00
Xavier Dupré a2ba3cb547
Implementation of TreeEnsemble ai.onnx.ml==5 (#22333)
### Description
Merges PRs #21851 and #21222.

Implements TreeEnsemble from ai.onnx.ml==5 (CPU).

---------

Co-authored-by: Bilyana Indzheva <bilyana2002@gmail.com>
Co-authored-by: Bilyana Indzheva <36890669+bili2002@users.noreply.github.com>
Co-authored-by: Christian Bourjau <cbourjau@users.noreply.github.com>
2024-11-22 19:48:23 +01:00
Tianlei Wu c97dd6e3c1
Update transformers test requirements (#22911)
### Description

* Install PyTorch for the transformers tests. The installation happens before
the Python tests so that they can use torch if needed.
* Update the protobuf and numpy versions used in the transformers tests.

### Motivation and Context

Currently, transformers tests are enabled in the following CI pipelines:
* Linux CPU CI Pipeline (torch for cpu-only)
* Linux GPU CI Pipeline (torch for cuda 12)
* Windows GPU CUDA CI Pipeline (torch for cpu-only right now, note that
we might change it to torch for cuda 12 in the future).

For ROCm CI Pipeline, transformer tests are enabled but skipped since
onnx package is not installed in CI.

Previously, torch was not installed before the Python tests, so some tests
depending on torch were skipped, like
[test_bind_onnx_types_not_supported_by_numpy](f6e1d44829/onnxruntime/test/python/onnxruntime_test_python_iobinding.py (L199))
or
[test_user_compute_stream](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/test/python/onnxruntime_test_python.py#L465-L476).

In this PR, we changed build.py to install torch before running python
tests.
2024-11-22 09:45:12 -08:00
Scott McKay b1ccbe2a8e
Minor update to onnxruntime_perf_test usage info for `-I` (#22810)
### Description
Update comment for `-I` to mention that symbolic dim values can be
provided with `-f`.


2024-11-22 16:38:25 +11:00
Aleksei Nikiforov f6e1d44829
Add option to force generic algorithms on x86 (#22917)
The option is named onnxruntime_FORCE_GENERIC_ALGORITHMS.

Follow up to https://github.com/microsoft/onnxruntime/pull/22125.

### Description
This change adds a compile-time option to disable optimized algorithms and
use generic algorithms (excluding AVX* and SSE etc. in GEMM) on x86. This
new option is intended only for testing these algorithms, not for
production use.

The following build command on Linux x86_64 builds onnxruntime with the new
option enabled:
`./build.sh --parallel --cmake_extra_defines
onnxruntime_FORCE_GENERIC_ALGORITHMS=1`

### Motivation and Context
This change allows testing generic algorithms. This may be needed for
platforms which don't have optimized implementations available, like in
https://github.com/microsoft/onnxruntime/pull/22125.
2024-11-21 13:45:46 -08:00
Tianlei Wu 8d99b1a8dc
reduce GQA test combinations (#22918)
### Description
* Reduce GQA test combinations to save about 35 minutes test time in CI
pipelines.
* Show latency of transformers tests
* Use seed in DMMHA test to avoid random failure.
* For test_flash_attn_rocm.py, change the skip condition from "has CUDA
EP" to "does not have ROCm EP", so that it does not run in the CPU build.
* For test_flash_attn_cuda.py, move flash attention and memory efficient
attention tests to different classes, so that we can skip a test suite
instead of checking in each test.

### Motivation and Context
It takes too long to run GQA tests in CI pipelines since there are too
many combinations.

###### Linux GPU CI Pipeline
Before: 5097 passed, 68 skipped, 8 warnings in 1954.64s (0:32:34)
After:  150 passed, 176 skipped, 8 warnings in 530.38s (0:08:50)
Time Saved: **1424** seconds (0:23:44)

###### Windows GPU CUDA CI Pipeline
Before: 1781 passed, 72 skipped, 6 warnings in 605.48s (0:10:05)
After: 116 passed, 118 skipped, 6 warnings in 275.48s (0:04:35) 
Time Saved: **330** seconds (0:05:30)

###### Linux CPU CI Pipeline
Before: 5093 passed, 72 skipped, 4 warnings in 467.04s (0:07:47)
- 212.96s transformers/test_gqa_cpu.py::TestGQA::test_gqa_past
- 154.12s transformers/test_gqa_cpu.py::TestGQA::test_gqa_no_past
- 26.45s
transformers/test_gqa_cpu.py::TestGQA::test_gqa_interactive_one_batch

After: 116 passed, 210 skipped, 4 warnings in 93.41s (0:01:33)
- 0.97s  transformers/test_gqa_cpu.py::TestGQA::test_gqa_past
- 19.23s transformers/test_gqa_cpu.py::TestGQA::test_gqa_no_past
- 2.41s
transformers/test_gqa_cpu.py::TestGQA::test_gqa_interactive_one_batch

Time Saved: **374** seconds (0:06:14).
2024-11-21 12:26:46 -08:00
Tianlei Wu 55f0559e5d
Update attention fusion to support SDPA pattern (#22629)
### Description
Match the new SDPA pattern for HuggingFace BERT models exported from the
latest transformers package.

Some changes of transformers tests in CI pipeline:
(1) Enable tests for bert, distilbert and roberta models in CI.
(2) Remove out-of-date tests for huggingface models that were marked as
slow and not enabled in CI pipeline.
(3) Upgrade transformers package version to the latest.

### Motivation and Context

Recent HuggingFace transformers use torch SDPA in BERT modeling. The
graph pattern change caused attention fusion to stop working. This updates
the fusion script to match the new pattern.
2024-11-21 09:42:41 -08:00
kailums 1e605be166
bigmodel pipeline update cp38 to cp310 (#22793)
### Description
<!-- Describe your changes. -->
When updating from cp38 to cp310, the bigmodel pipeline had some issues:
two jobs failed, stable_diffusion and whisper.

1. For stable_diffusion, we were using the "nvcr.io/nvidia/pytorch:22.11-py3"
image from the NVIDIA repo, which targets CUDA 11 and Python 3.8; NVIDIA does
not provide a Python 3.10 version for CUDA 11, and the latest version of this
Docker image targets CUDA 12 and Python 3.10. To solve this, the job now uses
an Ubuntu 22.04 Docker image and installs all the Python packages it needs.
2. For whisper, the original Docker image is Ubuntu 20.04, which doesn't
have Python 3.10, so it had to be updated to Ubuntu 22.04.
2024-11-21 07:25:01 -08:00
Jian Chen 369d7bf887
Update the Docker image version (#22907)
2024-11-21 19:38:39 +08:00
Yi Zhang a28246a994
Revert "Update Gradle version 8.7 and java version 17 within onnxrunt… (#22914)
…ime/java (#22771)"

This reverts commit 632a36a233.

### Motivation and Context
Running E2E tests using BrowserStack failed due to this PR.
2024-11-21 18:12:28 +08:00
Aleksei Nikiforov e430795332
Fix MlasSgemmKernel: properly process more than 2 rows (#22125)
This change fixes multiple tests, like QDQTransformerTests.MatMul_U8S8S8,
on all architectures where an architecture-specific
optimized function is not yet available, like s390x.

### Description
Matrix B is packed in blocks of 16 elements, so a new row starts 16 items
later. Also, when moving to the next element of C, increment the index by
only 1.
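A toy Python model of the corrected indexing (hypothetical, not the MLAS C++ code; assumes B is packed in 16-wide column blocks):

```python
import numpy as np

PACK = 16  # B is packed 16 columns at a time

def matmul_packed(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    M, K = A.shape
    _, N = B.shape
    C = np.zeros((M, N), dtype=A.dtype)
    for blk in range(N // PACK):
        # within a block, row k of packed B starts PACK items after row k-1
        packed = B[:, blk * PACK:(blk + 1) * PACK].reshape(-1)
        for m in range(M):
            for k in range(K):
                row = packed[k * PACK:(k + 1) * PACK]
                C[m, blk * PACK:(blk + 1) * PACK] += A[m, k] * row
    return C

A = np.random.rand(3, 5)
B = np.random.rand(5, 32)
assert np.allclose(matmul_packed(A, B), A @ B)
```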


### Motivation and Context
This change fixes the MLAS sgemm fallback implementation for all
architectures that don't have architecture-specific implementations
available, like s390x.
2024-11-20 16:00:23 -08:00
Kyle 712bee13db
Fix Pipeline Timeout Issue (#22901)
### Description
Extend the timeout for a job that consistently fails.


2024-11-20 17:18:50 +01:00
Edward Chen af0303f9b4
Simplify CPU allocator arena usage helper function, fix unit tests that check old ifdefs. (#22876) 2024-11-19 14:24:52 -08:00
Changming Sun 13346fdf18
Cleanup code (#22827)
### Description
1.  Delete the TVM EP because it is no longer maintained.
2.  Delete ORTModule-related Docker files and scripts.
2024-11-19 14:13:33 -08:00
Wanming Lin 5b787121e8
[WebNN] Check split's output name (#22884)
Chromium will rename split's output name from "output" to "outputs" in
`OpSupportLimits` to align with the spec. The EP should check which name is
available to stay compatible.
2024-11-19 12:44:23 -08:00
Wanming Lin 8a06f13301
[WebNN] Remove wasm.currentContext check (#22886)
If a WebNN session throws early, this check for `wasm.currentContext`
will break all subsequent WebNN sessions; this often happens in npm
tests.
2024-11-19 12:22:02 -06:00
Caroline Zhu 0d00fc3130
[mobile] Fix for mac-ios-packaging pipeline (#22879)
### Description
Appends the variant name to the published BrowserStack artifacts so
that we don't run into the error:
"##[error]Artifact browserstack_test_artifacts already exists for build
609095."

[Working pipeline
run](https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=609503&view=results)


### Motivation and Context
- onnxruntime-ios-packaging-pipeline has been failing
2024-11-19 09:27:51 -08:00
Chi Lo 56e4fda8a8
[TensorRT EP] Revert "Add new provider option to exclude nodes from running on TRT" (#22878)
- Revert https://github.com/microsoft/onnxruntime/pull/22681.
- But still implicitly exclude DDS ops for TRT 10. A better PR that adds a
trt_op_types_to_exclude provider option will follow.
2024-11-19 09:08:54 -08:00
Changming Sun a0d36a508c
Move C# doc Github Action to Windows (#22880)
### Description
Move the C# doc GitHub Action to Windows machines to avoid a
dependency on Mono, which appears to be getting deprecated.


2024-11-18 23:56:59 -08:00
Adrian Lizarraga 497b06f0a9
[QNN EP] QNN SDK 2.28.2 (#22844)
### Description
- Updates pipelines to use QNN SDK 2.28.2.241116.
- Re-enable LayerNormalization unit tests that failed with accuracy
errors with the previous QNN SDK (2.28.0).
- Update QNN EP to no longer provide a dummy bias for LayerNorm if the
QNN SDK version is >= 2.28.0.


### Motivation and Context
Use the latest QNN SDK. This version improves inference latency for
certain customer models.
2024-11-18 20:10:36 -08:00
Jiajia Qin e597eaed4a
[js/webgpu] Optimize transpose as reshape when suitable (#22870)
BUG #22031
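The idea, sketched in numpy (illustrative, not the actual WebGPU code): when a permutation moves only size-1 axes, the underlying data order is unchanged, so the transpose can run as a reshape:

```python
import numpy as np

x = np.random.rand(1, 3, 1, 5)
perm = (1, 0, 2, 3)        # only moves a size-1 axis
y = np.transpose(x, perm)  # logical transpose
z = x.reshape(3, 1, 1, 5)  # same data order, just a new shape
assert np.array_equal(y, z)
```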
2024-11-18 12:52:48 -08:00
Tianlei Wu c4f3742bb4
Replace INFINITY by std::numeric_limits<float>::infinity() (#22868)
Replace INFINITY with `std::numeric_limits<float>::infinity()` to avoid
build errors with Visual Studio 2022 v17.12 Preview 5.

### Motivation and Context
https://github.com/microsoft/onnxruntime/issues/22728
2024-11-18 09:16:41 -08:00
Yi-Hong Lyu 02a0be3599
Optimize Transpose around QLinearSoftmax (#22849)
### Description

- Improved Transpose handling around QLinearSoftmax in the Level 3 NHWC Transformer.
- Removed redundant code: HandleQLinearConcat, HandleQLinearBinaryOp.

### Motivation and Context
By merging and eliminating redundant transposes, the Image Segmentation
i8 model (MobileNetV2 + DeepLabv3) achieves a 2.34x speedup.
2024-11-18 06:58:21 -08:00
Yi Zhang 135d8b2beb
Fix CUDA/DML package exception caused by ENABLE_CUDA_NHWC_OPS (#22851)
### Description
ENABLE_CUDA_NHWC_OPS is now enabled by default.
This adds a new path that can create the CUDA provider while both CUDA and
DML are enabled.


2024-11-18 10:46:23 +08:00
liqun Fu 101ed10e5e
Refactor SkipLayerNorm and handle beta properly (#22862)
Signed-off-by: Liqun Fu <liqfu@microsoft.com>
Signed-off-by: Liqun Fu <liqun.fu@microsoft.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-11-17 14:51:16 -08:00
Peishen Yan 5928009553
[WebNN EP] Support Einsum op (#19558)
Adds support for einsum via WebNN matmul, transpose, reshape, reducesum,
identity and element-wise binary ops.
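As an illustration of the decomposition approach (numpy, not the WebNN code), a batched-matmul einsum maps directly onto matmul:

```python
import numpy as np

a = np.random.rand(2, 3, 4)
b = np.random.rand(2, 4, 5)
# 'bij,bjk->bik' is exactly a batched matmul
assert np.allclose(np.einsum('bij,bjk->bik', a, b), a @ b)
```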
2024-11-15 17:58:35 -08:00
Jing Fang c73a3d1804
[ARM] MatMulNBits fp16 support - connect kernels (#22856)
### Description
A breakdown PR of https://github.com/microsoft/onnxruntime/pull/22651



2024-11-15 14:59:11 -08:00
Po-Wei (Vincent) bbe7c87738
Fix 1.20 cuda minimal build failure (#22751)
### Description
Fixes a build failure in the CUDA minimal build.




### Motivation and Context
[This change](https://github.com/microsoft/onnxruntime/pull/19470) in
1.20 was causing build failures for the CUDA minimal build.
Essentially, some cuDNN logic was not guarded by `USE_CUDA_MINIMAL`.
The build was also looking for cuDNN, while the CUDA minimal build
shouldn't depend on it, resulting in a linking error.


cc @gedoensmax @chilo-ms
2024-11-15 10:50:55 -08:00