Commit graph

140 commits

Author SHA1 Message Date
Justin Chu c203d89958
Update ruff and clang-format versions (#21479)
ruff -> 0.5.4
clang-format -> 18
2024-07-24 11:50:11 -07:00
mindest 5b9369e93c
Fix typos according to reviewdog report. (#21335)
### Description
Fix typos based on the reviewdog report, with some
exceptions/corrections.
2024-07-22 13:37:32 -07:00
Changming Sun efad5bbc5a
Replace some old file system calls with C++17 std::filesystem APIs. (#19196)
### Description
1. Replace some old file system calls with C++17 std::filesystem APIs.
2. Remove the tensorflow_C_PACKAGE_PATH cmake option, which was only used in
onnxruntime_perf_test and whose code is no longer maintained.
3. Exclude onnx_test_runner and onnxruntime_perf_test from the iOS build
because the C++17 filesystem library is not available there.
2024-03-09 09:17:36 -08:00
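A minimal sketch of the kind of replacement the commit above describes, using C++17 std::filesystem in place of platform-specific calls; the directory name is a hypothetical placeholder, not a path from the change itself.

```
#include <filesystem>
#include <iostream>

int main() {
  namespace fs = std::filesystem;
  const fs::path model_dir = "testdata";  // hypothetical directory

  if (fs::exists(model_dir) && fs::is_directory(model_dir)) {
    // Directory iteration that previously required opendir()/FindFirstFile().
    for (const auto& entry : fs::directory_iterator(model_dir)) {
      if (entry.is_regular_file()) {
        // File size query that previously required stat()/GetFileSizeEx().
        std::cout << entry.path() << " : " << entry.file_size() << " bytes\n";
      }
    }
  }
  return 0;
}
```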
Sheil Kumar 0b7048e7d6
Update WinML to use #cores - #SoC cores by default as the number of intra-op threads (#18384)
Update WinML to use #cores - #SoC cores by default as the number of
intra-op threads

---------

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2023-11-28 09:26:48 -08:00
Justin Chu c250540722
Bump linter versions (#18341)
Bump linter versions and run format.
2023-11-08 13:04:40 -08:00
Yi Zhang 20798a9f03
Enable onnx_test_runner to run the whole models dir in CI machine (#17863)
### Description
1. If the model should be skipped, don't load it.
2. Print loaded tests and skipped tests.
3. Add more of the same filters as onnxruntime_test_all.


2023-10-12 12:01:02 +08:00
Yi Zhang 9136748462
Fix: failure to skip disabled models in WinML (#17728)
### Description
Move appending the source name to after ModifyNameIfDisabledTest.

### Motivation and Context
In WinML, the disabled test name doesn't include the model source name, so
the WinML job will be broken in the new image.

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1151451&view=logs&s=4eef7ad1-5202-529d-b414-e2b14d056c05

### Verified

https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=1151691&view=logs&s=4eef7ad1-5202-529d-b414-e2b14d056c05
2023-09-28 13:46:44 +08:00
Changming Sun bc84f52633
Update C/C++ dependencies: abseil, date, nsync, googletest, wil, mp11, cpuinfo and safeint (#15470)
### Description
Update the C/C++ dependencies abseil, date, nsync, googletest, wil, mp11,
cpuinfo, and safeint to newer versions, as requested by @mayeut. He created
the following PRs to update the deps:
https://github.com/microsoft/onnxruntime/pull/15432
https://github.com/microsoft/onnxruntime/pull/15434
https://github.com/microsoft/onnxruntime/pull/15435
https://github.com/microsoft/onnxruntime/pull/15436
https://github.com/microsoft/onnxruntime/pull/15437

However, our build system needs to fetch the dependencies from an
internal mirror that only Microsoft employees have write access to. So I
closed his PRs and created this one.

This PR also updates abseil to a newer version. This is to prepare for
upgrading re2.
2023-09-08 13:35:04 -07:00
Justin Chu 2575b9aaa1
Improve comments in winml/ (#17163)
Follow up of #17144. Manually fixed indentation in block comments and
replaced all tabs with spaces.
2023-08-15 23:30:56 -04:00
Justin Chu 416dc2e84d
Fix clang-format comment indents on Windows for winml/ (#17144)
On Windows, clang-format has a bug when AlignTrailingComments.Kind is
set to `Leave`
(https://clang.llvm.org/docs/ClangFormatStyleOptions.html#aligntrailingcomments),
where it will keep adding indentation to comments after each formatting
run.

This PR changes the option to always align comments so we do not hit the bug.

As a consequence of the options change we need to reformat some of the
files. Note that this option is aligned with the rest of the repository.
2023-08-14 23:50:14 -04:00
Jeff Bloomfield 0180c0429f
Fix DML regression from allocator refactor and enable unrounded weight allocation in ORT API (#17030)
This addresses a DML performance regression from the following PR, which
resulted in allocations not being rounded and pooled in the DML
execution provider.

https://github.com/microsoft/onnxruntime/pull/15833

This also fixes a pre-existing limitation that allocations during
session initialization (primarily large weights and persistent
resources) only bypassed rounding and pooling while using the WinML API.
The allocator now also respects a caller's rounding mode parameter when
provided.
2023-08-10 17:02:24 -07:00
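A hypothetical illustration of the rounding the commit above refers to: pooled allocators typically round request sizes up (for example, to the next power of two) so freed blocks can be reused for similar-sized requests, while an unrounded mode returns exact-size allocations for large, long-lived weights. The names below are illustrative, not the actual DML EP allocator API.

```
#include <cstddef>

// Illustrative rounding policy, not the actual DML EP allocator interface.
enum class RoundingMode { Enabled, Disabled };

// Round a requested byte size up to the next power of two so allocations of
// similar sizes can share pool buckets; pass the size through unchanged when
// rounding is disabled (e.g., for weights allocated during session init).
std::size_t ComputeAllocationSize(std::size_t requested, RoundingMode mode) {
  if (mode == RoundingMode::Disabled || requested == 0) {
    return requested;
  }
  std::size_t rounded = 1;
  while (rounded < requested) {
    rounded <<= 1;
  }
  return rounded;
}
```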
Justin Chu eeef157888
Format c++ code under `winml/` (#16660)
winml/ was previously excluded from the lintrunner config. This change
includes the directory and adds a clang-format config file specific to
winml/ that fits the existing style.

---------

Signed-off-by: Justin Chu <justinchu@microsoft.com>
2023-07-25 21:56:50 -07:00
cao lei 0c5f492493
remove AllocatorMgr class (#16509)
### Description
Remove AllocatorManager class


### Motivation and Context
After refactor PR #15833 was merged, the AllocatorManager class is no longer
referenced.
2023-06-28 15:43:19 -07:00
Justin Chu e2381c42f2
Use M_PI to replace 3.14 constants (#16421)
### Description

Use M_PI to replace 3.14 constants

### Motivation and Context

Fixes #16413
2023-06-20 15:09:10 -07:00
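A minimal sketch of the change the commit above describes, assuming an MSVC build where M_PI requires _USE_MATH_DEFINES before including <cmath>; the function is a hypothetical example, not code from the PR.

```
// On MSVC, M_PI is only defined when _USE_MATH_DEFINES is set before <cmath>.
#define _USE_MATH_DEFINES
#include <cmath>

// Hypothetical example: convert degrees to radians.
// Before: a hand-written constant such as 3.14159265 was used.
// After: use the library-provided M_PI.
double DegreesToRadians(double degrees) {
  return degrees * M_PI / 180.0;
}
```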
Sheil Kumar 2b7f26af7c
Add GridSample implementation to DirectML (#15788)
Add GridSample implementation to DirectML EP.

Temporarily add an HLSL shader in the DirectML EP to handle GridSample until
it is officially added to DirectML.
2023-05-05 15:59:33 -07:00
Sheil Kumar 5bde1e8e37
Add Bluestein Z-Chirp Algorithm to DirectML DFT implementation (#15686)
Add Bluestein Z-Chirp Algorithm to DirectML DFT implementation

This will enable STFT and DFT on signals whose lengths are not powers of 2.
2023-04-27 14:03:40 -07:00
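A rough sketch of the chirp-z (Bluestein) identity the commit above refers to, which rewrites an arbitrary-length DFT as a convolution. A real implementation evaluates that convolution with zero-padded power-of-two FFTs; the naive O(N^2) loops below only illustrate the identity and are not the DirectML code.

```
#include <cmath>
#include <complex>
#include <vector>

// Bluestein identity: X[k] = w[k] * sum_n ( x[n] * w[n] ) * conj(w[k - n]),
// with w[m] = exp(-i*pi*m^2/N). The sum is a convolution, so in practice it
// is computed with power-of-two FFTs; the loops below just state the identity.
std::vector<std::complex<double>> BluesteinDft(const std::vector<std::complex<double>>& x) {
  const std::size_t N = x.size();
  const double pi = 3.14159265358979323846;
  auto chirp = [&](long long m) {
    const double phase = pi * static_cast<double>(m) * static_cast<double>(m) / static_cast<double>(N);
    return std::complex<double>(std::cos(phase), -std::sin(phase));  // exp(-i*pi*m^2/N)
  };
  std::vector<std::complex<double>> X(N);
  for (std::size_t k = 0; k < N; ++k) {
    std::complex<double> acc(0.0, 0.0);
    for (std::size_t n = 0; n < N; ++n) {
      acc += x[n] * chirp(static_cast<long long>(n)) *
             std::conj(chirp(static_cast<long long>(k) - static_cast<long long>(n)));
    }
    X[k] = chirp(static_cast<long long>(k)) * acc;
  }
  return X;
}
```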
Numfor Tiapo f44f6c5b2e
Fix Prefast Errors (#15651)
This PR adds fixes for prefast errors with the following codes:

- C26814
- C26451
- C26400
2023-04-25 16:41:39 -07:00
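As I understand these PREfast codes (an assumption, not taken from the PR): C26814 suggests constexpr for values computable at compile time, C26451 flags narrow arithmetic whose result is cast to a wider type, and C26400 flags owning an allocation through a raw pointer. A hypothetical before/after sketch:

```
#include <cstdint>
#include <memory>

// C26814: a const that can be computed at compile time should be constexpr.
constexpr int kMaxBatchSize = 64;  // was: const int kMaxBatchSize = 64;

// C26451: widen the operands *before* multiplying to avoid 32-bit overflow.
int64_t ElementCount(int rows, int cols) {
  return static_cast<int64_t>(rows) * cols;  // was: int64_t n = rows * cols;
}

// C26400: don't own an allocation through a raw pointer; use a smart pointer.
std::unique_ptr<int[]> MakeBuffer(std::size_t n) {
  return std::make_unique<int[]>(n);  // was: int* buffer = new int[n];
}
```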
Patrice Vignola b49d428299
[DML EP] Add missing newline to image test logging (#15596) 2023-04-21 13:39:07 -07:00
Tianlei Wu 5a675d9113
Disable random failing DML image batch test (#15624)
### Description
Disable a test that fails randomly in the Windows GPU CI pipeline, like the
following:

```
11: [       OK ] BatchTest/BatchTest.BatchSupport/163 (0 ms)
11: [ RUN      ] BatchTest/BatchTest.BatchSupport/164
11: D:\a\_work\1\s\winml\test\image\imagetests.cpp(186): error: Expected: m_model_binding.Bind(output_data_binding_name, output_video_frames) doesn't throw an exception.
11:   Actual: it throws.
11: D:\a\_work\1\s\winml\test\image\imagetests.cpp(211): error: Expected: m_result = m_session.Evaluate(m_model_binding, L"") doesn't throw an exception.
11:   Actual: it throws.
11: total errors is 0/2073600, errors rate is 0total errors is 0/2073600, errors rate is 0total errors is 0/2073600, errors rate is 0[  FAILED  ] BatchTest/BatchTest.BatchSupport/164, where GetParam() = ((L"fns-candy_Bgr8_Batch3.onnx", 0, { L"1080.jpg", L"fish_720_Gray.png", L"fish_720.png" }, 3, false), 0, 1, 1, 1, 4-byte object <02-00 00-00>) (3203 ms)
```

Since https://github.com/microsoft/onnxruntime/pull/15468 was merged to
main, about 10~15% of build jobs have failed on this test.
2023-04-21 13:29:56 -07:00
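For context, the usual way to disable an individual googletest case (the framework the WinML image tests use) is to prefix its name with DISABLED_. A hypothetical sketch, not the exact edit in the commit above:

```
#include <gtest/gtest.h>

// Prefixing a test name with DISABLED_ keeps it compiled but skips it at run
// time; it can still be forced with --gtest_also_run_disabled_tests.
TEST(BatchSupportExample, DISABLED_FlakyOnGpuPool) {
  // Hypothetical placeholder for the flaky batch-evaluation check.
  EXPECT_TRUE(true);
}
```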
Changming Sun 5bed8d0285
Disable XNNPack EP's tests in Windows CI pipeline (#15406)
### Description

1. Disable XNNPack EP's tests in the Windows CI pipeline.
The EP code has a known problem (memory alignment), but the problem does
not impact the usages we ship the code for. We only use the XNNPack
EP in mobile apps and web scenarios, and we already have pipelines covering
those usages. We need to prioritize fixing the bugs found in those
pipelines, and there are no resources to spend on this Windows one. We can
re-enable the tests once we reach an agreement on how to fix the
memory alignment bug.

2. Delete anybuild.yml, which was for an already deleted pipeline.
3. Move Windows CPU pipelines to AMD CPU machine pools, which are
cheaper.
4. Disable some QDQ/int8 model tests that will fail if the CPU doesn't
have Intel AVX512 8-bit instructions.
2023-04-13 12:19:32 -07:00
Numfor Tiapo e3086b2ed8
Move DML CI Pipeline to A10 (#15468)
This change moves the DML CI pipeline to the A10 machines and fixes or
disables tests that were failing after the move.

- The max error rate threshold was increased for the image tests.
- Some failing batch tests were disabled.

---------

Co-authored-by: Changming Sun <chasun@microsoft.com>
2023-04-12 10:19:40 -07:00
Sumit Agarwal 5b16593192
[DML EP] Attention Kernel bug fix (#13879)
### Description
- Use the same data type as the input for the mask_index tensor, which is
used as the DML GEMM API's C parameter.
- Remove the gsl header include, as it already gets included transitively.


### Motivation and Context
- Why is this change required? What problem does it solve?
Bug found in internal conformance testing.
- If it fixes an open issue, please link to the issue here.
N/A
2022-12-07 15:24:27 -08:00
Yi Zhang ae2a9373ab
reenable quant model tests (#13871)
### Description

### Motivation and Context
Test data in the image has been fixed.
2022-12-07 23:33:22 +08:00
Yi Zhang a9a9c34d98
Fix WinML Test Case: create LearningModelBinding for every testcase (#13587)
### Description
Fix #13509

### Motivation and Context
The exception was caused by incorrect fetches, which came from the
binding left over from previous test cases.

efcbdac58e/onnxruntime/core/session/onnxruntime_c_api.cc (L809-L815)
2022-11-09 11:20:48 +08:00
Yi Zhang e160688a9b
Skip some failing models in WinML and training workflows on Windows CPU (#13407)
### Description
1. Update the model name structure in model_tests.cpp to include the source
name, to avoid
`Condition test_param_names.count(param_name) == 0 failed. Duplicate
parameterized test name 'BERT_Squad_opset10_CPU'`
2. Skip some failing models: https://github.com/onnx/models/issues/568


2022-10-25 10:05:04 +08:00
Edward Chen 454f77cd94
Update kernel matching logic: decouple from op schemas and remove kernel def hashes (#12791)
# Motivation
Currently, ORT minimal builds use kernel def hashes to map from nodes to
kernels to execute when loading the model. As the kernel def hashes must
be known ahead of time, this works for statically registered kernels.
This works well for the CPU EP.
For this approach to work, the kernel def hashes must also be known at
ORT format model conversion time, which means the EP with statically
registered kernels must also be enabled then. This is not an issue for
the always-available CPU EP. However, we do not want to require that any
EP which statically registers kernels is always available too.
Consequently, we explore another approach to match nodes to kernels that
does not rely on kernel def hashes. An added benefit of this is the
possibility of moving away from kernel def hashes completely, which
would eliminate the maintenance burden of keeping the hashes stable.

# Approach
In a full build, ORT uses some information from the ONNX op schema to
match a node to a kernel. We want to avoid including the ONNX op schema
in a minimal build to reduce binary size. Essentially, we take the
necessary information from the ONNX op schema and make it available in a
minimal build.
We decouple the ONNX op schema from the kernel matching logic. The
kernel matching logic instead relies on per-op information which can
either be obtained from the ONNX op schema or another source.
This per-op information must be available in a minimal build when there
are no ONNX op schemas. We put it in the ORT format model.
Existing uses of kernel def hashes to look up kernels are replaced
with the updated kernel matching logic. We no longer store
kernel def hashes in the ORT format model’s session state and runtime
optimization representations. We no longer keep the logic to
generate and ensure stability of kernel def hashes.
2022-09-20 14:24:59 -07:00
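A loose sketch of the idea described above: carry the schema-derived facts needed for kernel matching in the ORT format model instead of relying on kernel def hashes. The struct and fields below are illustrative assumptions, not the actual onnxruntime types introduced by this PR.

```
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative only: the schema-derived information a minimal build would
// need to match a node to a registered kernel without the full ONNX op schema.
struct OpSchemaFacts {
  std::string domain;      // e.g. "" (ONNX) or "com.microsoft"
  std::string op_type;     // e.g. "Conv"
  int since_version = 1;   // opset version the schema was introduced in
  // Map from type-constraint name (e.g. "T") to the input/output indices it
  // binds, so kernel type constraints can be checked against actual types.
  std::unordered_map<std::string, std::vector<int>> type_constraints;
};

// Matching then becomes a lookup by (domain, op_type, opset version) plus a
// type-constraint check, rather than a lookup by precomputed kernel def hash.
```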
Sumit Agarwal f78ed1388a Fixed build break: inbox version of WindowsAI repo 2022-09-09 18:25:01 -07:00
Sumit Agarwal bcdddb47ba Merge remote-tracking branch 'origin/main' into WindowsAI 2022-09-09 17:34:48 -07:00
sumitsays 05c65a54b3
[DML EP] Contrib Op: FusedMatMul (#12898)
* Contrib Op: FusedMatMul for DML EP

* Added relevant comments and extra validation

* Polish

* More polish

* Last polish

* Addressed comment on the PR

* Addressed comment on the PR

* Removed unnecessary comments

* Used C++ standard function

* Used std C++ algorithm functions

* Removed unused code

Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>
Co-authored-by: Dwayne Robinson <fdwr@hotmail.com>
2022-09-09 09:37:38 -07:00
Sheil Kumar 535b0835f2
User/sheilk/dft fixes (#12862)
* DirectML DFT Tests and Fixes

* Dynamically allocate temporaries using the allocator...

* Allocate during compute

* wrong dims

* CR feedback
2022-09-07 13:21:56 -07:00
Yulong Wang 1a402a3f25
replace 'master' branch ref to 'main' for onnx repo (#12678) 2022-08-30 13:41:42 -07:00
Chun-Wei Chen 6246662b1d
[Dup] Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose (#12537)
* Fix SAME_UPPER/SAME_LOWER (auto_pad attribute) in ConvTranspose

* Bump ONNX 1.10.2 globally

* load ONNX_VERSION from VERSION_NUMBER

* /

* revert deprecate warning in ORT 1.12

* add a comment about why removing cntk_simple_seg

* correct the implementation in DML as well
2022-08-22 15:35:34 -07:00
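For reference, the ConvTranspose auto_pad rule the fix above concerns, stated from my reading of the ONNX spec and therefore an assumption rather than the repository's code: the total padding per axis is split with the smaller half at the start for SAME_UPPER and at the end for SAME_LOWER. A hypothetical sketch:

```
#include <cstdint>
#include <utility>

// Assumed per-axis padding split for ConvTranspose auto_pad (per the ONNX
// spec as I read it): total = stride*(in-1) + output_padding
//                             + ((kernel-1)*dilation + 1) - out.
// SAME_UPPER puts the extra pad (when total is odd) at the end of the axis;
// SAME_LOWER puts it at the start.
std::pair<int64_t, int64_t> SplitAutoPad(int64_t total_padding, bool same_upper) {
  const int64_t small_half = total_padding / 2;
  const int64_t large_half = total_padding - small_half;
  return same_upper ? std::make_pair(small_half, large_half)   // {begin, end}
                    : std::make_pair(large_half, small_half);
}
```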
Sheil Kumar 7d712c8f8b
Fix WinML tests still targeting deprecated (deleted) experimental signal op definitions (#12006)
* fix winml tests

* remove legacy test

* switch idft -> dft+inverse attr

* upgrade opset 13->17 for signal ops tests
2022-06-27 16:35:50 -07:00
Gary Miguel e8b0d24071
Support per-test tolerances for ONNX tests (#11775)
Prior to this, every test shared the same tolerances. This meant
that if an ONNX test failed due to a small but acceptable difference in
output, the only alternative was to disable the test entirely.

In opset 17, the DFT operator is being added. Without this change, the
tests for that operator fail because the output is off by about 5e-5.
It's better to keep test coverage for this new op rather than disable
the test entirely.

Also prior to this change, the global tolerances were not shared between
C++, JavaScript, and Python tests. Now they are.

Also fix various minor issues raised by linters.

Unblocks https://github.com/microsoft/onnxruntime/issues/11640.
2022-06-14 15:12:23 -07:00
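A rough sketch of what per-test tolerances can look like; the map, key format, and default values below are assumptions for illustration, not the actual onnxruntime test-runner configuration.

```
#include <cmath>
#include <string>
#include <unordered_map>

struct Tolerance {
  double absolute;
  double relative;
};

// Hypothetical override table keyed by test name; anything not listed falls
// back to the global defaults that all tests previously shared.
const std::unordered_map<std::string, Tolerance> kPerTestTolerance = {
    {"test_dft", {1e-4, 1e-4}},  // DFT output differs by ~5e-5, so widen slightly
};

Tolerance GetTolerance(const std::string& test_name) {
  const Tolerance kDefault{1e-5, 1e-5};
  auto it = kPerTestTolerance.find(test_name);
  return it == kPerTestTolerance.end() ? kDefault : it->second;
}

bool IsClose(double expected, double actual, const Tolerance& tol) {
  return std::fabs(expected - actual) <= tol.absolute + tol.relative * std::fabs(expected);
}
```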
Sheil Kumar 22739137c4
Update signal op defs to match onnx17 defs, and add more tests (#11631) 2022-05-28 16:00:09 -07:00
Sheil Kumar 6255194659
All LearningModelSessions created from a common LearningModelDevice should share the same thread pool (#11457)
* Share thread pools between devices

* make tests reuse device

* Change cpu thread pool options for dml sessions to use 1 thread with no spinning

* fix test failure

* Update missing type constraints for dft

* Add comment and rename inference session parameter

* default missing causing inconsistent test behavior

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2022-05-13 11:12:43 -07:00
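For context, the corresponding knobs in the public ORT C++ API look roughly like this: a sketch assuming the standard Ort::SessionOptions surface rather than the internal WinML path this commit actually edits, with a placeholder model path.

```
#include <onnxruntime_cxx_api.h>

// Sketch: configure one intra-op thread with spinning disabled, as described
// for DML sessions above. "session.intra_op.allow_spinning" is an ORT session
// config key; the model path argument here is a placeholder.
Ort::Session CreateDmlStyleSession(Ort::Env& env, const ORTCHAR_T* model_path) {
  Ort::SessionOptions options;
  options.SetIntraOpNumThreads(1);
  options.AddConfigEntry("session.intra_op.allow_spinning", "0");
  return Ort::Session(env, model_path, options);
}
```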
Sheil Kumar 85fa168dc1
Add optional dft_length input to the DFT and IDFT operators. (#11427)
* Add optional dft_length input.

* CR Feedback

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2022-05-03 16:17:43 -07:00
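A small sketch of what the optional dft_length input means for input handling, as I understand the DFT contract: the signal is truncated or zero-padded along the DFT axis to dft_length before the transform. The helper below is illustrative only, not the DirectML EP code.

```
#include <algorithm>
#include <complex>
#include <cstdint>
#include <vector>

// Illustrative helper: truncate or zero-pad a 1-D signal to dft_length before
// running the DFT, which is the effect of the new optional input.
std::vector<std::complex<float>> ApplyDftLength(const std::vector<std::complex<float>>& signal,
                                                int64_t dft_length) {
  std::vector<std::complex<float>> out(static_cast<std::size_t>(dft_length),
                                       std::complex<float>(0.0f, 0.0f));
  const std::size_t copy_count =
      std::min(signal.size(), static_cast<std::size_t>(dft_length));
  for (std::size_t i = 0; i < copy_count; ++i) {
    out[i] = signal[i];
  }
  return out;
}
```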
Sheil Kumar 027565b3b2
Add multi-dim dft test, and fix complex idft (#10947)
* fix complex multi-dim dft

* Add multi-dim dft test, and fix complex idft

* remove incorrect inplace specification

* Add DFT tests

* update epsilon to 1000ths place

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2022-03-22 10:08:12 -07:00
Sheil Kumar 810c18e809
fix complex multi-dim dft (#10896)
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2022-03-17 12:45:51 -07:00
Sheil Kumar 860f28254e
Update DFT definition to more closely align with PyTorch by enabling axis attribute, and arbitrary tensor rank. (#10842)
* Add axis attribute

* fix breaks

* Enable axis-specified DFT

* remove static cast

Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2022-03-15 15:27:12 -07:00
Jingqiao Fu f4fd67cc2c
Revert "add load from buffer (#10162)" (#10590)
This reverts commit 5cd57bb726.
2022-03-08 13:35:23 -08:00
Numfor Tiapo 9ad95bf068
Skip SetName test on inbox build (#10699) 2022-03-02 10:28:58 -08:00
Numfor Tiapo 5fbfca3d58
Add Experimental API for setting model name (#10518)
* Add experimental API for editing model name

* Change EditModelName to 'SetName'

* Change API to pass c_string

* Update SetName to edit the proto

* Test that the model proto gets changed

* Remove comments

* Skip inbox tests

* Use filehelper path

Co-authored-by: Numfor Mbiziwo-Tiapo <numform@microsoft.com>
2022-02-25 14:23:49 -08:00
Dwayne Robinson 6db6ee5710 Merged PR 6973543: ORT DML EP Opset 13 more complete
Extend opset 13 support for:
- Split-13
- Squeeze-13
- Unsqueeze-13
- Reshape-13
- QuantizeLinear-13
- DequantizeLinear-13
- ReduceSum-13
- Resize-13

Also:
- Rename the file where all the opset versions are stored from "OperatorRegistration.h" to "OperatorVersions.h", which will make it much less confusing in the future, given there's another file called "OperatorRegistration.h" that corresponds to "OperatorRegistration.cpp".
- Detemplatize many of the OperatorHelper.h constructors, which duplicate multiple instantiations due to the operator helper classes not sharing a common base class, by wrapping them with an adapter. Ideally there would be a common COM base interface that both IMLOperatorKernelCreationContext and IMLOperatorShapeInferenceContext implementation objects would implement, which a wrapper in MLOperatorAuthorHelper.h could QI for.
- Fix style formatting issues in OperatorHelper.h (sorry for the noise).

```
Summary: Total=4679, Passed=4355, Failed=0, Blocked=0, Not Run=0, Skipped=324
```

Corresponding WindowsAI PR:
https://microsoft.visualstudio.com/WindowsAI/_git/WindowsAI/pullrequest/6973645

Related work items: #36672908, #36672926
2022-02-18 01:41:07 +00:00
Ryan Lai c07e251cec Merged PR 6835169: RI 12/9/21 - 01/12/22
Build is green https://microsoft.visualstudio.com/WindowsAI/_build/results?buildId=43713985&view=results

![image.png](https://microsoft.visualstudio.com/274e76ac-6b29-4f77-a85d-7914c77cabd5/_apis/git/repositories/853d2ddc-663c-4fe8-8036-dbf0d50db2d9/pullRequests/6835169/attachments/image.png)

Related work items: #37712737
2022-01-13 00:25:51 +00:00
Dwayne Robinson 4ff78aae45
Merge pull request #9917 from microsoft/user/dwayner/FnsCandyTolerance30696168
Update WinML model tests for FNS candy and Inception float16
2021-12-02 22:45:45 -08:00
Sheil Kumar 5edaa75ef6
Fix LoadFromStream to not use wss::Buffer internally (#9918)
Co-authored-by: Sheil Kumar <sheilk@microsoft.com>
2021-12-02 21:29:06 -08:00
Dwayne Robinson 6e4c534ce2 Relax tolerance slightly more for Intel after autopilot run 2021-12-02 19:42:31 -08:00
Dwayne Robinson 77e67a6de7 Add one more example line 2021-12-02 13:34:01 -08:00
Dwayne Robinson ef7671b938 Comment out old lines 2021-12-02 13:30:34 -08:00