ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator
[Fuzzer] Add two new ORT libfuzzer (Linux clang support for now) (#22055)
### Description
This PR adds two new libFuzzer targets to the fuzzer project:
1. A binary libFuzzer
2. A libprotobuf-fuzzer

To compile, run the command below on Linux:
```
LLVM_PROFILE_FILE="%p.profraw" \
CFLAGS="-g -fsanitize=address,fuzzer-no-link -shared-libasan -fprofile-instr-generate -fcoverage-mapping" \
CXXFLAGS="-g -shared-libasan -fsanitize=address,fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping" \
CC=clang CXX=clang++ \
./build.sh --update --build --config Debug --compile_no_warning_as_error \
  --build_shared_lib --skip_submodule_sync --use_full_protobuf \
  --parallel --fuzz_testing --build_dir build/
```
Run the fuzzer:
```
LD_PRELOAD=$(clang -print-file-name=libclang_rt.asan-x86_64.so) \
build/Debug/onnxruntime_libfuzzer_fuzz testinput \
  -rss_limit_mb=8196 -max_total_time=472800 \
  -fork=2 -jobs=4 -workers=4 -ignore_crashes=1 -max_len=2097152 \
  2>&1 | grep -v "\[libprotobuf ERROR"
```
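
libFuzzer mutates whatever is in the corpus directory (`testinput` above), so seeding it with small, well-formed models gives the mutator valid protobuf bytes to start from. Below is a minimal sketch that generates one seed model; it assumes the `onnx` Python package, and the file name `seed_relu.onnx` is just an illustrative choice:

```python
# Sketch: seed the fuzzer corpus with a tiny valid ONNX model.
# Assumes `pip install onnx`; "testinput" matches the corpus dir above.
import os
import onnx
from onnx import TensorProto, helper

node = helper.make_node("Relu", ["x"], ["y"])
graph = helper.make_graph(
    [node],
    "tiny_seed",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])
onnx.checker.check_model(model)  # make sure the seed is well-formed

os.makedirs("testinput", exist_ok=True)
onnx.save(model, os.path.join("testinput", "seed_relu.onnx"))
```

Any real `.onnx` files you already have can be dropped into the same directory; a more diverse corpus generally improves coverage.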


### Motivation and Context
The existing custom fuzzer is not coverage-guided; it is slow and mutates one model at a time. The new fuzzers are coverage-guided, and we can use more model files as a corpus to increase coverage.

ONNX Runtime is a cross-platform inference and training machine-learning accelerator.

ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. Learn more →
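
As a concrete illustration, a minimal Python inference call looks like the sketch below; it assumes the `onnxruntime` package, and `model.onnx` plus the 1x3x224x224 input shape are placeholders for your own exported model:

```python
# Sketch: load an exported model and run one inference on CPU.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # discover the model's input name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
outputs = session.run(None, {input_name: x})  # None -> return all outputs
print(outputs[0].shape)
```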

ONNX Runtime training can accelerate model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition to existing PyTorch training scripts. Learn more →
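
The one-line addition is wrapping the existing module, roughly as in this sketch (it assumes the `onnxruntime-training` package, which provides `ORTModule`; the toy model and data are placeholders):

```python
# Sketch: accelerate an existing PyTorch training step with ORTModule.
import torch
from onnxruntime.training import ORTModule  # from the onnxruntime-training package

model = torch.nn.Linear(10, 2)  # stands in for your real model
model = ORTModule(model)        # the one-line addition

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(4, 10)
loss = model(x).sum()  # forward now runs through ONNX Runtime
loss.backward()        # so does backward
optimizer.step()
# (training builds typically target NVIDIA GPUs; a CPU toy model is shown for brevity)
```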

Get Started & Resources

Builtin Pipeline Status

[Build-status badge table: inference and training pipelines for Windows, Linux, Mac, Android, iOS, Web, and other platforms.]

Third-party Pipeline Status

[Build-status badge table: Linux third-party pipeline.]

Data/Telemetry

Windows distributions of this project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.

Contributions and Feedback

We welcome contributions! Please see the contribution guidelines.

For feature requests or bug reports, please file a GitHub Issue.

For general discussion or questions, please use GitHub Discussions.

Code of Conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

License

This project is licensed under the MIT License.