Add better native nuget package readme (#21889)

### Description
<!-- Describe your changes. -->
Request from the NuGet team to add a better readme to the NuGet package so
it is displayed nicely on nuget.org.

Previously we were using the ORT repo README.md, but that a) doesn't
display correctly due to the limited markdown support on nuget.org, and b)
contains a lot of irrelevant info such as build pipeline status.

- Created a generic readme.md that includes the ORT description from the
main readme, includes the ORT logo via an acceptable link, and lists the
native NuGet packages so the file can be included in any of them as-is.
- Updated the NuGet packaging script to add the `readme` tag and use
this file.
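For illustration, a minimal Python sketch of the two nuspec pieces the packaging script now emits — the `<readme>` metadata element and the `<file>` entry that packs the new readme. The `sources_path` value here is hypothetical; the real script takes it from command-line arguments.

```python
import os

# Sketch of the two nuspec entries added by the packaging script.
# sources_path is a hypothetical example; the real script reads it from args.
sources_path = "/src/onnxruntime"

# <readme> names the README's path *inside* the package.
readme_metadata = "<readme>README.md</readme>"

# The <file> entry packs the generic nupkg.README.md into the package
# root as README.md, matching the <readme> metadata above.
readme_file = (
    '<file src="'
    + os.path.join(sources_path, "tools/nuget/nupkg.README.md")
    + '" target="README.md" />'
)

print(readme_metadata)
print(readme_file)
```

The key constraint is that the `<readme>` value and the `<file>` target name the same in-package path, or nuget.org will not find the file.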


### Motivation and Context
<!-- - Why is this change required? What problem does it solve?
- If it fixes an open issue, please link to the issue here. -->
Request from the MS NuGet team to MS package owners.
This commit is contained in:
Scott McKay 2024-09-06 08:28:14 +10:00 committed by GitHub
Parent c7d0ded079
Commit 20c802afd4
No known key found for this signature
GPG key ID: B5690EEEBB952194
2 changed files: 60 additions and 1 deletion


@@ -213,6 +213,10 @@ def generate_repo_url(line_list, repo_url, commit_id):
     line_list.append('<repository type="git" url="' + repo_url + '"' + ' commit="' + commit_id + '" />')


+def generate_readme(line_list):
+    line_list.append("<readme>README.md</readme>")
+
+
 def add_common_dependencies(xml_text, package_name, version):
     xml_text.append('<dependency id="Microsoft.ML.OnnxRuntime.Managed"' + ' version="' + version + '"/>')
     if package_name == "Microsoft.ML.OnnxRuntime.Gpu":
@@ -327,6 +331,7 @@ def generate_metadata(line_list, args):
     generate_license(metadata_list)
     generate_project_url(metadata_list, "https://github.com/Microsoft/onnxruntime")
     generate_repo_url(metadata_list, "https://github.com/Microsoft/onnxruntime.git", args.commit_id)
+    generate_readme(metadata_list)
     generate_dependencies(metadata_list, args.package_name, args.package_version)
     generate_release_notes(metadata_list, args.sdk_info)
     metadata_list.append("</metadata>")
@@ -1045,7 +1050,9 @@ def generate_files(line_list, args):
     )

     # README
-    files_list.append("<file src=" + '"' + os.path.join(args.sources_path, "README.md") + '" target="README.md" />')
+    files_list.append(
+        "<file src=" + '"' + os.path.join(args.sources_path, "tools/nuget/nupkg.README.md") + '" target="README.md" />'
+    )

     # Process License, ThirdPartyNotices, Privacy
     files_list.append("<file src=" + '"' + os.path.join(args.sources_path, "LICENSE") + '" target="LICENSE" />')
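The `generate_readme` helper and the updated `<file>` entry must agree on the in-package path; a small self-contained sketch of the pairing (the `generate_readme` body is from the diff above, the driver code around it is hypothetical):

```python
def generate_readme(line_list):
    # Path is relative to the package root and must match the <file> target.
    line_list.append("<readme>README.md</readme>")

# Hypothetical driver mimicking generate_metadata's accumulation pattern:
# each helper appends one or more XML lines to a shared list.
metadata_list = ["<metadata>"]
generate_readme(metadata_list)
metadata_list.append("</metadata>")

nuspec_fragment = "\n".join(metadata_list)
print(nuspec_fragment)
```

Accumulating plain strings into a list and joining at the end keeps the script dependency-free, at the cost of no XML validation.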


@@ -0,0 +1,52 @@
## About

![ONNX Runtime Logo](https://raw.githubusercontent.com/microsoft/onnxruntime/main/docs/images/ONNX_Runtime_logo_dark.png)

**ONNX Runtime is a cross-platform machine-learning inferencing accelerator**.

**ONNX Runtime** can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc.

ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms.

Learn more &rarr; [here](https://www.onnxruntime.ai/docs)

## NuGet Packages

### ONNX Runtime Native packages

#### Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
  - https://onnxruntime.ai/docs/execution-providers/CoreML-ExecutionProvider.html
- XNNPACK Execution Provider on Android/iOS
  - https://onnxruntime.ai/docs/execution-providers/Xnnpack-ExecutionProvider.html

#### Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
  - https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html
- CUDA Execution Provider
  - https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
- CPU Execution Provider

#### Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
  - https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html
- CPU Execution Provider

#### Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
  - https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html
- CPU Execution Provider

### Other packages

#### Microsoft.ML.OnnxRuntime.Managed
- C# language bindings

#### Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
  - https://github.com/microsoft/onnxruntime-extensions