Olive

Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation. Given a model and targeted hardware, Olive composes the most suitable optimization techniques to output the most efficient model(s) for inference on cloud or edge, while taking constraints such as accuracy and latency into consideration.

Since every ML accelerator vendor implements its own acceleration toolchain to make the most of its hardware, hardware-aware optimizations are fragmented. With Olive, we can:

Reduce engineering effort for optimizing models for cloud and edge: Developers must learn and use multiple hardware vendor-specific toolchains to prepare and optimize a trained model for deployment. Olive aims to simplify the experience by aggregating and automating optimization techniques for the desired hardware targets.

Build up a unified optimization framework: Given that no single optimization technique serves all scenarios well, Olive enables an extensible framework that allows industry to easily plug in their optimization innovations. Olive can efficiently compose and tune the integrated techniques to offer a ready-to-use E2E optimization solution.
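
For a concrete sense of the workflow, an optimization run is described by a JSON configuration (input model, target hardware, evaluation metrics, and the passes to apply) and launched from the command line. A minimal sketch, assuming a hypothetical config file named my_model_config.json:

# launch an Olive workflow described by a JSON configuration
# (my_model_config.json is a placeholder for your own workflow config)
python -m olive.workflows.run --config my_model_config.json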

Get Started and Resources

Installation

We recommend installing Olive in a virtual environment or a conda environment. Olive is installed using pip.

Create a virtual/conda environment with the desired version of Python and activate it.
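
For example, with conda (the environment name and Python version below are only illustrative):

# create and activate a fresh environment for Olive (name and version are examples)
conda create -n olive-env python=3.9
conda activate olive-env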

You will also need a build of onnxruntime. You can install the desired build separately, but public versions of onnxruntime can also be installed as extra dependencies during Olive installation.

Install with pip

Olive is available for installation from PyPI.

pip install olive-ai

With onnxruntime (Default CPU):

pip install olive-ai[cpu]

With onnxruntime-gpu:

pip install olive-ai[gpu]

With onnxruntime-directml:

pip install olive-ai[directml]

Optional Dependencies

Olive has optional dependencies that can be installed to enable additional features. These dependencies can be installed as extras:

  • azureml: To enable AzureML integration. Packages: azure-ai-ml, azure-identity
  • docker: To enable docker integration. Packages: docker
  • openvino: To use OpenVINO related passes. Packages: openvino==2022.3.0, openvino-dev[tensorflow,onnx]==2022.3.0
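
For example, one or more of these extras can be requested together with the base package (the extra names are the ones listed above; combine them as needed):

# install Olive together with the AzureML and Docker extras (example combination)
pip install olive-ai[azureml,docker]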

Contributing

We'd love to embrace your contribution to Olive. Please refer to CONTRIBUTING.md.

Formatting

Olive uses pre-commit hooks to check and format code. To install the pre-commit hooks, run the following commands from the root of the repository:

# install pre-commit and other dev requirements
python -m pip install pre-commit
# install the git hook scripts
pre-commit install
# for the first time, run on all files
pre-commit run --all-files

Every time you make a git commit, the hooks automatically check the changed files, point out issues, and fix them where possible.

License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT License.