
πŸͺ Support opversion conversion for models with external data (#1322)

## Describe your changes

The opset version conversion pass now handles models whose weights are stored as external data: the model is loaded without its external data to read the opset version, the opset is converted, the external-data location info from the original initializers is copied onto the converted initializers, and the external tensors are then loaded from the source model's directory before the converted model is saved.
## Checklist before requesting a review
- [ ] Add unit tests for this change.
- [ ] Make sure all tests can pass.
- [ ] Update documents if necessary.
- [ ] Lint and apply fixes to your code by running `lintrunner -a`
- [ ] Is this a user-facing change? If yes, give a description of this
change to be included in the release notes.
- [ ] Is this PR including examples changes? If yes, please remember to
update [example
documentation](https://github.com/microsoft/Olive/blob/main/docs/source/examples.md)
in a follow-up PR.

## (Optional) Issue link
This commit is contained in:
trajep 2024-08-21 16:16:08 +08:00 committed by GitHub
Parent 3dcd8c8d12
Commit 7ab3f92a7b
No known key found for this signature
GPG key ID: B5690EEEBB952194
4 changed files with 25 additions and 8 deletions

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -25,7 +25,7 @@ repos:
     exclude: examples/
 - repo: https://github.com/astral-sh/ruff-pre-commit
   # Ruff version.
-  rev: v0.4.5
+  rev: v0.6.0
   hooks:
   - id: ruff
     args: [ --fix ]

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -26,4 +26,3 @@ $ olive export-adapters [-h] [--adapter_path ADAPTER_PATH] \
     [--int4_block_size {16,32,64,128,256}] \
     [--int4_quantization_mode {symmetric,asymmetric}]
 ```

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -10,6 +10,7 @@ from pathlib import Path
 from typing import Any, Dict, Optional, Tuple, Union

 import onnx
+import onnx.external_data_helper
 import torch
 from packaging import version
@@ -517,13 +518,26 @@ class OnnxOpVersionConversion(Pass):
     def _run_for_config(
         self, model: ONNXModelHandler, config: Dict[str, Any], output_model_path: str
     ) -> ONNXModelHandler:
-        # get current models's opset version
-        model_proto = model.load_model()
+        output_model_path = resolve_onnx_path(output_model_path)
+
+        # since external data is saved in a separate file, we need to load the model to get the opset version
+        model_proto = onnx.load(model.model_path, load_external_data=False)
         model_opset_version = model_proto.opset_import[0].version
         if model_opset_version == config["target_opset"]:
             logger.info("Model is already in target opset version %s.", config["target_opset"])
             return model

-        output_model_path = resolve_onnx_path(output_model_path)
-        model_proto = onnx.version_converter.convert_version(model_proto, config["target_opset"])
-        return model_proto_to_olive_model(model_proto, output_model_path, config)
+        converted_model_proto = onnx.version_converter.convert_version(model_proto, config["target_opset"])
+        # copy the external data of original model to the new model
+        dst_init_map = {init.name: init for init in converted_model_proto.graph.initializer}
+        for src_init in model_proto.graph.initializer:
+            if (
+                src_init.name in dst_init_map
+                and src_init.HasField("data_location")
+                and src_init.data_location == onnx.TensorProto.EXTERNAL
+            ):
+                dst_init_map[src_init.name].CopyFrom(src_init)
+        onnx.external_data_helper.load_external_data_for_model(
+            converted_model_proto, str(Path(model.model_path).resolve().parent)
+        )
+        return model_proto_to_olive_model(converted_model_proto, output_model_path, config)
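
For readers unfamiliar with ONNX external data, the following standalone sketch restates the technique used by this pass with plain `onnx` APIs: load the graph without its weights, convert the opset, copy the external-data location metadata from the original initializers onto the converted ones, then read the tensor payloads back in. The file names (`model.onnx`, `model_opset21.onnx`) and the target opset are illustrative assumptions, not part of this PR.

```python
from pathlib import Path

import onnx
import onnx.external_data_helper
from onnx import version_converter

# Illustrative path; any ONNX model saved with external data will do.
src_path = Path("model.onnx")

# Load only the graph structure; tensor payloads stay in the external file(s).
model_proto = onnx.load(str(src_path), load_external_data=False)

# Convert the opset. The converter may rebuild initializers, so the external
# data_location metadata is restored from the original protos afterwards.
converted = version_converter.convert_version(model_proto, 21)
dst_inits = {init.name: init for init in converted.graph.initializer}
for src_init in model_proto.graph.initializer:
    if (
        src_init.name in dst_inits
        and src_init.HasField("data_location")
        and src_init.data_location == onnx.TensorProto.EXTERNAL
    ):
        dst_inits[src_init.name].CopyFrom(src_init)

# Read the external tensors from the source model's directory, then save the
# converted model as a regular self-contained file.
onnx.external_data_helper.load_external_data_for_model(converted, str(src_path.resolve().parent))
onnx.save(converted, "model_opset21.onnx")
```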

ΠŸΡ€ΠΎΡΠΌΠΎΡ‚Ρ€Π΅Ρ‚ΡŒ Ρ„Π°ΠΉΠ»

@@ -64,7 +64,11 @@ def test_onnx_conversion_pass_quant_model(add_quantized_modules, tmp_path):
 def test_onnx_op_version_conversion_pass(target_opset, tmp_path):
     input_model = get_onnx_model()
     # setup
-    p = create_pass_from_dict(OnnxOpVersionConversion, {"target_opset": target_opset}, disable_search=True)
+    p = create_pass_from_dict(
+        OnnxOpVersionConversion,
+        {"target_opset": target_opset},
+        disable_search=True,
+    )
     output_folder = str(tmp_path / "onnx")
     onnx_model = p.run(input_model, output_folder)
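
The reformatted test above still builds its input with `get_onnx_model()`, i.e. a model without external data. A possible follow-up test for the new code path is sketched below; it is an assumption, not part of this PR, and presumes the test module's existing helpers (`get_onnx_model`, `create_pass_from_dict`, `OnnxOpVersionConversion`) plus the `onnx` and `olive.model.ONNXModelHandler` imports shown.

```python
import onnx
from olive.model import ONNXModelHandler


def test_onnx_op_version_conversion_pass_external_data(tmp_path):
    # Hypothetical test: re-save the toy model with every tensor in an external
    # data file, then run the pass on that copy.
    ext_model_path = str(tmp_path / "model_ext.onnx")
    onnx.save_model(
        get_onnx_model().load_model(),
        ext_model_path,
        save_as_external_data=True,
        all_tensors_to_one_file=True,
        location="model_ext.data",
        size_threshold=0,
    )
    input_model = ONNXModelHandler(model_path=ext_model_path)

    # Arbitrary target opset, chosen for illustration only.
    p = create_pass_from_dict(OnnxOpVersionConversion, {"target_opset": 16}, disable_search=True)
    output_folder = str(tmp_path / "onnx")
    onnx_model = p.run(input_model, output_folder)
    assert onnx_model.load_model().opset_import[0].version == 16
```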