* checking out local deploy

* added code for local deploy

* added src files

* snippets added and notebook restructuring

* AzureML Training Run Removed

* deploy-local-cli added with changes

* Variable generalisation with reference to the doc.

* Comments Resolved

* MODEL_ID added

* python readme run

* inputs changed to default docs

* comments resolved

* cleaned the code

* Metadata cell name added

* MetaData Cell Name Space Added

* MetaData Cell Name '-' Added

* removing checkpoints, adding workflow

* cleaning up strings

* removing triton description

* typo in workflow

* typo

* also needed to modify working directory

* adding readme

Co-authored-by: HiteshTetarwal <imhitesh007@gmail.com>
Co-authored-by: ArzooAneja <arzoo@DESKTOP-NQ30L8D.localdomain>
Co-authored-by: ArzooAneja <arzooanejamca@gmail.com>
Gopal Vashishtha 2021-05-10 16:42:54 -04:00 committed by GitHub
Parent 069c65dd04
Commit 064b351d17
11 changed files: 1077 additions and 0 deletions

.github/workflows/tutorial-deploy-local.yml

@@ -0,0 +1,37 @@
name: tutorial-deploy-local
on:
  schedule:
    - cron: "0 */2 * * *"
  pull_request:
    branches:
      - main
    paths:
      - tutorials/deploy-local/**
      - .github/workflows/tutorial-deploy-local.yml
      - requirements.txt
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: check out repo
        uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: "3.8"
      - name: pip install
        run: pip install -r requirements.txt
      - name: azure login
        uses: azure/login@v1
        with:
          creds: ${{secrets.AZ_AE_CREDS}}
      - name: install azmlcli
        run: az extension add -n azure-cli-ml -y
      - name: attach to workspace
        run: az ml folder attach -w default -g azureml-examples
      - name: run 1.deploy-local.ipynb
        run: papermill 1.deploy-local.ipynb - -k python
        working-directory: tutorials/deploy-local
      - name: run 2.deploy-local-cli.ipynb
        run: papermill 2.deploy-local-cli.ipynb - -k python
        working-directory: tutorials/deploy-local

README.md

@@ -57,6 +57,7 @@ path|status|notebooks|description
-|-|-|-
[an-introduction](tutorials/an-introduction)|[![an-introduction](https://github.com/Azure/azureml-examples/workflows/tutorial-an-introduction/badge.svg)](https://github.com/Azure/azureml-examples/actions?query=workflow%3Atutorial-an-introduction)|[1.hello-world.ipynb](tutorials/an-introduction/1.hello-world.ipynb)<br>[2.pytorch-model.ipynb](tutorials/an-introduction/2.pytorch-model.ipynb)<br>[3.pytorch-model-cloud-data.ipynb](tutorials/an-introduction/3.pytorch-model-cloud-data.ipynb)|Run "hello world" and train a simple model on Azure Machine Learning.
[automl-with-pycaret](tutorials/automl-with-pycaret)|[![automl-with-pycaret](https://github.com/Azure/azureml-examples/workflows/tutorial-automl-with-pycaret/badge.svg)](https://github.com/Azure/azureml-examples/actions?query=workflow%3Atutorial-automl-with-pycaret)|[1.classification.ipynb](tutorials/automl-with-pycaret/1.classification.ipynb)|Learn how to use [PyCaret](https://github.com/pycaret/pycaret) for automated machine learning, with tracking and scaling in Azure ML.
[deploy-local](tutorials/deploy-local)|[![deploy-local](https://github.com/Azure/azureml-examples/workflows/tutorial-deploy-local/badge.svg)](https://github.com/Azure/azureml-examples/actions?query=workflow%3Atutorial-deploy-local)|[1.deploy-local.ipynb](tutorials/deploy-local/1.deploy-local.ipynb)<br>[2.deploy-local-cli.ipynb](tutorials/deploy-local/2.deploy-local-cli.ipynb)|*no description*
[using-dask](tutorials/using-dask)|[![using-dask](https://github.com/Azure/azureml-examples/workflows/tutorial-using-dask/badge.svg)](https://github.com/Azure/azureml-examples/actions?query=workflow%3Atutorial-using-dask)|[1.intro-to-dask.ipynb](tutorials/using-dask/1.intro-to-dask.ipynb)|Learn how to read from cloud data and scale PyData tools (Numpy, Pandas, Scikit-Learn, etc.) with [Dask](https://dask.org) and Azure ML.
[using-rapids](tutorials/using-rapids)|[![using-rapids](https://github.com/Azure/azureml-examples/workflows/tutorial-using-rapids/badge.svg)](https://github.com/Azure/azureml-examples/actions?query=workflow%3Atutorial-using-rapids)|[1.train-and-hpo.ipynb](tutorials/using-rapids/1.train-and-hpo.ipynb)<br>[2.train-multi-gpu.ipynb](tutorials/using-rapids/2.train-multi-gpu.ipynb)|Learn how to accelerate PyData tools (Numpy, Pandas, Scikit-Learn, etc.) on NVIDIA GPUs with [RAPIDS](https://github.com/rapidsai) and Azure ML.

tutorials/deploy-local/1.deploy-local.ipynb

@@ -0,0 +1,512 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fd969e25",
"metadata": {
"name": "deploy-models-to-azure"
},
"source": [
"# Deploy machine learning models to Azure\n",
"\n",
"description: (preview) deploy your machine learning or deep learning model as a web service in the Azure cloud."
]
},
{
"cell_type": "markdown",
"id": "996082fd",
"metadata": {
"name": "connect-to-workspace"
},
"source": [
"## Connect to your workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a9f14ab",
"metadata": {
"name": "connect-to-workspace-code"
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"ws"
]
},
{
"cell_type": "markdown",
"id": "9057aaf6",
"metadata": {
"name": "register-the-model"
},
"source": [
"## Register your model\n",
"\n",
"A registered model is a logical container stored in the cloud, containing all files located at `model_path`, which is associated with a version number and other metadata.\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "cfe3cd0e",
"metadata": {
"name": "register-model-from-local-file"
},
"source": [
"## Register a model from a local file\n",
"\n",
"You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbf6e9da",
"metadata": {
"name": "register-model-from-local-file-code"
},
"outputs": [],
"source": [
"import urllib.request\n",
"from azureml.core.model import Model\n",
"\n",
"# Download model\n",
"urllib.request.urlretrieve(\"https://aka.ms/bidaf-9-model\", \"model.onnx\")\n",
"\n",
"# Register model\n",
"model = Model.register(ws, model_name=\"bidaf_onnx\", model_path=\"./model.onnx\")"
]
},
{
"cell_type": "markdown",
"id": "fc200fe9",
"metadata": {
"name": "inference-configuration"
},
"source": [
"## Define an inference configuration\n",
"\n",
"The inference configuration below specifies that the machine learning deployment will use the file echo_score.py in the ./source_dir directory to process incoming requests and that it will use the Docker image with the Python packages specified in the project_environment environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9e5efc8",
"metadata": {
"name": "inference-configuration-code"
},
"outputs": [],
"source": [
"from azureml.core import Environment\n",
"from azureml.core.model import InferenceConfig\n",
"\n",
"env = Environment(name=\"project_environment\")\n",
"dummy_inference_config = InferenceConfig(\n",
" environment=env, source_directory=\"./source_dir\", entry_script=\"./echo_score.py\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c850e5f4",
"metadata": {
"name": "deployment-configuration"
},
"source": [
"## Define a deployment configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "87a6fb85",
"metadata": {
"name": "deployment-configuration-code"
},
"outputs": [],
"source": [
"from azureml.core.webservice import LocalWebservice\n",
"\n",
"deployment_config = LocalWebservice.deploy_configuration(port=6789)"
]
},
{
"cell_type": "markdown",
"id": "f5d34207",
"metadata": {
"name": "deploy-model"
},
"source": [
"## Deploy your machine learning model\n",
"\n",
"A deployment configuration specifies the amount of memory and cores to reserve for your webservice will require in order to run, as well as configuration details of the underlying webservice. For example, a deployment configuration lets you specify that your service needs 2 gigabytes of memory, 2 CPU cores, 1 GPU core, and that you want to enable autoscaling.\n",
"\n",
"The options available for a deployment configuration differ depending on the compute target you choose. In a local deployment, all you can specify is which port your webservice will be served on."
]
},
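{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch only (not executed in this tutorial; the AKS target and the resource values below are assumptions), a remote deployment configuration with the resources described above might look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from azureml.core.webservice import AksWebservice\n",
"\n",
"# hypothetical remote configuration: 2 CPU cores, 2 GB of memory,\n",
"# 1 GPU core, autoscaling enabled (requires an AKS compute target)\n",
"aks_config = AksWebservice.deploy_configuration(\n",
" cpu_cores=2, memory_gb=2, gpu_cores=1, autoscale_enabled=True\n",
")"
]
},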
{
"cell_type": "code",
"execution_count": null,
"id": "9762e02b",
"metadata": {
"name": "deploy-model-code",
"scrolled": true
},
"outputs": [],
"source": [
"service = Model.deploy(\n",
" ws, \"myservice\", [model], dummy_inference_config, deployment_config, overwrite=True\n",
")\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ede851f8",
"metadata": {
"name": "deploy-model-print-logs"
},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{
"cell_type": "markdown",
"id": "1b9d47b4",
"metadata": {
"name": "call-into-model"
},
"source": [
"## Call into your model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45e92e91",
"metadata": {
"name": "call-into-model-code"
},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"\n",
"uri = service.scoring_uri\n",
"requests.get(\"http://localhost:6789\")\n",
"headers = {\"Content-Type\": \"application/json\"}\n",
"data = {\n",
" \"query\": \"What color is the fox\",\n",
" \"context\": \"The quick brown fox jumped over the lazy dog.\",\n",
"}\n",
"data = json.dumps(data)\n",
"response = requests.post(uri, data=data, headers=headers)\n",
"print(response.json())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define a new inference configuration\n",
"\n",
"Now that we've deployed with a dummy entry script, let's try using a real entry script."
]
},
{
"cell_type": "markdown",
"id": "34c9e82c",
"metadata": {
"name": "notice"
},
"source": [
"Notice the use of the AZUREML_MODEL_DIR environment variable to locate your registered model. Now that you've added some pip packages, you also need to update your inference configuration to add in those additional packages"
]
},
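{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, here is a minimal sketch of how an entry script resolves the model path (this mirrors what score.py in this tutorial does; the fallback default is an assumption so the snippet also runs outside a deployment):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# AZUREML_MODEL_DIR points at the folder where registered model files\n",
"# are mounted inside the deployed container\n",
"model_path = os.path.join(os.getenv(\"AZUREML_MODEL_DIR\", \".\"), \"model.onnx\")\n",
"print(model_path)"
]
},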
{
"cell_type": "code",
"execution_count": null,
"id": "71b9d8a2",
"metadata": {
"name": "add-pip-package"
},
"outputs": [],
"source": [
"env = Environment(name=\"myenv\")\n",
"python_packages = [\"nltk\", \"numpy\", \"onnxruntime\"]\n",
"for package in python_packages:\n",
" env.python.conda_dependencies.add_pip_package(package)\n",
"\n",
"inference_config = InferenceConfig(\n",
" environment=env, source_directory=\"./source_dir\", entry_script=\"./score.py\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "08c0c387",
"metadata": {
"name": "re-deploy-model"
},
"source": [
"## Deploy again and call your service"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fd084966",
"metadata": {
"name": "re-deploy-model-code"
},
"outputs": [],
"source": [
"service = Model.deploy(\n",
" ws, \"myservice\", [model], inference_config, deployment_config, overwrite=True\n",
")\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "72f79356",
"metadata": {
"name": "re-deploy-model-print-logs"
},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{
"cell_type": "markdown",
"id": "7861af8f",
"metadata": {
"name": "send-post-request"
},
"source": [
"Then ensure you can send a post request to the service:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0dac0e2",
"metadata": {
"name": "send-post-request-code"
},
"outputs": [],
"source": [
"import requests\n",
"\n",
"uri = service.scoring_uri\n",
"\n",
"headers = {\"Content-Type\": \"application/json\"}\n",
"data = {\n",
" \"query\": \"What color is the fox\",\n",
" \"context\": \"The quick brown fox jumped over the lazy dog.\",\n",
"}\n",
"data = json.dumps(data)\n",
"response = requests.post(uri, data=data, headers=headers)\n",
"print(response.json())"
]
},
{
"cell_type": "markdown",
"id": "fc9aab6b",
"metadata": {
"name": "deploy-model-on-cloud"
},
"source": [
"## Re-deploy to cloud\n",
"\n",
"Once you've confirmed your service works locally and chosen a remote compute target, you are ready to deploy to the cloud.\n",
"\n",
"Change your deploy configuration to correspond to the compute target you've chosen, in this case Azure Container Instances."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0a4ce66",
"metadata": {
"name": "deploy-model-on-cloud-code"
},
"outputs": [],
"source": [
"from azureml.core.webservice import AciWebservice\n",
"\n",
"deployment_config = AciWebservice.deploy_configuration(\n",
" cpu_cores=0.5, memory_gb=1, auth_enabled=True\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4828de68",
"metadata": {
"name": "re-deploy-service"
},
"source": [
"Deploy your service again"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51029f65",
"metadata": {
"name": "re-deploy-service-code"
},
"outputs": [],
"source": [
"service = Model.deploy(\n",
" ws, \"myservice\", [model], inference_config, deployment_config, overwrite=True\n",
")\n",
"service.wait_for_deployment(show_output=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d141a247",
"metadata": {
"name": "re-deploy-service-print-logs"
},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{
"cell_type": "markdown",
"id": "b5448133",
"metadata": {
"name": "call-remote-web-service"
},
"source": [
"## Call your remote webservice\n",
"\n",
"When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fd48987d",
"metadata": {
"name": "call-remote-web-service-code",
"scrolled": false
},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"from azureml.core import Webservice\n",
"\n",
"service = Webservice(workspace=ws, name=\"myservice\")\n",
"scoring_uri = service.scoring_uri\n",
"\n",
"# If the service is authenticated, set the key or token\n",
"key, _ = service.get_keys()\n",
"\n",
"# Set the appropriate headers\n",
"headers = {\"Content-Type\": \"application/json\"}\n",
"headers[\"Authorization\"] = f\"Bearer {key}\"\n",
"\n",
"# Make the request and display the response and logs\n",
"data = {\n",
" \"query\": \"What color is the fox\",\n",
" \"context\": \"The quick brown fox jumped over the lazy dog.\",\n",
"}\n",
"data = json.dumps(data)\n",
"resp = requests.post(scoring_uri, data=data, headers=headers)\n",
"print(resp.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a6b357",
"metadata": {
"name": "call-remote-webservice-print-logs"
},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{
"cell_type": "markdown",
"id": "4c5ca0cb",
"metadata": {
"name": "delete-resource"
},
"source": [
"## Delete resources"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7a5a03fd",
"metadata": {
"name": "delete-resource-code"
},
"outputs": [],
"source": [
"service.delete()\n",
"model.delete()"
]
},
{
"cell_type": "markdown",
"id": "4f3af1a4",
"metadata": {
"name": "next-steps"
},
"source": [
"## Next Steps"
]
},
{
"cell_type": "markdown",
"id": "8f530f08",
"metadata": {
"name": "next-steps-link"
},
"source": [
"Try reading [our documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=python)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8",
"language": "python",
"name": "python3.8"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

tutorials/deploy-local/2.deploy-local-cli.ipynb

@@ -0,0 +1,375 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fd969e25",
"metadata": {
"name": "deploy-models-to-azure"
},
"source": [
"# Deploy machine learning models to Azure\n",
"\n",
"description: (preview) deploy your machine learning or deep learning model as a web service in the Azure cloud."
]
},
{
"cell_type": "markdown",
"id": "996082fd",
"metadata": {
"name": "connect-to-workspace"
},
"source": [
"## Connect to your workspace"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a9f14ab",
"metadata": {
"name": "connect-to-workspace-code"
},
"outputs": [],
"source": [
"from azureml.core import Workspace\n",
"\n",
"# get workspace configurations\n",
"ws = Workspace.from_config()\n",
"\n",
"# get subscription and resourcegroup from config\n",
"SUBSCRIPTION_ID = ws.subscription_id\n",
"RESOURCE_GROUP = ws.resource_group\n",
"\n",
"RESOURCE_GROUP, SUBSCRIPTION_ID"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b394a5aa",
"metadata": {
"name": "connect-to-workspace-code"
},
"outputs": [],
"source": [
"!az account set -s $SUBSCRIPTION_ID\n",
"!az ml workspace list --resource-group=$RESOURCE_GROUP"
]
},
{
"cell_type": "markdown",
"id": "9057aaf6",
"metadata": {
"name": "register-the-model"
},
"source": [
"## Register your model\n",
"\n",
"A registered model is a logical container stored in the cloud, containing all files located at `model_path`, which is associated with a version number and other metadata.\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "cfe3cd0e",
"metadata": {
"name": "register-model-from-local-file"
},
"source": [
"## Register a model from a local file\n",
"\n",
"You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbf6e9da",
"metadata": {
"name": "register-model-from-local-file-code"
},
"outputs": [],
"source": [
"!wget https://aka.ms/bidaf-9-model -o model.onnx\n",
"!az ml model register -n bidaf_onnx -p ./model.onnx"
]
},
{
"cell_type": "markdown",
"id": "f5d34207",
"metadata": {
"name": "deploy-model"
},
"source": [
"## Deploy your machine learning model\n",
"\n",
"Replace bidaf_onnx:1 with the name of your model and its version number"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9762e02b",
"metadata": {
"name": "deploy-model-code",
"scrolled": true
},
"outputs": [],
"source": [
"!az ml model deploy -n myservice -m bidaf_onnx:1 --overwrite --ic dummyinferenceconfig.json --dc deploymentconfig.json\n",
"!az ml service get-logs -n myservice"
]
},
{
"cell_type": "markdown",
"id": "1b9d47b4",
"metadata": {
"name": "call-into-model"
},
"source": [
"## Call into your model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45e92e91",
"metadata": {
"name": "call-into-model-code"
},
"outputs": [],
"source": [
"!curl -v http://localhost:32267\n",
"!curl -v -X POST -H \"content-type:application/json\" -d '{\"query\": \"What color is the fox\", \"context\": \"The quick brown fox jumped over the lazy dog.\"}' http://localhost:32267/score"
]
},
{
"cell_type": "markdown",
"id": "34c9e82c",
"metadata": {
"name": "notice"
},
"source": [
"Notice the use of the AZUREML_MODEL_DIR environment variable to locate your registered model. Now that you've added some pip packages, you also need to update your inference configuration with [new configurations](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=azcli#tabpanel_7_azcli) to add in those additional packages"
]
},
{
"cell_type": "markdown",
"id": "08c0c387",
"metadata": {
"name": "re-deploy-model"
},
"source": [
"## Deploy again and call your service\n",
"\n",
"Now that we've deployed successfully with a dummy entry script, let's try deploying with a real one. Replace `bidaf_onnx:1` with the name of your model and its version number"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fd084966",
"metadata": {
"name": "re-deploy-model-code"
},
"outputs": [],
"source": [
"!az ml model deploy -n myservice -m bidaf_onnx:1 --overwrite --ic inferenceconfig.json --dc deploymentconfig.json\n",
"!az ml service get-logs -n myservice"
]
},
{
"cell_type": "markdown",
"id": "7861af8f",
"metadata": {
"name": "send-post-request"
},
"source": [
"Then ensure you can send a post request to the service:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0dac0e2",
"metadata": {
"name": "send-post-request-code"
},
"outputs": [],
"source": [
"!curl -v -X POST -H \"content-type:application/json\" -d '{\"query\": \"What color is the fox\", \"context\": \"The quick brown fox jumped over the lazy dog.\"}' http://localhost:32267/score"
]
},
{
"cell_type": "markdown",
"id": "fc9aab6b",
"metadata": {
"name": "deploy-model-on-cloud"
},
"source": [
"## Re-deploy to cloud\n",
"\n",
"Once you've confirmed your service works locally and chosen a remote compute target, you are ready to deploy to the cloud.\n",
"Change your re-deploy configuration to correspond to the compute target you've chosen, in this case Azure Container Instances.\n",
"\n",
"Deploy your service again"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51029f65",
"metadata": {
"name": "deploy-model-on-cloud-code"
},
"outputs": [],
"source": [
"!az ml model deploy -n myaciservice -m bidaf_onnx:1 --overwrite --ic inferenceconfig.json --dc re-deploymentconfig.json\n",
"!az ml service get-logs -n myaciservice"
]
},
{
"cell_type": "markdown",
"id": "b5448133",
"metadata": {
"name": "call-remote-webservice"
},
"source": [
"## Call your remote webservice\n",
"\n",
"When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fd48987d",
"metadata": {
"name": "call-remote-webservice-code"
},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"from azureml.core import Webservice, Workspace\n",
"\n",
"ws = Workspace.from_config()\n",
"\n",
"service = Webservice(workspace=ws, name=\"myaciservice\")\n",
"scoring_uri = service.scoring_uri\n",
"\n",
"# If the service is authenticated, set the key or token\n",
"key, _ = service.get_keys()\n",
"\n",
"# Set the appropriate headers\n",
"headers = {\"Content-Type\": \"application/json\"}\n",
"headers[\"Authorization\"] = f\"Bearer {key}\"\n",
"\n",
"# Make the request and display the response and logs\n",
"data = {\n",
" \"query\": \"What color is the fox\",\n",
" \"context\": \"The quick brown fox jumped over the lazy dog.\",\n",
"}\n",
"data = json.dumps(data)\n",
"resp = requests.post(scoring_uri, data=data, headers=headers)\n",
"print(resp.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a6b357",
"metadata": {
"name": "call-remote-web-service-print-logs"
},
"outputs": [],
"source": [
"print(service.get_logs())"
]
},
{
"cell_type": "markdown",
"id": "63a8ef4c",
"metadata": {
"name": "delete-resource"
},
"source": [
"# Delete resources "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ffb1039",
"metadata": {
"name": "delete-resource-code"
},
"outputs": [],
"source": [
"# Get the current model id\n",
"import os\n",
"\n",
"stream = os.popen(\n",
" 'az ml model list --model-name=bidaf_onnx --latest --query \"[0].id\" -o tsv'\n",
")\n",
"MODEL_ID = stream.read()[0:-1]\n",
"MODEL_ID"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1b69c6c",
"metadata": {
"name": "delete-your-resource-code"
},
"outputs": [],
"source": [
"!az ml service delete -n myservice\n",
"!az ml service delete -n myaciservice\n",
"!az ml model delete --model-id=$MODEL_ID"
]
},
{
"cell_type": "markdown",
"id": "4f3af1a4",
"metadata": {
"name": "next-steps"
},
"source": [
"## Next Steps"
]
},
{
"cell_type": "markdown",
"id": "8f530f08",
"metadata": {
"name": "next-steps-link"
},
"source": [
"Try reading [our documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=python)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8",
"language": "python",
"name": "python3.8"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

tutorials/deploy-local/README.md

@@ -0,0 +1,4 @@
# Learn how to deploy locally
This tutorial populates the Microsoft documentation on [how and where to deploy a machine learning model](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where?tabs=azcli). Go through notebook 1 in order to learn how to do a local deployment and an ACI deployment through the Python SDK. Go through notebook 2 in order to learn how to do a local deployment and an ACI deployment through the Azure CLI.

tutorials/deploy-local/deploymentconfig.json

@@ -0,0 +1,4 @@
{
    "computeType": "local",
    "port": 32267
}

tutorials/deploy-local/dummyinferenceconfig.json

@@ -0,0 +1,37 @@
{
    "entryScript": "echo_score.py",
    "sourceDirectory": "./source_dir",
    "environment": {
        "docker": {
            "arguments": [],
            "baseDockerfile": null,
            "baseImage": "mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04",
            "enabled": false,
            "sharedVolumes": true,
            "shmSize": null
        },
        "environmentVariables": {
            "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
        },
        "name": "my-deploy-env",
        "python": {
            "baseCondaEnvironment": null,
            "condaDependencies": {
                "channels": [],
                "dependencies": [
                    "python=3.6.2",
                    {
                        "pip": [
                            "azureml-defaults"
                        ]
                    }
                ],
                "name": "project_environment"
            },
            "condaDependenciesFile": null,
            "interpreterPath": "python",
            "userManagedDependencies": false
        },
        "version": "1"
    }
}

tutorials/deploy-local/inferenceconfig.json

@@ -0,0 +1,40 @@
{
    "entryScript": "score.py",
    "sourceDirectory": "./source_dir",
    "environment": {
        "docker": {
            "arguments": [],
            "baseDockerfile": null,
            "baseImage": "mcr.microsoft.com/azureml/base:intelmpi2018.3-ubuntu16.04",
            "enabled": false,
            "sharedVolumes": true,
            "shmSize": null
        },
        "environmentVariables": {
            "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
        },
        "name": "my-deploy-env",
        "python": {
            "baseCondaEnvironment": null,
            "condaDependencies": {
                "channels": [],
                "dependencies": [
                    "python=3.6.2",
                    {
                        "pip": [
                            "azureml-defaults",
                            "nltk",
                            "numpy",
                            "onnxruntime"
                        ]
                    }
                ],
                "name": "project_environment"
            },
            "condaDependenciesFile": null,
            "interpreterPath": "python",
            "userManagedDependencies": false
        },
        "version": "2"
    }
}

tutorials/deploy-local/re-deploymentconfig.json

@@ -0,0 +1,11 @@
{
    "computeType": "aci",
    "containerResourceRequirements": {
        "cpu": 0.5,
        "memoryInGB": 1.0
    },
    "authEnabled": true,
    "sslEnabled": false,
    "appInsightsEnabled": false
}

tutorials/deploy-local/source_dir/echo_score.py

@@ -0,0 +1,11 @@
import json


def init():
    print("This is init")


def run(data):
    test = json.loads(data)
    print(f"received data {test}")
    return f"test is {test}"

tutorials/deploy-local/source_dir/score.py

@@ -0,0 +1,45 @@
import json
import numpy as np
import os
import onnxruntime
from nltk import word_tokenize
import nltk


def init():
    nltk.download("punkt")
    global sess
    sess = onnxruntime.InferenceSession(
        os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.onnx")
    )


def run(request):
    print(request)
    text = json.loads(request)
    qw, qc = preprocess(text["query"])
    cw, cc = preprocess(text["context"])

    # Run inference
    test = sess.run(
        None,
        {"query_word": qw, "query_char": qc, "context_word": cw, "context_char": cc},
    )
    start = np.asscalar(test[0])
    end = np.asscalar(test[1])
    ans = [w for w in cw[start : end + 1].reshape(-1)]
    print(ans)
    return ans


def preprocess(word):
    tokens = word_tokenize(word)

    # split into lower-case word tokens, in numpy array with shape of (seq, 1)
    words = np.asarray([w.lower() for w in tokens]).reshape(-1, 1)

    # split words into chars, in numpy array with shape of (seq, 1, 1, 16)
    chars = [[c for c in t][:16] for t in tokens]
    chars = [cs + [""] * (16 - len(cs)) for cs in chars]
    chars = np.asarray(chars).reshape(-1, 1, 1, 16)
    return words, chars