Add sdk and cli example for binary payloads (#1795)

* Init

* Formatting

* Remove testing data

* Edit

* Remove dev vars

* Formatting

* Replace paths and other fixes

* Remove test data

* Edit

* Refresh workflow

* Kernelspec

* Untouch kernelspec in batch

* Edits

* Cleanup

* Remove dev string and formatting

* Paths...

* Formatting

* Paths

Co-authored-by: Shohei Nagata <Shohei.Nagata@microsoft.com>
Alex Wallace 2022-11-18 04:04:56 -05:00 committed by GitHub
Parent 5172b8b44d
Commit 504cfdb5f5
13 changed files with 779 additions and 0 deletions

.github/workflows/cli-scripts-deploy-moe-binary-payloads.yml (new file, 30 lines)

@@ -0,0 +1,30 @@
name: cli-scripts-deploy-moe-binary-payloads
on:
workflow_dispatch:
schedule:
- cron: "0 0/4 * * *"
pull_request:
branches:
- main
- sdk-preview
paths:
- cli/deploy-moe-binary-payloads.sh
- .github/workflows/cli-scripts-deploy-moe-binary-payloads.yml
- cli/setup.sh
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: check out repo
uses: actions/checkout@v2
- name: azure login
uses: azure/login@v1
with:
creds: ${{secrets.AZ_CREDS}}
- name: setup
run: bash setup.sh
working-directory: cli
continue-on-error: true
    - name: test script
run: set -e; bash -x deploy-moe-binary-payloads.sh
working-directory: cli
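
For a quick local run of the same smoke test this workflow performs, a rough equivalent is sketched below. This assumes the Azure CLI with the `ml` extension is installed and you are logged in with `az login` against a default workspace; `setup.sh` is the repository's CLI setup script invoked in the step above.

```bash
# Sketch of the CI steps, run from the repository root (assumptions noted above)
cd cli
bash setup.sh                                   # CLI environment setup used by the workflow
set -e; bash -x deploy-moe-binary-payloads.sh   # the sample script whose diff appears below
```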

.github/workflows/sdk-endpoints-online-managed-online-endpoints-binary-payloads.yml (new file, 89 lines)

@@ -0,0 +1,89 @@
name: sdk-endpoints-online-managed-online-endpoints-binary-payloads
# This file is created by sdk/python/readme.py.
# Please do not edit directly.
on:
workflow_dispatch:
schedule:
- cron: "0 */8 * * *"
pull_request:
branches:
- main
paths:
- sdk/python/endpoints/online/managed/**
- .github/workflows/sdk-endpoints-online-managed-online-endpoints-binary-payloads.yml
- sdk/python/dev-requirements.txt
- infra/**
- sdk/python/setup.sh
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: check out repo
uses: actions/checkout@v2
- name: setup python
uses: actions/setup-python@v2
with:
python-version: "3.8"
- name: pip install notebook reqs
run: pip install -r sdk/python/dev-requirements.txt
- name: azure login
uses: azure/login@v1
with:
creds: ${{secrets.AZUREML_CREDENTIALS}}
- name: bootstrap resources
run: |
echo '${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}';
bash bootstrap.sh
working-directory: infra
continue-on-error: false
- name: setup SDK
run: |
source "${{ github.workspace }}/infra/sdk_helpers.sh";
source "${{ github.workspace }}/infra/init_environment.sh";
bash setup.sh
working-directory: sdk/python
continue-on-error: true
- name: setup-cli
run: |
source "${{ github.workspace }}/infra/sdk_helpers.sh";
source "${{ github.workspace }}/infra/init_environment.sh";
bash setup.sh
working-directory: cli
continue-on-error: true
- name: run endpoints/online/managed/online-endpoints-binary-payloads.ipynb
run: |
source "${{ github.workspace }}/infra/sdk_helpers.sh";
source "${{ github.workspace }}/infra/init_environment.sh";
bash "${{ github.workspace }}/infra/sdk_helpers.sh" generate_workspace_config "../../.azureml/config.json";
bash "${{ github.workspace }}/infra/sdk_helpers.sh" replace_template_values "online-endpoints-binary-payloads.ipynb";
[ -f "../../.azureml/config" ] && cat "../../.azureml/config";
papermill -k python online-endpoints-binary-payloads.ipynb online-endpoints-binary-payloads.output.ipynb
working-directory: sdk/python/endpoints/online/managed
- name: upload notebook's working folder as an artifact
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: online-endpoints-binary-payloads
path: sdk/python/endpoints/online/managed
- name: Send IcM on failure
if: ${{ failure() && github.ref_type == 'branch' && (github.ref_name == 'main' || contains(github.ref_name, 'release')) }}
uses: ./.github/actions/generate-icm
with:
host: ${{ secrets.AZUREML_ICM_CONNECTOR_HOST_NAME }}
connector_id: ${{ secrets.AZUREML_ICM_CONNECTOR_CONNECTOR_ID }}
certificate: ${{ secrets.AZUREML_ICM_CONNECTOR_CERTIFICATE }}
private_key: ${{ secrets.AZUREML_ICM_CONNECTOR_PRIVATE_KEY }}
args: |
incident:
Title: "[azureml-examples] Notebook validation failed on branch '${{ github.ref_name }}' for notebook 'endpoints/online/managed/online-endpoints-binary-payloads.ipynb'"
Summary: |
Notebook 'endpoints/online/managed/online-endpoints-binary-payloads.ipynb' is failing on branch '${{ github.ref_name }}': ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
Severity: 4
RoutingId: "github://azureml-examples"
Status: Active
Source:
IncidentId: "endpoints/online/managed/online-endpoints-binary-payloads.ipynb[${{ github.ref_name }}]"

cli/README.md

@@ -58,6 +58,7 @@ path|status|
[deploy-mlcompute-update-to-system-identity.sh](deploy-mlcompute-update-to-system-identity.sh)|[![deploy-mlcompute-update-to-system-identity](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-mlcompute-update-to-system-identity/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-mlcompute-update-to-system-identity.yml)
[deploy-mlcompute-update-to-user-identity.sh](deploy-mlcompute-update-to-user-identity.sh)|[![deploy-mlcompute-update-to-user-identity](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-mlcompute-update-to-user-identity/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-mlcompute-update-to-user-identity.yml)
[deploy-moe-autoscale.sh](deploy-moe-autoscale.sh)|[![deploy-moe-autoscale](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-moe-autoscale/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-moe-autoscale.yml)
[deploy-moe-binary-payloads.sh](deploy-moe-binary-payloads.sh)|[![deploy-moe-binary-payloads](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-moe-binary-payloads/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-moe-binary-payloads.yml)
[deploy-moe-inference-schema.sh](deploy-moe-inference-schema.sh)|[![deploy-moe-inference-schema](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-moe-inference-schema/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-moe-inference-schema.yml)
[deploy-moe-keyvault.sh](deploy-moe-keyvault.sh)|[![deploy-moe-keyvault](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-moe-keyvault/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-moe-keyvault.yml)
[deploy-moe-minimal-single-model-registered.sh](deploy-moe-minimal-single-model-registered.sh)|[![deploy-moe-minimal-single-model-registered](https://github.com/Azure/azureml-examples/workflows/cli-scripts-deploy-moe-minimal-single-model-registered/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/cli-scripts-deploy-moe-minimal-single-model-registered.yml)

cli/deploy-moe-binary-payloads.sh (new file, 82 lines)

@@ -0,0 +1,82 @@
#!/bin/bash
set -e
# <set_variables>
ENDPOINT_NAME=endpt-moe-`echo $RANDOM`
ACR_NAME=$(az ml workspace show --query container_registry -o tsv | cut -d'/' -f9-)
# </set_variables>
BASE_PATH="endpoints/online/managed/binary-payloads"
# <download_sample_data>
wget https://aka.ms/peacock-pic -O endpoints/online/managed/binary-payloads/input.jpg
# </download_sample_data>
# <create_endpoint>
az ml online-endpoint create -n $ENDPOINT_NAME
# </create_endpoint>
# Check if endpoint was successful
endpoint_status=`az ml online-endpoint show --name $ENDPOINT_NAME --query "provisioning_state" -o tsv `
echo $endpoint_status
if [[ $endpoint_status == "Succeeded" ]]
then
echo "Endpoint created successfully"
else
echo "Endpoint creation failed"
exit 1
fi
# <create_deployment>
az ml online-deployment create -e $ENDPOINT_NAME -f $BASE_PATH/binary-payloads-deployment.yml \
--set code_configuration.scoring_script=single-file-to-file-score.py \
--all-traffic
# </create_deployment>
# <get_endpoint_details>
# Get key
echo "Getting access key..."
KEY=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME --query primaryKey -o tsv )
# Get scoring url
echo "Getting scoring url..."
SCORING_URL=$(az ml online-endpoint show -n $ENDPOINT_NAME --query scoring_uri -o tsv )
echo "Scoring url is $SCORING_URL"
# </get_endpoint_details>
# <get_logs>
az ml online-deployment get-logs -n binary-payload -e $ENDPOINT_NAME
# </get_logs>
# <check_deployment>
# Check if deployment was successful
deploy_status=`az ml online-deployment show --name binary-payload --endpoint $ENDPOINT_NAME --query "provisioning_state" -o tsv `
echo $deploy_status
if [[ $deploy_status == "Succeeded" ]]
then
echo "Deployment completed successfully"
else
echo "Deployment failed"
exit 1
fi
# </check_deployment>
# <test_online_endpoint_1>
curl -X POST -F "file=@endpoints/online/managed/binary-payloads/input.jpg" -H "Authorization: Bearer $KEY" $SCORING_URL \
-o endpoints/online/managed/binary-payloads/output.jpg
# </test_online_endpoint_1>
# <update_deployment2>
az ml online-deployment update -e $ENDPOINT_NAME -n binary-payload \
--set code_configuration.scoring_script="multi-file-to-json-score.py"
# </update_deployment2>
# <test_online_endpoint_2>
curl -X POST -F "file[]=@endpoints/online/managed/binary-payloads/input.jpg" \
-F "file[]=@endpoints/online/managed/binary-payloads/output.jpg" \
-H "Authorization: Bearer $KEY" $SCORING_URL
# </test_online_endpoint_2>
# <delete_assets>
az ml online-endpoint delete -n $ENDPOINT_NAME --no-wait --yes
# </delete_assets>

cli/endpoints/online/managed/binary-payloads/binary-payloads-deployment.yml (new file, 13 lines)

@@ -0,0 +1,13 @@
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: binary-payload
endpoint_name: <ENDPOINT_NAME>
model:
path: .
code_configuration:
code: code
scoring_script: <SCORING_SCRIPT>
environment:
image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
conda_file: env.yml
instance_type: Standard_DS2_v2
instance_count: 1

cli/endpoints/online/managed/binary-payloads/code/multi-file-to-json-score.py (new file, 13 lines)

@@ -0,0 +1,13 @@
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from PIL import Image


def init():
    pass


# @rawhttp passes the raw AMLRequest (a wrapped Flask request) to run()
# instead of the parsed JSON body, so multipart file uploads can be read.
@rawhttp
def run(req: AMLRequest):
    # All files posted under the multipart field name "file[]"
    files = req.files.getlist("file[]")
    # Report each image's filename and (width, height); the result is JSON-serializable
    sizes = [{"filename": f.filename, "size": Image.open(f.stream).size} for f in files]
    return {"response": sizes}

cli/endpoints/online/managed/binary-payloads/code/single-file-to-file-score.py (new file, 28 lines)

@@ -0,0 +1,28 @@
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse
from PIL import Image
import io

default_resize = (128, 128)


def init():
    pass


# @rawhttp passes the raw AMLRequest (a wrapped Flask request) to run(),
# so the uploaded binary file can be read straight from the multipart form.
@rawhttp
def run(req: AMLRequest):
    try:
        # First (and only expected) file posted under the field name "file"
        data = req.files.getlist("file")[0]
    except IndexError:
        return AMLResponse("No file uploaded", status_code=422)

    # Resize the image and re-encode it as JPEG bytes
    img = Image.open(data.stream)
    img = img.resize(default_resize)
    output = io.BytesIO()
    img.save(output, format="JPEG")

    # AMLResponse gives full control over the response, including a binary body
    resp = AMLResponse(message=output.getvalue(), status_code=200)
    resp.mimetype = "image/jpg"
    return resp

cli/endpoints/online/managed/binary-payloads/env.yml (new file, 7 lines)

@@ -0,0 +1,7 @@
name: userenv
dependencies:
- python=3.10
- pip
- pip:
- pillow~=9.3
- azureml-defaults>=1.47,<2

sdk/python/README.md

@@ -55,6 +55,7 @@ Test Status is for branch - **_main_**
|endpoints|online|[debug-online-endpoints-locally-in-visual-studio-code](endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb)|*no description* - _This sample is excluded from automated tests_|[![debug-online-endpoints-locally-in-visual-studio-code](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-debug-online-endpoints-locally-in-visual-studio-code.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-debug-online-endpoints-locally-in-visual-studio-code.yml)|
|endpoints|online|[online-endpoints-managed-identity-sai](endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb)|*no description* - _This sample is excluded from automated tests_|[![online-endpoints-managed-identity-sai](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-managed-identities-online-endpoints-managed-identity-sai.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-managed-identities-online-endpoints-managed-identity-sai.yml)|
|endpoints|online|[online-endpoints-managed-identity-uai](endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb)|*no description* - _This sample is excluded from automated tests_|[![online-endpoints-managed-identity-uai](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-managed-identities-online-endpoints-managed-identity-uai.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-managed-identities-online-endpoints-managed-identity-uai.yml)|
|endpoints|online|[online-endpoints-binary-payloads](endpoints/online/managed/online-endpoints-binary-payloads.ipynb)|*no description*|[![online-endpoints-binary-payloads](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-binary-payloads.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-binary-payloads.yml)|
|endpoints|online|[online-endpoints-inference-schema](endpoints/online/managed/online-endpoints-inference-schema.ipynb)|*no description*|[![online-endpoints-inference-schema](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-inference-schema.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-inference-schema.yml)|
|endpoints|online|[online-endpoints-keyvault](endpoints/online/managed/online-endpoints-keyvault.ipynb)|*no description*|[![online-endpoints-keyvault](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-keyvault.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-keyvault.yml)|
|endpoints|online|[online-endpoints-multimodel](endpoints/online/managed/online-endpoints-multimodel.ipynb)|*no description*|[![online-endpoints-multimodel](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-multimodel.yml/badge.svg?branch=main)](https://github.com/Azure/azureml-examples/actions/workflows/sdk-endpoints-online-managed-online-endpoints-multimodel.yml)|

sdk/python/endpoints/online/managed/binary-payloads/code/multi-file-to-json-score.py (new file, 16 lines)

@@ -0,0 +1,16 @@
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from PIL import Image
def init():
pass
@rawhttp
def run(req: AMLRequest):
sizes = [
{"filename": f.filename, "size": Image.open(f.stream).size}
for f in req.files.getlist("file[]")
]
return {"response": sizes}

sdk/python/endpoints/online/managed/binary-payloads/code/single-file-to-file-score.py (new file, 30 lines)

@@ -0,0 +1,30 @@
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse
from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage
from PIL import Image
import io
default_resize = (128, 128)
def init():
pass
@rawhttp
def run(req: AMLRequest):
try:
data = req.files.getlist("file")[0]
except IndexError:
return AMLResponse("No file uploaded", status_code=422)
img = Image.open(data.stream)
img = img.resize(default_resize)
output = io.BytesIO()
img.save(output, format="JPEG")
resp = AMLResponse(message=output.getvalue(), status_code=200)
resp.mimetype = "image/jpg"
return resp

sdk/python/endpoints/online/managed/binary-payloads/env.yml (new file, 8 lines)

@@ -0,0 +1,8 @@
name: userenv
dependencies:
- python=3.10
- pip
- pip:
- pillow
- azureml-inference-server-http
- azureml-contrib-services

sdk/python/endpoints/online/managed/online-endpoints-binary-payloads.ipynb (new file, 461 lines)

@@ -0,0 +1,461 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Handle binary payloads from a Managed Online Endpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, receiving and sending binary payloads in scoring scripts is demonstrated using the `rawhttp` decorator as well as the `AMLRequest` and `AMLResponse` objects. Without `rawhttp`, the run function is called passed the serialized JSON from the payload. Using `rawhttp`, the run function is instead passed an `AMLRequest` object, which wraps the native Flask request object used internally by the Azure Inference Server. After handling binary payloads, one can either return a JSON-serializable object as usual or use the `AMLResponse` object to have full control over the response, including returning binary payloads. "
]
},
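  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal, illustrative sketch (not deployed by this sample; the full scoring scripts used by the deployments appear in sections 3 and 5.1), a `rawhttp` scoring script looks like this:\n",
    "\n",
    "```python\n",
    "from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
    "\n",
    "def init():\n",
    "    pass\n",
    "\n",
    "# Without @rawhttp, run() would receive the JSON payload as described above.\n",
    "# With @rawhttp, run() receives the AMLRequest wrapping the underlying Flask\n",
    "# request, so binary uploads can be read from req.files.\n",
    "@rawhttp\n",
    "def run(req: AMLRequest):\n",
    "    return {\"received\": [f.filename for f in req.files.getlist(\"file\")]}\n",
    "```"
   ]
  },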
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Configure parameters, assets, and clients"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.1 Import required libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "1-import-required-libraries"
},
"outputs": [],
"source": [
"from azure.ai.ml import MLClient\n",
"from azure.ai.ml.entities import (\n",
" ManagedOnlineEndpoint,\n",
" ManagedOnlineDeployment,\n",
" Model,\n",
" CodeConfiguration,\n",
" Environment,\n",
")\n",
"from azure.identity import DefaultAzureCredential\n",
"import random, os, requests"
]
},
{
"cell_type": "markdown",
"metadata": {
"name": "1-import-libraries"
},
"source": [
"### 1.2 Set workspace details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "1-set-workspace-details"
},
"outputs": [],
"source": [
"subscription_id = \"<SUBSCRIPTION_ID>\"\n",
"resource_group = \"<RESOURCE_GROUP>\"\n",
"workspace_name = \"<AML_WORKSPACE_NAME>\""
]
},
{
"cell_type": "markdown",
"metadata": {
"name": "1-set-workspace"
},
"source": [
"### 1.3 Set variables"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "1-set-variables"
},
"outputs": [],
"source": [
"rand = random.randint(0, 10000)\n",
"\n",
"endpoint_name = f\"endpt-moe-{rand}\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.4 Download sample data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "1-download-sample-data"
},
"outputs": [],
"source": [
"url = \"https://aka.ms/peacock-pic\"\n",
"agent = f\"Python Requests/{requests.__version__} (https://github.com/Azure/azureml-examples)\"\n",
"r = requests.get(url, headers={\"User-Agent\": agent}, allow_redirects=True)\n",
"open(\"binary-payloads/input.jpg\", \"wb\").write(r.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 1.5 Create an MLClient Instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "1-create-mlclient-instance"
},
"outputs": [],
"source": [
"credential = DefaultAzureCredential()\n",
"ml_client = MLClient(\n",
" credential,\n",
" subscription_id=subscription_id,\n",
" resource_group_name=resource_group,\n",
" workspace_name=workspace_name,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Create endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "2-create-endpoint"
},
"outputs": [],
"source": [
"endpoint = ManagedOnlineEndpoint(name=endpoint_name)\n",
"endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Create a Binary-to-Binary Deployment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This script receives an image as a binary file and returns a resized image as a binary file. Both scoring scripts use the `rawhttp` decorator to change the argument passed to the run function from JSON to the entire `AMLRequest` object. This script also uses the `AMLResponse` object "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
"from azureml.contrib.services.aml_response import AMLResponse\n",
"from email.mime.multipart import MIMEMultipart\n",
"from email.mime.image import MIMEImage\n",
"from PIL import Image\n",
"import io \n",
"\n",
"default_resize = (128, 128)\n",
"\n",
"def init(): \n",
" pass \n",
"\n",
"@rawhttp\n",
"def run(req : AMLRequest):\n",
" try:\n",
" data = req.files.getlist(\"file\")[0]\n",
" except IndexError:\n",
" return AMLResponse(\"No file uploaded\", status_code=422)\n",
" \n",
" img = Image.open(data.stream)\n",
" img = img.resize(default_resize)\n",
"\n",
" output = io.BytesIO()\n",
" img.save(output, format=\"JPEG\")\n",
" resp = AMLResponse(message = output.getvalue(), status_code=200)\n",
" resp.mimetype = \"image/jpg\"\n",
"\n",
" return resp\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.1 Create the deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "3-create-deployment"
},
"outputs": [],
"source": [
"deployment = ManagedOnlineDeployment(\n",
" name=\"binarypayloads\",\n",
" endpoint_name=endpoint_name,\n",
" model=Model(path=\"binary-payloads\"),\n",
" code_configuration=CodeConfiguration(\n",
" code=\"binary-payloads/code\", scoring_script=\"single-file-to-file-score.py\"\n",
" ),\n",
" environment=Environment(\n",
" conda_file=\"binary-payloads/env.yml\",\n",
" image=\"mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest\",\n",
" ),\n",
" instance_type=\"Standard_DS2_v2\",\n",
" instance_count=1,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.2 Create the deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "3-create-deployment"
},
"outputs": [],
"source": [
"deployment = ml_client.online_deployments.begin_create_or_update(deployment).result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.3 Update endpoint traffic"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "3-update-endpoint-traffic"
},
"outputs": [],
"source": [
"endpoint.traffic = {\"binarypayloads\": 100}\n",
"endpoint = ml_client.online_endpoints.begin_create_or_update(endpoint).result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.4 Get endpoint details"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "3-get-endpoint-details"
},
"outputs": [],
"source": [
"scoring_uri = endpoint.scoring_uri\n",
"key = ml_client.online_endpoints.get_keys(endpoint_name).primary_key"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.5 Test the endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "3-test-endpoint"
},
"outputs": [],
"source": [
"res = requests.post(\n",
" url=scoring_uri,\n",
" headers={\"Authorization\": f\"Bearer {key}\"},\n",
" files=[(\"file\", open(\"binary-payloads/input.jpg\", \"rb\"))],\n",
")\n",
"open(\"binary-payloads/output.jpg\", \"wb\").write(res.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Create a Binary-to-JSON Deployment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.1 Examine the scoring script\n",
"This script accepts multiple image files uploaded as `file[]` and returns the sizes of the images as JSON. Both scoring scripts use the `rawhttp` decorator to change the argument passed to the run function from JSON to the entire `AMLRequest` object. However, unlike the first script this one returns a dictionary rather than an `AMLResponse` object."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"from azureml.contrib.services.aml_request import AMLRequest, rawhttp\n",
"from PIL import Image\n",
"\n",
"def init(): \n",
" pass \n",
"\n",
"@rawhttp\n",
"def run(req : AMLRequest):\n",
" sizes = [{\"filename\" : f.filename,\n",
" \"size\" : Image.open(f.stream).size}\n",
" for f in req.files.getlist(\"file[]\")]\n",
"\n",
" return {\"response\" : sizes}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.2 Update the deployment"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "5-update-deployment"
},
"outputs": [],
"source": [
"deployment.code_configuration = CodeConfiguration(\n",
" code=deployment.code_configuration.code,\n",
" scoring_script=\"code/multi-file-to-json-score.py\",\n",
")\n",
"deployment = ml_client.online_deployments.begin_create_or_update(deployment).result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Test the endpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6.1 Send a request"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "6-send-request"
},
"outputs": [],
"source": [
"res = requests.post(\n",
" url=scoring_uri,\n",
" headers={\"Authorization\": f\"Bearer {key}\"},\n",
" files=[\n",
" (\"file[]\", open(\"binary-payloads/input.jpg\", \"rb\")),\n",
" (\"file[]\", open(\"binary-payloads/output.jpg\", \"rb\")),\n",
" ],\n",
")"
]
},
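  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If the call succeeds, the body is the JSON produced by the multi-file scoring script. As an illustrative sketch (shape only; actual values depend on the images, apart from the 128x128 produced by the first deployment's resize):\n",
    "\n",
    "```python\n",
    "res.json()\n",
    "# {'response': [{'filename': 'input.jpg', 'size': [<w>, <h>]},\n",
    "#               {'filename': 'output.jpg', 'size': [128, 128]}]}\n",
    "```"
   ]
  },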
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Delete assets"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 7.1 Delete the endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"name": "7-delete-endpoint"
},
"outputs": [],
"source": [
"ml_client.online_endpoints.begin_delete(name=endpoint_name)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.10 - SDK V2",
"language": "python",
"name": "python310-sdkv2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
},
"vscode": {
"interpreter": {
"hash": "c54d4b4f21f908d21f1064b6d031502c08620e465e849bef5aa76d1f6a474870"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}