{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright (c) Recommenders contributors.\n",
    "\n",
    "Licensed under the MIT License."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Deploying a Real-Time Content-Based Personalization Model\n",
    "\n",
    "This notebook provides an example of how a business can use machine learning to automate content-based personalization for its customers by using a recommendation system. Azure Databricks is used to train a model that predicts the probability that a user will engage with an item. In turn, this estimate can be used to rank items based on the content that a user is most likely to consume.<br><br>\n",
    "This notebook creates a scalable real-time scoring service for Spark-based models such as the content-based personalization model trained in the [MMLSpark-LightGBM-Criteo notebook](../02_model_content_based_filtering/mmlspark_lightgbm_criteo.ipynb).\n",
    "<br><br>\n",
    "### Architecture\n",
    "<img src=\"https://recodatasets.z20.web.core.windows.net/images/lightgbm_criteo_arch.svg?sanitize=true\" alt=\"Architecture\">\n",
    "\n",
    "### Components\n",
    "The following components are used in this architecture:<br>\n",
    "- [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) is a storage service optimized for storing massive amounts of unstructured data. In this case, the input data is stored here.<br>\n",
    "- [Azure Databricks](https://azure.microsoft.com/en-us/services/databricks/) is a managed Apache Spark cluster where model training and evaluation are performed.<br>\n",
    "- [Azure Machine Learning service](https://azure.microsoft.com/en-us/services/machine-learning-service/) is used in this scenario to register the machine learning model.<br>\n",
    "- [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) is used to package the scoring script as a container image, which is used to serve the model in production.<br>\n",
    "- [Azure Kubernetes Service](https://azure.microsoft.com/en-us/services/kubernetes-service/) is used to deploy the trained models as web or app services.<br>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Assumptions\n",
    "In order to execute this notebook, the following items are assumed:\n",
    "\n",
    "1. A model has previously been trained as shown in the [mmlspark_lightgbm_criteo](../02_model_content_based_filtering/mmlspark_lightgbm_criteo.ipynb) notebook.\n",
    "2. This notebook is running in the same Azure Databricks workspace used to run the notebook in Assumption 1.\n",
    "3. The Databricks cluster used has been prepared for operationalization (MML Spark and recommenders are both installed).\n",
    "   - See the [Setup](../../SETUP.md) instructions for details\n",
    "4. An Azure Machine Learning service workspace has been set up in the same region as the Azure Databricks workspace used for model training.\n",
    "   - See [Create a Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/setup-create-workspace) for more details\n",
    "5. The Azure ML workspace config.json has been uploaded to Databricks at `dbfs:/aml_config/config.json`.\n",
    "   - See [Configure Environment](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment) and [Databricks CLI](https://docs.databricks.com/user-guide/dbfs-databricks-file-system.html#access-dbfs-with-the-databricks-cli)\n",
    "6. An Azure Container Instance (ACI) has been registered for use in your Azure subscription.\n",
    "   - See [Supported Services](https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services#portal) for more details"
   ]
  },
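  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Assumption 1 can be checked quickly from this notebook. The cell below is a minimal sanity check; it assumes the pipeline model was saved to `dbfs:/lightgbm_criteo.mml`, the default location used by the training notebook, so adjust the path if you saved the model elsewhere."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check for Assumption 1: list the saved pipeline model on DBFS.\n",
    "# The path is an assumption based on the default used in the training notebook.\n",
    "display(dbutils.fs.ls('dbfs:/lightgbm_criteo.mml'))"
   ]
  },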
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Score Service Steps\n",
    "In this example, a \"scoring service\" is a function that is executed by a Docker container. It takes in a POST request with a JSON-formatted payload and produces a score based on a previously estimated model. In our case, we will use the model we estimated earlier, which predicts the probability of a user-item interaction based on a set of numeric and categorical features. Because that model was trained using PySpark, we will create a Spark session on a single instance (within the Docker container) which will use [MML Spark Serving](https://github.com/Azure/mmlspark/blob/master/docs/mmlspark-serving.md) to execute the model on the received input data and return the probability of interaction. We will use Azure Machine Learning to create and run the Docker container.\n",
    "\n",
    "In order to create a scoring service, we will do the following steps:\n",
    "\n",
    "1. Set up and authorize the Azure Machine Learning Workspace\n",
    "2. Serialize the previously trained model and add it to the Azure Model Registry\n",
    "3. Define the 'scoring service' script to execute the model\n",
    "4. Define all the pre-requisites that the script requires\n",
    "5. Use the model, the driver script, and the pre-requisites to create an Azure Container Image\n",
    "6. Deploy the container image on Azure Kubernetes Service, a scalable platform\n",
    "7. Test the service"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup libraries and variables\n",
    "\n",
    "The next few cells initialize the environment: we import the relevant libraries and set the variables used throughout the notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Azure ML SDK version: 1.0.18\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import json\n",
    "import shutil\n",
    "\n",
    "from recommenders.datasets.criteo import get_spark_schema, load_spark_df\n",
    "from recommenders.utils.k8s_utils import qps_to_replicas, replicas_to_qps, nodes_to_replicas\n",
    "\n",
    "from azureml.core import Workspace\n",
    "from azureml.core import VERSION as azureml_version\n",
    "\n",
    "from azureml.core.model import Model\n",
    "from azureml.core.conda_dependencies import CondaDependencies\n",
    "from azureml.core.webservice import Webservice, AksWebservice\n",
    "from azureml.core.image import ContainerImage\n",
    "from azureml.core.compute import AksCompute, ComputeTarget\n",
    "\n",
    "from math import floor\n",
    "\n",
    "# Check core SDK version number\n",
    "print(\"Azure ML SDK version: {}\".format(azureml_version))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configure Scoring Service Variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "MODEL_NAME = 'lightgbm_criteo.mml'  # this name must exactly match the name used to save the pipeline model in the estimation notebook\n",
    "MODEL_DESCRIPTION = 'LightGBM Criteo Model'\n",
    "\n",
    "# Setup AzureML assets (names must be lower case alphanumeric without spaces and between 3 and 32 characters)\n",
    "# Azure ML Webservice\n",
    "SERVICE_NAME = 'lightgbm-criteo'\n",
    "# Azure ML Container Image\n",
    "CONTAINER_NAME = SERVICE_NAME\n",
    "CONTAINER_RUN_TIME = 'spark-py'\n",
    "# Azure Kubernetes Service (AKS)\n",
    "AKS_NAME = 'predict-aks'\n",
    "\n",
    "# Names of other files that are used below\n",
    "CONDA_FILE = \"deploy_conda.yaml\"\n",
    "DRIVER_FILE = \"mmlspark_serving.py\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup AzureML Workspace\n",
    "The workspace configuration can be retrieved from the portal and uploaded to Databricks.<br>\n",
    "See [AzureML on Databricks](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-environment#azure-databricks)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ws = Workspace.from_config('/dbfs/aml_config/config.json')"
   ]
  },
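  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check that the configuration was read correctly, we can print a few attributes of the workspace handle (these are standard `azureml.core.Workspace` properties)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Confirm the workspace handle points at the expected workspace\n",
    "print('Workspace: {}'.format(ws.name))\n",
    "print('Resource group: {}'.format(ws.resource_group))\n",
    "print('Location: {}'.format(ws.location))"
   ]
  },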
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare the Serialized Model\n",
    "\n",
    "In order to create the Docker container, we first prepare the model we estimated in a prior step so that the container will be able to access it. We do this by *registering* the model to the workspace (see the Azure ML [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-model-management-and-deployment) for additional details).\n",
    "\n",
    "The model has been stored as a directory on DBFS; before we register it, we take a few additional steps to facilitate the process.\n",
    "\n",
    "### Input Schema\n",
    "\n",
    "Spark Serving requires the schema of the raw input data. Therefore, we get the schema and store it as an additional file in the model directory.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "raw_schema = get_spark_schema()\n",
    "with open(os.path.join('/dbfs', MODEL_NAME, 'schema.json'), 'w') as f:\n",
    "    f.write(raw_schema.json())"
   ]
  },
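  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to see what the scoring service will expect, the field names of this schema are the keys the JSON payload must carry. The short check below just prints them; it assumes only that `raw_schema` is the `StructType` created above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the column names Spark Serving will parse from incoming requests\n",
    "print([field.name for field in raw_schema.fields])"
   ]
  },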
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Copy the model from DBFS to local\n",
    "\n",
    "While you can access files on DBFS with local file APIs, it is safer to explicitly copy saved models to and from DBFS, because the local file APIs can only access files smaller than 2 GB (see details [here](https://docs.databricks.com/user-guide/dbfs-databricks-file-system.html#access-dbfs-using-local-file-apis))."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"ansiout\">Out[6]: True\n",
       "</div>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "model_local = os.path.join(os.getcwd(), MODEL_NAME)\n",
    "dbutils.fs.cp('dbfs:/' + MODEL_NAME, 'file:' + model_local, recurse=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Register the Model\n",
    "\n",
    "Now we are ready to register the model in the Azure Machine Learning workspace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"ansiout\">Registering model lightgbm_criteo.mml\n",
       "lightgbm_criteo.mml LightGBM Criteo Model 4\n",
       "</div>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# First the model directory is compressed to minimize data transfer\n",
    "zip_file = shutil.make_archive(base_name=MODEL_NAME, format='zip', root_dir=model_local)\n",
    "\n",
    "# Register the model\n",
    "model = Model.register(model_path=zip_file,  # this points to a local file\n",
    "                       model_name=MODEL_NAME,  # this is the name the model is registered as\n",
    "                       description=MODEL_DESCRIPTION,\n",
    "                       workspace=ws)\n",
    "\n",
    "print(model.name, model.description, model.version)"
   ]
  },
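  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once registered, the model can be retrieved by name from any session connected to the workspace, which is how the scoring container will locate it. As an illustration (a sketch using standard `azureml.core.model.Model` usage), the lookup looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fetch the latest registered version of the model by name\n",
    "registered = Model(workspace=ws, name=MODEL_NAME)\n",
    "print(registered.name, registered.version)"
   ]
  },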
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define the Scoring Script\n",
    "\n",
    "Next, we need to create the driver script that will be executed when the service is called. The functions that need to be defined for scoring are `init()` and `run()`. The `init()` function is run when the service is created, and the `run()` function is run each time the service is called.\n",
    "\n",
    "In our example, we use the `init()` function to load all the libraries, initialize the Spark session, start the Spark streaming server, and load the model pipeline. We use the `run()` function to route the input to the Spark streaming server to generate predictions (in this case the probability of an interaction) and then return the output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "driver_file = '''\n",
    "import os\n",
    "import json\n",
    "from time import sleep\n",
    "from uuid import uuid4\n",
    "from zipfile import ZipFile\n",
    "\n",
    "from azureml.core.model import Model\n",
    "from pyspark.ml import PipelineModel\n",
    "from pyspark.sql import SparkSession\n",
    "from pyspark.sql.types import StructType\n",
    "import requests\n",
    "\n",
    "\n",
    "def init():\n",
    "    \"\"\"One time initialization of pyspark and model server\"\"\"\n",
    "\n",
    "    spark = SparkSession.builder.appName(\"Model Server\").getOrCreate()\n",
    "    import mmlspark  # this is needed to load mmlspark libraries\n",
    "\n",
    "    # extract and load model\n",
    "    model_path = Model.get_model_path('{model_name}')\n",
    "    with ZipFile(model_path, 'r') as f:\n",
    "        f.extractall('model')\n",
    "    model = PipelineModel.load('model')\n",
    "\n",
    "    # load data schema saved with model\n",
    "    with open(os.path.join('model', 'schema.json'), 'r') as f:\n",
    "        schema = StructType.fromJson(json.load(f))\n",
    "\n",
    "    input_df = (\n",
    "        spark.readStream.continuousServer()\n",
    "        .address(\"localhost\", 8089, \"predict\")\n",
    "        .load()\n",
    "        .parseRequest(schema)\n",
    "    )\n",
    "\n",
    "    output_df = (\n",
    "        model.transform(input_df)\n",
    "        .makeReply(\"probability\")\n",
    "    )\n",
    "\n",
    "    checkpoint = os.path.join('/tmp', 'checkpoints', uuid4().hex)\n",
    "    server = (\n",
    "        output_df.writeStream.continuousServer()\n",
    "        .trigger(continuous=\"30 seconds\")\n",
    "        .replyTo(\"predict\")\n",
    "        .queryName(\"prediction\")\n",
    "        .option(\"checkpointLocation\", checkpoint)\n",
    "        .start()\n",
    "    )\n",
    "\n",
    "    # let the server finish starting\n",
    "    sleep(1)\n",
    "\n",
    "\n",
    "def run(input_json):\n",
    "    try:\n",
    "        response = requests.post(data=input_json, url='http://localhost:8089/predict')\n",
    "        result = response.json()['probability']['values'][1]\n",
    "    except Exception as e:\n",
    "        result = str(e)\n",
    "\n",
    "    return json.dumps({{\"result\": result}})\n",
    "'''.format(model_name=MODEL_NAME)\n",
    "\n",
    "# check syntax by executing the generated script\n",
    "exec(driver_file)\n",
    "\n",
    "with open(DRIVER_FILE, \"w\") as f:\n",
    "    f.write(driver_file)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define Dependencies\n",
    "\n",
    "Next, we define the dependencies that are required by the driver script."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# azureml-sdk is required to load the registered model\n",
    "conda_file = CondaDependencies.create(pip_packages=['azureml-sdk', 'requests']).serialize_to_string()\n",
    "\n",
    "with open(CONDA_FILE, \"w\") as f:\n",
    "    f.write(conda_file)"
   ]
  },
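  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see exactly which conda and pip packages the image will install, we can print the serialized environment file (`conda_file` is just the YAML string written above)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Show the environment specification that will be baked into the image\n",
    "print(conda_file)"
   ]
  },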
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create the Image\n",
    "\n",
    "We use the `ContainerImage` class to first configure the image with the defined driver and dependencies, and then to create the image for use later.<br>\n",
    "Building the image also allows it to be downloaded and debugged locally using Docker; see the [troubleshooting instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-troubleshoot-deployment)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"ansiout\">Creating image\n",
       "Running......................\n",
       "SucceededImage creation operation finished for image lightgbm-criteo:3, operation \"Succeeded\"\n",
       "</div>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "image_config = ContainerImage.image_configuration(execution_script=DRIVER_FILE,\n",
    "                                                  runtime=CONTAINER_RUN_TIME,\n",
    "                                                  conda_file=CONDA_FILE,\n",
    "                                                  tags={\"runtime\": CONTAINER_RUN_TIME, \"model\": MODEL_NAME})\n",
    "\n",
    "image = ContainerImage.create(name=CONTAINER_NAME,\n",
    "                              models=[model],\n",
    "                              image_config=image_config,\n",
    "                              workspace=ws)\n",
    "\n",
    "image.wait_for_creation(show_output=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create the Service\n",
    "\n",
    "Once we have created an image, we configure an Azure Kubernetes Service (AKS) cluster and deploy the image as an AKS webservice.\n",
    "\n",
    "**NOTE** We *can* create a service directly from the registered model and image configuration with the `Webservice.deploy_from_model()` function (see the sketch below).\n",
    "We create the image here explicitly and use `deploy_from_image()` for three reasons:\n",
    "\n",
    "1. It provides more transparency in terms of the actual steps that are taking place.\n",
    "2. It provides more flexibility and control. For example, you can create images with names that are independent of the service that you are creating. This can be useful in cases where your images are used across multiple services.\n",
    "3. It has potential for faster iteration and for more portability. Once we have an image, we can create a new deployment with the exact same code."
   ]
  },
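  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the one-step alternative mentioned above might look like the sketch below. It is intentionally commented out: it is not executed in this notebook, and it assumes the `webservice_config` and `aks_target` objects that are only defined in later cells."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only (not run): one-step deployment straight from the registered model.\n",
    "# Assumes `model`, `image_config`, `webservice_config`, and `aks_target`\n",
    "# as defined elsewhere in this notebook.\n",
    "# svc = Webservice.deploy_from_model(workspace=ws,\n",
    "#                                    name=SERVICE_NAME,\n",
    "#                                    models=[model],\n",
    "#                                    image_config=image_config,\n",
    "#                                    deployment_config=webservice_config,\n",
    "#                                    deployment_target=aks_target)\n",
    "# svc.wait_for_deployment(show_output=True)"
   ]
  },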
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup and Planning\n",
    "\n",
    "When we are setting up a production service, we should start by estimating the load we would like to support. To do that, we need to estimate how long a single call is likely to take. In this example, we have done some local tests and estimated that a single query takes approximately 100 ms to process.\n",
    "\n",
    "Based on a few additional assumptions, we can estimate how many replicas are required to support a targeted number of queries per second (QPS).\n",
    "\n",
    "**Note:** This estimate should be used as a ballpark figure to get started; we can verify performance with subsequent load testing to home in on better estimates. See this [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aks) for more details.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have written some helper functions to support this type of calculation, and we will use them to estimate the number of replicas required to support loads of 25, 50, 100, 200, and 350 queries per second, using 100 ms as our estimate of the time to complete a single query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "all_target_qps = [25, 50, 100, 200, 350]\n",
    "query_processing_time = 0.1  # in seconds\n",
    "replica_estimates = {t: qps_to_replicas(t, query_processing_time) for t in all_target_qps}"
   ]
  },
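  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough cross-check, the sketch below reproduces the sizing arithmetic under the assumption that `qps_to_replicas` scales the target QPS by the per-query processing time and divides by a target utilization of about 70%. The utilization figure is an assumption about the helper's internals, not a documented default, so treat the sketch as illustrative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from math import ceil\n",
    "\n",
    "# Hedged sketch: assumed sizing rule, replicas = ceil(qps * time / utilization).\n",
    "# The 0.7 utilization target is an assumption, not taken from k8s_utils itself.\n",
    "assumed_utilization = 0.7\n",
    "sketch = {qps: ceil(qps * query_processing_time / assumed_utilization) for qps in all_target_qps}\n",
    "print(sketch)  # at 100 QPS this gives 15 replicas, matching the estimate used below"
   ]
  },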
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Based on the size of our customer base and other considerations (e.g., upcoming announcements that may boost traffic), we make a decision on the maximum load we want to support. In this example, we will say we want to support 100 queries per second, which indicates that we should use the corresponding number of replicas (15, based on the estimates above).\n",
    "\n",
    "Once we have the number of replicas, we then need to make sure we have enough resources (cores and memory) within our Azure Kubernetes Service cluster to support that number of replicas. In order to estimate that, we need to know how many cores are going to be assigned to each replica. This number can be fractional, because there are many use cases where there are multiple replicas per core; you can see additional details [here](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units). When we create the webservice below, we will allocate 0.3 `cpu_cores` and 0.5 GB of memory to each replica. To support 15 replicas, we need `15*0.3 = 4.5` cores and `15*0.5 = 7.5` GB of memory.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4.5 cores required\n",
      "7.5 GB of memory required\n"
     ]
    }
   ],
   "source": [
    "cpu_cores_per_replica = 0.3\n",
    "print('{} cores required'.format(replica_estimates[100]*cpu_cores_per_replica))\n",
    "print('{} GB of memory required'.format(replica_estimates[100]*0.5))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Provision Azure Kubernetes Service\n",
    "\n",
    "Now that we have an estimate of the number of cores and the amount of memory we need, we will configure and create the AKS cluster. By default, `AksCompute.provisioning_configuration()` creates a configuration with 3 agents of `vm_size='Standard_D3_v2'`. Each Standard_D3_v2 virtual machine has 4 cores and 14 GB of memory, so the defaults yield a cluster with a combined 12 cores and 42 GB of memory, both sufficient to meet our estimated load requirements.\n",
    "\n",
    "**Note**: In this particular case, even though our load requirements are just 4.5 cores, we should **not** go below 12 cores in the AKS cluster; 12 cores is the minimum required by AKS for web services. See the documentation for [details](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#aks). We can use the `agent_count` and `vm_size` parameters to increase the number of cores above 12 if our load requirements demand it, but we should not use them to go below that minimum."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create AKS compute first\n",
    "\n",
    "# Use the default configuration (can also provide parameters to customize)\n",
    "prov_config = AksCompute.provisioning_configuration()\n",
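    "\n",
    "# Example customization; these values are illustrative assumptions, not requirements:\n",
    "# prov_config = AksCompute.provisioning_configuration(agent_count=6, vm_size='Standard_D3_v2')\n",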
    "\n",
    "# Create the cluster\n",
    "aks_target = ComputeTarget.create(\n",
    "    workspace=ws,\n",
    "    name=AKS_NAME,\n",
    "    provisioning_configuration=prov_config\n",
    ")\n",
    "\n",
    "aks_target.wait_for_completion(show_output=True)\n",
    "\n",
    "print(aks_target.provisioning_state)\n",
    "print(aks_target.provisioning_errors)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Consideration\n",
    "\n",
    "Because our estimated load requirements are less than the minimums set by Azure Machine Learning, we should consider an alternate approach to estimating the number of replicas to use for the web service. If this is the only service that will run on the AKS cluster, then we are potentially wasting resources by not leveraging all of the compute available. Initially, we used the expected load to estimate the number of replicas that should be used. Instead of that approach, we can also use the number of cores in our cluster to estimate the maximum number of replicas that could be supported.\n",
    "\n",
    "In order to estimate the maximum number of replicas, we do need to consider that there is some overhead on each node for the base Kubernetes operations as well as the node's operating system and core functionality. We assume 10\\% overhead in this case, but you can find more details [here](https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads).\n",
    "\n",
    "**Note:** We are using cores in this example, but we could also leverage memory requirements instead.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "max_replicas_12_cores = nodes_to_replicas(\n",
    "    n_cores_per_node=4, n_nodes=3, cpu_cores_per_replica=cpu_cores_per_replica\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once we have the number of replicas our cluster will support, we can then estimate the queries per second we believe the AKS cluster could support."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "140"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "replicas_to_qps(max_replicas_12_cores, query_processing_time)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create the Webservice\n",
    "\n",
    "Next, we will configure and create the webservice. In this configuration, we set `cpu_cores=cpu_cores_per_replica` for each replica (the default is `cpu_cores=0.1`); we are adjusting this value based on experience and prior testing with this service.\n",
    "\n",
    "If no arguments are passed to `AksWebservice.deploy_configuration()`, it uses `autoscale_enabled=True` with `autoscale_min_replicas=1` and `autoscale_max_replicas=10`. The max value does not meet our minimum requirements to support 100 queries per second, so we need to adjust it. We can set it either to our estimate based on load (15) or to our estimate of the number that can be supported by the AKS cluster (36). In this example, we will use the value based on load, to allow the AKS cluster to be used for other tasks or services."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "webservice_config = AksWebservice.deploy_configuration(cpu_cores=cpu_cores_per_replica,\n",
    "                                                       autoscale_enabled=True,\n",
    "                                                       autoscale_max_replicas=replica_estimates[100])\n",
    "\n",
    "# Deploy service using created image\n",
    "aks_service = Webservice.deploy_from_image(\n",
    "    workspace=ws,\n",
    "    name=SERVICE_NAME,\n",
    "    deployment_config=webservice_config,\n",
    "    image=image,\n",
    "    deployment_target=aks_target\n",
    ")\n",
    "\n",
    "aks_service.wait_for_deployment(show_output=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test the Service\n",
    "\n",
    "Next, we can use a row from the `sample` dataset to test the service.\n",
    "\n",
    "The service expects JSON as its payload, so we take a row of the sample data, convert it to a dictionary, and then submit it to the service endpoint. The driver returns a JSON string of the form `{\"result\": <probability of interaction>}`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "# View the URI\n",
    "url = aks_service.scoring_uri\n",
    "print('AKS URI: {}'.format(url))\n",
    "\n",
    "# Setup authentication using one of the keys from aks_service\n",
    "headers = dict(Authorization='Bearer {}'.format(aks_service.get_keys()[0]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Grab some sample data\n",
    "df = load_spark_df(size='sample', spark=spark, dbutils=dbutils)\n",
    "data = df.head().asDict()\n",
    "print(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "# Send a request to the AKS cluster\n",
    "response = requests.post(url=url, json=data, headers=headers)\n",
    "print(response.json())"
   ]
  },
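  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also do a very rough latency check against the ~100 ms per-query figure assumed in the planning section above. The sketch below just times a handful of sequential requests; it is not a substitute for proper load testing, and the request count is an arbitrary illustrative choice."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from time import time\n",
    "\n",
    "# Rough sequential latency check (illustrative only, not a load test)\n",
    "n_requests = 20  # arbitrary small sample size\n",
    "start = time()\n",
    "for _ in range(n_requests):\n",
    "    requests.post(url=url, json=data, headers=headers)\n",
    "elapsed = time() - start\n",
    "print('mean latency: {:.3f} s over {} requests'.format(elapsed / n_requests, n_requests))"
   ]
  },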
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Delete the Service\n",
    "\n",
    "When you are done, you can delete the service to minimize costs. You can always redeploy from the image using the same command above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment the following line to delete the web service\n",
    "# aks_service.delete()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"ansiout\">Out[34]: 'Deleting'\n",
       "</div>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "aks_service.state"
   ]
  }
 ],
 "metadata": {
  "authors": [
   {
    "name": "pasha"
   }
  ],
  "kernelspec": {
   "display_name": "Python (reco_base)",
   "language": "python",
   "name": "reco_base"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.10"
  },
  "name": "deploy-to-aci-04",
  "notebookId": 904892461294324
 },
 "nbformat": 4,
 "nbformat_minor": 1
}