This commit is contained in:
YanZhangADS 2018-06-06 19:38:31 +00:00
Parent effc31a2c2
Commit 3c827e95e9
2 changed files with 21 additions and 12 deletions

View file

@ -12,7 +12,7 @@
"\n",
"## Outline<a id=\"BackToTop\"></a>\n",
"- [Prerequisite](#prerequisite)\n",
"- [Step 1: Deploy the ML model through Azure CLI](#step1)\n",
"- [Step 1: Build the trained ML Model into Docker Image](#step1)\n",
"- [Step 2: Provision and Configure IoT Edge Device](#step2)\n",
"- [Step 3: Deploy ML Module on IoT Edge Device](#step3)\n",
"- [Step 4: Test ML Module](#step4)"
@ -24,7 +24,7 @@
"source": [
"## Prerequisite <a id=\"Prerequisite\"></a>\n",
"\n",
"Before starting this notebook, you should finish Keras_TF_CNN_DeployModel.ipynb in the same repository (except the last section \"Clean up resources\"). As a recap, we have created the following resources in the step \"Deploy model as a Web Service\" in this previous exercise:\n",
"Before starting this notebook, you should finish [Keras_TF_CNN_DeployModel.ipynb](Keras_TF_CNN_DeployModel.ipynb) in the same repository (except the last section \"Clean up resources\"). As a recap, we have created the following resources in the step \"Deploy model as a Web Service\" in this previous exercise:\n",
" \n",
" - Resource group defined in variable YOUR_RESOURCE_GROUP\n",
" * Machine Learning Model Management\n",
@ -41,12 +41,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Deploy the ML model through Azure CLI <a id=\"step1\"></a>\n",
"## Step 1: Build the trained ML Model into Docker Image <a id=\"step1\"></a>\n",
"\n",
"If you have finished [Prerequisite](#prerequisite), you can skip this step. Otherwise, you can follow Section *Create the Azure ML container* in [Deploy Azure Machine Learning as an IoT Edge module - preview](https://docs.microsoft.com/en-us/azure/iot-edge/tutorial-deploy-machine-learning) to deploy your own ML model. The expected outputs of this step include:\n",
"\n",
" 1. A Docker image hosted on ACR (Azure Container Registry). This image will be used to create a Docker container running on the edge device. \n",
" 2. A web service. This web service is used for testing purposes."
" 2. A web service. This web service can be used for testing purposes."
]
},
{

View file

@ -2,13 +2,16 @@
# Deploying Deep Learning Models on Azure Container Service and on Azure IoT Edge
It is often a non-trivial task to deploy machine learning (ML) models. In the work shown in [this repo](https://github.com/ilkarman/DeepLearningFrameworks), we aim to create a Rosetta stone of deep-learning frameworks by using common code for several different network structures. We mainly show the network training and evaluation process across many different frameworks. With this work, we further introduce a model deployment approach for a trained machine learning model. As an example, we show how to deploy the trained Keras (tensorflow) model, which is one of the deep learning models from the above-mentioned deep learning framework comparison task.
It is often a non-trivial task to deploy machine learning (ML) models. In this post, we show two types of model deployments: model deployment on Azure Container Service (ACS); and model deployment on Azure IoT Edge.
In our [previous work](https://github.com/ilkarman/DeepLearningFrameworks), we aim to create a Rosetta stone of deep learning (DL) frameworks by using common code for several different network structures. Taking one of these network structures as an example, we show how to deploy a trained Keras (tensorflow) CNN model. The objective of this multi-class classification problem is to perform object recognition in images on the [CIFAR-10](https://www.kaggle.com/c/cifar-10) data set. The same model deployment approaches are also applicable to other ML models, not just DL.
We introduce two types of model deployments: model deployment on Azure Container Service (ACS); and model deployment on Azure IoT Edge. In this tutorial, the former is a prerequisite of the latter.
## Model Deployment on ACS
We deploy the model on Azure Container Service (ACS) as a web service via Azure CLI [Machine Learning Model Management]( https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-overview). This approach is also applicable to other ML models, not just DL.
We deploy the model on Azure Container Service (ACS) as a web service via Azure CLI [Machine Learning Model Management]( https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-overview).
Compared with the previous [ACS deployment tutorial](https://github.com/Azure/ACS-Deployment-Tutorial), this approach simplifies the model deployment process by using a set of model management commands. With this model management tool, it becomes straightforward to take a containerized (Docker) approach to overcome the dependency problems of ML model deployment. It also makes it convenient to initialize your Azure machine learning environment with a storage account, an ACR registry, an App Insights service, and other Azure resources by executing just a few CLI commands. You can also easily scale the ACS Kubernetes cluster, as introduced in the blog post [Scaling Azure Container Service Clusters](https://blogs.technet.microsoft.com/machinelearning/2018/03/20/scaling-azure-container-service-cluster/).
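The Model Management CLI builds the web service around a driver script that exposes `init()` and `run()` entry points. The following is a minimal sketch of that convention with a stand-in model rather than the trained Keras CNN; the exact signatures and payload schema are defined by the linked Model Management documentation, so treat the field names here as assumptions.

```python
# score.py -- minimal sketch of the driver file passed to the Model
# Management CLI (e.g. `az ml service create realtime -f score.py`).
# The init()/run() entry points follow the Azure ML preview convention;
# the "model" below is a stand-in, not the trained Keras CNN.
import json

model = None

def init():
    # In the real service this would load the serialized Keras model.
    # Stand-in: "predict" the index of the largest class score.
    global model
    model = lambda scores: [max(range(len(scores)), key=scores.__getitem__)]

def run(raw_data):
    # raw_data arrives as a JSON string; the "input" field name is an
    # illustrative assumption, not the schema from the notebooks.
    data = json.loads(raw_data)["input"]
    prediction = model(data)
    return json.dumps({"prediction": prediction})

if __name__ == "__main__":
    init()
    print(run('{"input": [0.1, 0.7, 0.2]}'))  # → {"prediction": [1]}
```

The same `init()`/`run()` pair is reused unchanged when the image is later packaged for IoT Edge, which is why the ACS deployment is a prerequisite for the edge deployment below.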
@ -21,14 +24,22 @@ Specifically, the major steps taken for deploying a Keras model is shown in foll
## Model Deployment on Azure IoT Edge
With the completion of the previous task, we introduce the steps of deploying an ML module through [Azure IoT Edge](https://docs.microsoft.com/en-us/azure/iot-edge/how-iot-edge-works). The purpose is to deploy a trained image classification model to the edge device. When the image data is generated from a particular process pipeline and fed into the edge device, the deployed model is able to make predictions right on the edge device without accessing the cloud.
With the completion of the previous task, we introduce how to deploy an ML module through [Azure IoT Edge](https://docs.microsoft.com/en-us/azure/iot-edge/how-iot-edge-works).
Azure IoT Edge is an Internet of Things (IoT) service that builds on top of Azure IoT Hub. It is a hybrid solution combining the benefits of two scenarios: *IoT in the Cloud* and *IoT on the Edge*. This service is meant for customers who want to analyze data on devices, a.k.a. "at the edge", instead of in the cloud. By moving parts of your workload to the edge, your devices can spend less time sending messages to the cloud and react more quickly to changes in status. On the other hand, Azure IoT Hub provides a centralized way to manage Azure IoT Edge devices, and makes it easy to train ML models in the cloud and deploy the trained models on the edge devices.
In this example, we deploy a trained Keras (tensorflow) CNN model to the edge device. When the image data is generated from a particular process pipeline and fed into the prediction engine on the edge device, the deployed model is able to make predictions right on the edge device without accessing the cloud. The following diagram shows the major components of an Azure IoT Edge device.
<p align="center">
<img src="imgs/azureiotedgeruntime.png" alt="logo" width="90%"/>
</p>
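A client in the edge process pipeline sends each image frame to the deployed module over the same JSON-over-HTTP scoring interface as the ACS web service. A minimal sketch of packaging raw image bytes into such a payload is shown below; the `"input"` field name is an illustrative assumption, as the real schema comes from the scoring script generated in the previous notebook.

```python
import base64
import json

def make_payload(image_bytes):
    # Package raw image bytes as a JSON request body. Base64 keeps the
    # binary data safe inside a JSON string; the "input" field name is
    # a hypothetical placeholder for illustration.
    return json.dumps({"input": base64.b64encode(image_bytes).decode("ascii")})

# Stand-in bytes; in the pipeline this would be a CIFAR-10-sized image frame.
payload = make_payload(b"\x89PNG-stand-in")
round_tripped = base64.b64decode(json.loads(payload)["input"])
assert round_tripped == b"\x89PNG-stand-in"
```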
We perform the following steps for the deployment.
- Step 1: Build the trained ML model into a Docker image. This image will be used to create a Docker container running on the edge device.
- Step 2: Provision and Configure IoT Edge Device
- Step 3: Deploy ML Module on IoT Edge Device
- Step 4: Test ML Module
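Step 3 applies a deployment manifest through IoT Hub, which tells the IoT Edge runtime which module images to pull and run. The following sketch builds the relevant `modules` entry as a Python dict; the module name, ACR image path, and port binding are hypothetical placeholders, not values taken from the notebooks.

```python
import json

# Sketch of one entry in the "modules" section of an IoT Edge deployment
# manifest. "mlmodule" and "<your-acr>.azurecr.io/mlmodel:1" are
# placeholders -- substitute your own module name and ACR image.
ml_module = {
    "mlmodule": {
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
            "image": "<your-acr>.azurecr.io/mlmodel:1",
            # createOptions is a JSON *string* in the manifest schema;
            # here it forwards the scoring port to the host.
            "createOptions": json.dumps({
                "HostConfig": {
                    "PortBindings": {"5001/tcp": [{"HostPort": "5001"}]}
                }
            }),
        },
    }
}

print(json.dumps(ml_module, indent=2))
```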
## Prerequisite <a id="Prerequisite"></a>
@ -37,15 +48,13 @@ With the completion of the previous task, we introduce the steps of deploying an
In this example, we use a Deep Learning Virtual Machine - Linux OS, Standard [NC6 (6 vcpus, 56 GB memory) machine](https://azure.microsoft.com/en-us/blog/azure-n-series-preview-availability/) as the compute resource, where we train and deploy the model. Other types of Azure Linux VM should work as well; the difference is that model training can take longer (~1 hour). To provision a DLVM, please see [these instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/provision-deep-learning-dsvm).
The Deep Learning Virtual Machine (DLVM) is a specially configured variant of the Data Science Virtual Machine (DSVM) that makes it easier to use GPU-based VM instances for training deep learning models. It is supported on Windows Server 2016 and the Ubuntu Data Science Virtual Machine, and shares the same core VM images (and hence all the rich toolset) as the DSVM. It also provides end-to-end AI samples for image and text understanding, and makes the rich set of tools and samples on the DSVM more easily discoverable. In terms of tooling, the DLVM provides several popular deep learning frameworks, along with tools to acquire and pre-process image and text data.
We use the following tools on this VM.
- Python 3
- Jupyter Notebook
- Azure CLI
## Getting Started
Source code and full documentation are available in the notebooks below. You are suggested to use these two notebooks in sequential order.
Source code and full documentation are available in the following notebooks. It is suggested that you run them in sequential order.
- [Keras_TF_CNN_DeployModel.ipynb](Keras_TF_CNN_DeployModel.ipynb)
- [Keras_TF_CNN_DeployModel_IoTEdge.ipynb](Keras_TF_CNN_DeployModel_IoTEdge.ipynb)