This commit is contained in:
Richin Jain 2017-11-28 15:31:49 -05:00 committed by GitHub
Parent 2acc62ca28
Commit 56444c2c5f
1 changed file with 5 additions and 1 deletion


@@ -6,7 +6,11 @@ We will use a simple python flask web application for the tutorial, you can down
For an in-depth understanding of how DevOps integrates with the different stages of an AI Data Science project, check out this [training](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/team-data-science-process-for-devops) from our team. In addition, check out this great [series](https://blogs.msdn.microsoft.com/buckwoody/category/devops-for-data-science/) of blog posts on DevOps in Data Science from Microsoft.
We also recommend taking a look at our newly launched [Azure Machine Learning services](https://docs.microsoft.com/en-gb/azure/machine-learning/preview/overview-what-is-azure-ml) (Azure ML). Azure ML is an integrated, end-to-end data science and advanced analytics solution for professional data scientists to prepare data, develop experiments, and deploy models at cloud scale.
* If you are already using Azure ML, you can easily consume your models by exporting the model file to a storage container.
* You can also seamlessly integrate with the [Azure ML Model Management service](https://docs.microsoft.com/en-gb/azure/machine-learning/preview/model-management-overview) via its REST APIs to fetch a specific version of a model for your application.
* To download a model stored in model management service, you can use the [Get Model Details](https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-api-reference#get-model-details) endpoint that returns the URL of the blob container that stores the model.
Lastly, if you don't want to pre-package the model with your application, you can deploy your model at scale from within the Azure ML Workbench and consume it as a REST endpoint in your application.
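As a rough illustration of the Model Management flow above, the Get Model Details response could be parsed along these lines to locate the model's storage URL. This is a sketch: the field name `url` and the sample payload are assumptions for illustration, not the documented schema, so consult the API reference for the actual response shape.

```python
import json


def extract_model_url(response_body: str) -> str:
    """Pull the blob-container URL for a model out of a
    Get Model Details-style JSON response.

    NOTE: the "url" field name is a hypothetical placeholder;
    check the Model Management API reference for the real schema.
    """
    details = json.loads(response_body)
    return details["url"]


# Hypothetical response payload, for illustration only.
sample = (
    '{"id": "my-model", "version": 3, '
    '"url": "https://mystorage.blob.core.windows.net/models"}'
)
print(extract_model_url(sample))
```

Once you have the URL, a pipeline step can download the model file from that container and hand it to the packaging stage.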
In this tutorial we demonstrate how to build a continuous integration pipeline for an AI application. The application securely pulls the latest model from an Azure Storage account and packages it as part of the application, so the deployed application ships the app code and the ML model as a single container. This decouples the app developers from the data scientists while ensuring that the production app always runs the latest code with the latest ML model. A variation of this tutorial would be to consume the ML model as an endpoint instead of packaging it with the app. The goal of the tutorial is to show how easy it is to do DevOps for an AI application.
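One way to package the app code and the model as a single container, as described above, is a Dockerfile along these lines. This is only a sketch: the file names (`app.py`, `requirements.txt`, `model.pkl`), the port, and the base image are assumptions for illustration, not taken from the tutorial's repository.

```dockerfile
# Illustrative sketch only: file names, port, and base image are assumptions.
FROM python:3.6-slim

WORKDIR /app

# Install the Flask app's dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the app code and the ML model (fetched from Azure Storage
# by an earlier pipeline step), so both ship in one image.
COPY app.py .
COPY model.pkl .

EXPOSE 5000
CMD ["python", "app.py"]
```

Because the model is copied in at build time, each CI build produces an image whose app code and model version are pinned together.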