Richin Jain 2017-11-27 13:38:36 -05:00 committed by GitHub
Parent ac6efe293e
Commit 994a2d60ec
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
1 changed file with 5 additions and 16 deletions


@@ -1,9 +1,11 @@
# DevOps for Artificial Intelligence (AI) applications: Creating a continuous integration pipeline using Docker and Kubernetes
This tutorial demonstrates how to implement a Continuous Integration (CI)/Continuous Delivery (CD) pipeline for an AI application. An AI application is application code embedded with a pretrained machine learning (ML) model. For this tutorial we fetch the pretrained model from a private Azure Blob Storage account; it could just as easily be an AWS S3 account.
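As a concrete illustration, a model like this can be pulled down with the Azure CLI at build time. This is a minimal sketch, not the tutorial's exact task configuration; the account, container, blob, and key names below are hypothetical placeholders:

```bash
# Download the pretrained model from a private Azure Blob Storage container.
# All names here are placeholders; in a real pipeline the key would come from a secret variable.
az storage blob download \
  --account-name mystorageaccount \
  --container-name models \
  --name model.pkl \
  --file model.pkl \
  --account-key "$STORAGE_ACCOUNT_KEY"
```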
We will use a simple Python Flask application, which is available on GitHub <add link here>.
For an in-depth understanding of how DevOps integrates with the different stages of an AI Data Science project, check out this comprehensive [article](https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/team-data-science-process-for-devops) from the TDSP team. In addition, check out this great [series](https://blogs.msdn.microsoft.com/buckwoody/category/devops-for-data-science/) of blog posts on DevOps for Data Science from Buck Woody.
## Introduction
At the end of this tutorial, we will have a pipeline for our AI application that picks up the latest commit from the GitHub repository and the latest pretrained machine learning model from the Azure Storage container, builds a Docker image, stores it in a private image repository on Azure Container Registry (ACR), and deploys it on a Kubernetes cluster running on Azure Container Service (AKS).
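Stripped to its essentials, the flow the pipeline automates looks roughly like the following sketch. The registry name matches the acrforblog.azurecr.io/model-api image used later in this tutorial; the $BUILD_ID variable and the deployment/container names are assumptions for illustration, not the tutorial's exact configuration:

```bash
# Build the application image, tagged with the current build number ($BUILD_ID is a placeholder)
docker build -t acrforblog.azurecr.io/model-api:$BUILD_ID .

# Log in to the private Azure Container Registry and push the image
az acr login --name acrforblog
docker push acrforblog.azurecr.io/model-api:$BUILD_ID

# Roll the new image out to the AKS cluster (deployment and container names are assumptions)
kubectl set image deployment/model-api model-api=acrforblog.azurecr.io/model-api:$BUILD_ID
```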
@@ -75,7 +77,7 @@ Within the Docker task, give it a name, select subscription and ACR from dropdow
After building the container image for our application, we want to test the container. To do so, we will first start the container using the command line task. Note that we are using version 2.* of this task, which gives us the option to pass an inline script. We are doing something tricky here: the VSTS agent that builds our code is itself a Docker container, so we are effectively starting a container within a container. This is not hard, but we need to make sure that both containers are running on the same network so we can access the ports correctly. To do so, we first get the container ID of the VSTS build agent by running the following command.
BUILD_CONTAINER_ID=$(docker ps --filter "ancestor=chris/vsts-agent" --filter "status=running" --format "{{.ID}}")
Please note that the image name of the VSTS agent (chris/vsts-agent) might change in the future, which would break this command, but you can run other Docker commands to find the image name of the VSTS agent.
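For example, listing the running containers together with their image names is one way to rediscover the agent image without hard-coding it:

```bash
# Show each running container's ID and image; the VSTS agent can be picked out of this output
docker ps --filter "status=running" --format "table {{.ID}}\t{{.Image}}"
```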
Next we start the model-api image that we created in the previous step, passing the network parameter so it runs on the same network as the VSTS build agent container.
docker run -d --network container:$BUILD_CONTAINER_ID acrforblog.azurecr.io/model-api:$(Build.BuildId)
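Once both containers share a network, the application can be smoke-tested from within the build agent. A minimal sketch, assuming the Flask app listens on its default port 5000 and serves a testable endpoint at the root path (both are assumptions, not details from this tutorial):

```bash
# Give the container a moment to start, then probe the API; fail the build if it does not respond
sleep 5
curl --fail -s http://localhost:5000/ || exit 1
```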
@@ -140,16 +142,3 @@ Hit save, you now have an end-to-end pipeline for Continuous Integration and Cont
To test your CI/CD pipeline, make some changes in your repository and push them to GitHub. If all works well, you will see a new Build being triggered which, in turn, will trigger a new Release.
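If you have nothing to change, an empty commit is enough to exercise the trigger (a minimal sketch; the branch name is an assumption):

```bash
# Push a no-op commit to trigger the CI build
git commit --allow-empty -m "Trigger CI build"
git push origin master
```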
## Conclusion
This tutorial demonstrated how to create an end-to-end build and release pipeline for an AI application. For every single commit, the build pipeline will run a series of unit and integration tests, build the application as a Docker image, store it in a private container registry, and then deploy it to an Azure Container Service (AKS) Kubernetes cluster.