This commit is contained in:
MSalvaris 2018-03-31 21:40:19 +01:00
Parent 9fedc493ca
Commit fb0673e097
8 changed files: 18 additions and 2634 deletions

View File

@@ -1,21 +1,23 @@
### Authors: Mathew Salvaris and Ilia Karmanov
### Authors: Mathew Salvaris and Fidan Boylu Uz
# Deploy ML on ACS
Deploying machine learning models can often be tricky due to their numerous dependencies, and deep learning models often even more so. One way to overcome this is to use Docker containers. Unfortunately, it is rarely straightforward. In this tutorial, we will demonstrate how to deploy a pre-trained deep learning model using Azure Container Services, which allows us to orchestrate a number of containers using DC/OS. By using Azure Container Services, we can ensure that the deployment is performant, scalable and flexible enough to accommodate any deep learning framework.
The Docker image we will be deploying can be found [here](https://hub.docker.com/r/masalvar/cntkresnet/). It contains a simple Flask web application behind an Nginx web server. The deep learning framework we will use is the Microsoft Cognitive Toolkit (CNTK), and we will be using a pre-trained model; specifically, the ResNet 152 model.
# Deploy Deep Learning CNN on Kubernetes Cluster with GPUs
In this repository there are a number of tutorials in Jupyter notebooks that have step-by-step instructions on how to deploy a pretrained deep learning model on a GPU enabled Kubernetes cluster. The tutorials cover how to deploy models from the following deep learning frameworks:
* TensorFlow
* Keras (TensorFlow backend)
* PyTorch
For each framework we go through 7 steps:
* Model development where we load the pretrained model and test it by using it to score images
* Developing the interface our Flask app will use to load and call the model
* Building the Docker Image with our Flask REST API and model
* Testing our Docker image before deployment
* Creating our Kubernetes cluster and deploying our application to it
* Testing the deployed model
* Testing the throughput of our model
The application we will develop is a simple image classification service, where we will submit an image and get back what class the image belongs to.
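For example, a client-side test of the deployed model could look roughly like the sketch below. The endpoint URL is a placeholder, and the payload layout (a base64-encoded image wrapped in an `input` field) mirrors the Flask driver used in this repository; the notebooks' exact format may differ.

```python
# Illustrative only: score one image against the deployed service.
import base64
import json

import requests

SCORING_URL = "http://<your-cluster-endpoint>/score"  # placeholder, not a real address

with open("image.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

payload = {"input": json.dumps({"image.jpg": encoded})}
response = requests.post(SCORING_URL, json=payload)
print(response.json())  # expected: the top predicted ImageNet classes
```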
Azure Container Services enables you to configure, construct and manage a cluster of virtual machines pre-configured to run containerized applications. Once the cluster is set up, you can use a number of open-source scheduling and orchestration tools, such as Kubernetes and DC/OS. This is ideal for machine learning applications, since Docker containers give us complete flexibility in the libraries we use and allow us to scale up easily on demand, while ensuring that our application remains performant. You can create an ACS cluster through the Azure portal, but in this tutorial we will construct it using the Azure CLI.
The application will be a simple image classification service, where we will submit an image and get back what class the image belongs to. We have split the process into five sections.
* [Create Docker image of our application](00_BuildImage.ipynb)
* [Test the application locally](01_TestLocally.ipynb)
* [Create an ACS cluster and deploy our web app](02_DeployOnACS.ipynb)
* [Test our web app](03_TestWebApp.ipynb)
* [Load Test our web app](04_SpeedTestWebApp.ipynb)
Each section is accompanied by a Jupyter notebook which contains step-by-step instructions on how to create, deploy and test a web application.
If you already have a Docker image that you would like to deploy you can skip the first two notebooks.
If you already have a Docker image that you would like to deploy or you simply want to use the image we built you can skip the first four notebooks.
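The last notebook measures the throughput of the deployed web app. As a rough, hypothetical illustration of that idea (not the notebook's actual code; the endpoint URL, request count and concurrency level are placeholders), one could fire concurrent scoring requests and time them:

```python
import base64
import json
import time
from concurrent.futures import ThreadPoolExecutor

import requests

SCORING_URL = "http://<your-cluster-endpoint>/score"  # placeholder, not a real address

# Build one request body and reuse it for every call.
with open("image.jpg", "rb") as f:
    payload = {"input": json.dumps({"image.jpg": base64.b64encode(f.read()).decode("utf-8")})}

def score_once(_):
    return requests.post(SCORING_URL, json=payload).status_code

num_requests, workers = 100, 10
start = time.time()
with ThreadPoolExecutor(max_workers=workers) as pool:
    codes = list(pool.map(score_once, range(num_requests)))
elapsed = time.time() - start
print("{} requests in {:.1f} s -> {:.1f} images/s, {} failures".format(
    num_requests, elapsed, num_requests / elapsed, sum(c != 200 for c in codes)))
```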
# Contributing
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

File diff suppressed because one or more lines are too long

View File

@@ -1,39 +0,0 @@
FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04
RUN echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list
USER root
RUN mkdir /code
WORKDIR /code
RUN chmod -R a+w /code
ADD . /code/
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
ca-certificates \
cmake \
curl \
wget \
git && \
rm -rf /var/lib/apt/lists/*
ENV PYTHON_VERSION=3.5
RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda create -y --name py$PYTHON_VERSION python=$PYTHON_VERSION numpy pyyaml scipy \
ipython pandas scikit-learn && \
/opt/conda/bin/conda clean -ya
ENV PATH /opt/conda/envs/py$PYTHON_VERSION/bin:$PATH
ENV LD_LIBRARY_PATH /opt/conda/envs/py$PYTHON_VERSION/lib:/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
RUN pip install --upgrade pip && \
pip install tensorflow-gpu==1.4.1 && \
pip install keras==2.1.5 && \
pip install -r /code/requirements.txt && \
/opt/conda/bin/conda clean -yt
EXPOSE 5000

View File

@@ -1,61 +0,0 @@
from resnet152 import ResNet152
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
import numpy as np
import timeit as t
import base64
import json
from PIL import Image, ImageOps
from io import BytesIO
def init():
""" Initialize ResNet 152 Model
"""
global model
print("Executing init() method...")
start = t.default_timer()
model = ResNet152(weights='imagenet')
end = t.default_timer()
print("Model loading time: {} ms".format(round((end-start)*1000, 2)))
def run(inputString):
""" Classify the input using the loaded model
"""
start = t.default_timer()
responses = []
base64Dict = json.loads(inputString)
for k, v in base64Dict.items():
img_file_name, base64Img = k, v
decoded_img = base64.b64decode(base64Img)
img_buffer = BytesIO(decoded_img)
imageData = Image.open(img_buffer).convert("RGB")
img = ImageOps.fit(imageData, (224, 224), Image.ANTIALIAS)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = model.predict(x)
preds = decode_predictions(preds, top=3)[0]
resp = {img_file_name: preds}
responses.append(resp)
end = t.default_timer()
return (responses, "Predictions took {0} ms".format(round((end-start)*1000, 2)))
def img_to_json(img_path):
with open(img_path, 'rb') as file:
encoded = base64.b64encode(file.read())
img_dict = {img_path: encoded.decode('utf-8')}
body = json.dumps(img_dict)
return body
if __name__ == "__main__":
init()
img_path = 'elephant.jpg'
body = img_to_json(img_path)
resp = run(body)
print(resp)

Binary data
resnet152/elephant.jpg

Binary file not shown.

Before  |  Width:  |  Height:  |  Size: 197 KiB

View File

@@ -1,34 +0,0 @@
from flask import Flask, request, Response
import json
import tensorflow as tf
from driver import *  # provides init() and run() for model loading and scoring

app = Flask(__name__)

@app.route("/score", methods=['POST'])
def scoreRRS():
    """ Endpoint for scoring
    """
    if request.headers['Content-Type'] != 'application/json':
        return Response(json.dumps({}), status=415, mimetype='application/json')
    input_data = request.json['input']
    response = run(input_data)
    print(response)
    return json.dumps({'result': str(response)})

@app.route("/")
def healthy():
    return "Healthy"

# TensorFlow version
@app.route('/version', methods=['GET'])
def version_request():
    return tf.__version__

if __name__ == "__main__":
    init()
    app.run(host='0.0.0.0', port=5000)
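For reference, a minimal local smoke test of the routes above might look like the following sketch. It assumes the modules are saved as `app.py` and `driver.py`, that `elephant.jpg` is in the working directory, and it uses Flask's built-in test client so no server needs to be running.

```python
import json

from app import app, init       # Flask app above; init() comes via `from driver import *`
from driver import img_to_json  # helper that base64-encodes an image into a JSON string

init()  # load the ResNet152 weights once, as the __main__ block would before serving
client = app.test_client()
resp = client.post("/score",
                   data=json.dumps({"input": img_to_json("elephant.jpg")}),
                   content_type="application/json")
print(resp.get_data(as_text=True))  # JSON with a 'result' field: top-3 predictions plus timing
```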

View File

@@ -1,11 +0,0 @@
Keras==2.1.5
Pillow==5.0.0
click==6.7
configparser==3.5.0
Flask==0.12.2
gunicorn==19.6.0
json-logging-py==0.2
MarkupSafe==1.0
olefile==0.44
requests==2.18.4

View File

@@ -1,372 +0,0 @@
# -*- coding: utf-8 -*-
"""ResNet152 model for Keras.
# Reference:
- [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
Adaptation of code from flyyufelix, mvoelk, BigMoyan, fchollet at https://github.com/adamcasson/resnet152
"""
import numpy as np
import warnings
from keras.layers import Input
from keras.layers import Dense
from keras.layers import Activation
from keras.layers import Flatten
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import GlobalMaxPooling2D
from keras.layers import ZeroPadding2D
from keras.layers import AveragePooling2D
from keras.layers import GlobalAveragePooling2D
from keras.layers import BatchNormalization
from keras.layers import add
from keras.models import Model
import keras.backend as K
from keras.engine.topology import get_source_inputs
from keras.utils import layer_utils
from keras import initializers
from keras.engine import Layer, InputSpec
from keras.preprocessing import image
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import decode_predictions
from keras.applications.imagenet_utils import preprocess_input
from keras.applications.imagenet_utils import _obtain_input_shape
import sys
sys.setrecursionlimit(3000)
WEIGHTS_PATH = 'https://github.com/adamcasson/resnet152/releases/download/v0.1/resnet152_weights_tf.h5'
WEIGHTS_PATH_NO_TOP = 'https://github.com/adamcasson/resnet152/releases/download/v0.1/resnet152_weights_tf_notop.h5'
class Scale(Layer):
"""Custom Layer for ResNet used for BatchNormalization.
Learns a set of weights and biases used for scaling the input data.
The output is simply an element-wise multiplication of the input by 'gamma'
plus a constant offset 'beta':
out = in * gamma + beta,
where 'gamma' and 'beta' are the learned weights and biases.
Keyword arguments:
axis -- integer, axis along which to normalize in mode 0. For instance,
if your input tensor has shape (samples, channels, rows, cols),
set axis to 1 to normalize per feature map (channels axis).
momentum -- momentum in the computation of the exponential average
of the mean and standard deviation of the data, for
feature-wise normalization.
weights -- Initialization weights.
List of 2 Numpy arrays, with shapes:
`[(input_shape,), (input_shape,)]`
beta_init -- name of initialization function for shift parameter
(see [initializers](../initializers.md)), or alternatively,
Theano/TensorFlow function to use for weights initialization.
This parameter is only relevant if you don't pass a `weights` argument.
gamma_init -- name of initialization function for scale parameter (see
[initializers](../initializers.md)), or alternatively,
Theano/TensorFlow function to use for weights initialization.
This parameter is only relevant if you don't pass a `weights` argument.
"""
def __init__(self, weights=None, axis=-1, momentum = 0.9, beta_init='zero', gamma_init='one', **kwargs):
self.momentum = momentum
self.axis = axis
self.beta_init = initializers.get(beta_init)
self.gamma_init = initializers.get(gamma_init)
self.initial_weights = weights
super(Scale, self).__init__(**kwargs)
def build(self, input_shape):
self.input_spec = [InputSpec(shape=input_shape)]
shape = (int(input_shape[self.axis]),)
self.gamma = K.variable(self.gamma_init(shape), name='%s_gamma'%self.name)
self.beta = K.variable(self.beta_init(shape), name='%s_beta'%self.name)
self.trainable_weights = [self.gamma, self.beta]
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
def call(self, x, mask=None):
input_shape = self.input_spec[0].shape
broadcast_shape = [1] * len(input_shape)
broadcast_shape[self.axis] = input_shape[self.axis]
out = K.reshape(self.gamma, broadcast_shape) * x + K.reshape(self.beta, broadcast_shape)
return out
def get_config(self):
config = {"momentum": self.momentum, "axis": self.axis}
base_config = super(Scale, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def identity_block(input_tensor, kernel_size, filters, stage, block):
"""The identity_block is the block that has no conv layer at shortcut
Keyword arguments
input_tensor -- input tensor
kernel_size -- default 3, the kernel size of the middle conv layer in the main path
filters -- list of integers, the numbers of filters of the 3 conv layers in the main path
stage -- integer, current stage label, used for generating layer names
block -- 'a','b'..., current block label, used for generating layer names
"""
eps = 1.1e-5
if K.image_dim_ordering() == 'tf':
bn_axis = 3
else:
bn_axis = 1
nb_filter1, nb_filter2, nb_filter3 = filters
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
scale_name_base = 'scale' + str(stage) + block + '_branch'
x = Conv2D(nb_filter1, (1, 1), name=conv_name_base + '2a', use_bias=False)(input_tensor)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '2a')(x)
x = Scale(axis=bn_axis, name=scale_name_base + '2a')(x)
x = Activation('relu', name=conv_name_base + '2a_relu')(x)
x = ZeroPadding2D((1, 1), name=conv_name_base + '2b_zeropadding')(x)
x = Conv2D(nb_filter2, (kernel_size, kernel_size), name=conv_name_base + '2b', use_bias=False)(x)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '2b')(x)
x = Scale(axis=bn_axis, name=scale_name_base + '2b')(x)
x = Activation('relu', name=conv_name_base + '2b_relu')(x)
x = Conv2D(nb_filter3, (1, 1), name=conv_name_base + '2c', use_bias=False)(x)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '2c')(x)
x = Scale(axis=bn_axis, name=scale_name_base + '2c')(x)
x = add([x, input_tensor], name='res' + str(stage) + block)
x = Activation('relu', name='res' + str(stage) + block + '_relu')(x)
return x
def conv_block(input_tensor, kernel_size, filters, stage, block, strides=(2, 2)):
"""conv_block is the block that has a conv layer at shortcut
Keyword arguments:
input_tensor -- input tensor
kernel_size -- default 3, the kernel size of the middle conv layer in the main path
filters -- list of integers, the numbers of filters of the 3 conv layers in the main path
stage -- integer, current stage label, used for generating layer names
block -- 'a','b'..., current block label, used for generating layer names
Note that from stage 3, the first conv layer in the main path uses strides=(2, 2),
and the shortcut should have strides=(2, 2) as well.
"""
eps = 1.1e-5
if K.image_dim_ordering() == 'tf':
bn_axis = 3
else:
bn_axis = 1
nb_filter1, nb_filter2, nb_filter3 = filters
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
scale_name_base = 'scale' + str(stage) + block + '_branch'
x = Conv2D(nb_filter1, (1, 1), strides=strides, name=conv_name_base + '2a', use_bias=False)(input_tensor)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '2a')(x)
x = Scale(axis=bn_axis, name=scale_name_base + '2a')(x)
x = Activation('relu', name=conv_name_base + '2a_relu')(x)
x = ZeroPadding2D((1, 1), name=conv_name_base + '2b_zeropadding')(x)
x = Conv2D(nb_filter2, (kernel_size, kernel_size),
name=conv_name_base + '2b', use_bias=False)(x)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '2b')(x)
x = Scale(axis=bn_axis, name=scale_name_base + '2b')(x)
x = Activation('relu', name=conv_name_base + '2b_relu')(x)
x = Conv2D(nb_filter3, (1, 1), name=conv_name_base + '2c', use_bias=False)(x)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '2c')(x)
x = Scale(axis=bn_axis, name=scale_name_base + '2c')(x)
shortcut = Conv2D(nb_filter3, (1, 1), strides=strides,
name=conv_name_base + '1', use_bias=False)(input_tensor)
shortcut = BatchNormalization(epsilon=eps, axis=bn_axis, name=bn_name_base + '1')(shortcut)
shortcut = Scale(axis=bn_axis, name=scale_name_base + '1')(shortcut)
x = add([x, shortcut], name='res' + str(stage) + block)
x = Activation('relu', name='res' + str(stage) + block + '_relu')(x)
return x
def ResNet152(include_top=True, weights=None,
input_tensor=None, input_shape=None,
large_input=False, pooling=None,
classes=1000):
"""Instantiate the ResNet152 architecture.
Keyword arguments:
include_top -- whether to include the fully-connected layer at the
top of the network. (default True)
weights -- one of `None` (random initialization) or "imagenet"
(pre-training on ImageNet). (default None)
input_tensor -- optional Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model. (default None)
input_shape -- optional shape tuple, only to be specified if
`include_top` is False (otherwise the input shape has to be
`(224, 224, 3)` (with `channels_last` data format) or
`(3, 224, 224)` (with `channels_first` data format). It should
have exactly 3 input channels, and width and height should be
no smaller than 197. E.g. `(200, 200, 3)` would be one valid value.
(default None)
large_input -- if True, then the input shape expected will be
`(448, 448, 3)` (with `channels_last` data format) or
`(3, 448, 448)` (with `channels_first` data format). (default False)
pooling -- Optional pooling mode for feature extraction when
`include_top` is `False`.
- `None` means that the output of the model will be the 4D
tensor output of the last convolutional layer.
- `avg` means that global average pooling will be applied to
the output of the last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will be applied.
(default None)
classes -- optional number of classes to classify image into, only
to be specified if `include_top` is True, and if no `weights`
argument is specified. (default 1000)
Returns:
A Keras model instance.
Raises:
ValueError: in case of invalid argument for `weights`,
or invalid input shape.
"""
if weights not in {'imagenet', None}:
raise ValueError('The `weights` argument should be either '
'`None` (random initialization) or `imagenet` '
'(pre-training on ImageNet).')
if weights == 'imagenet' and include_top and classes != 1000:
raise ValueError('If using `weights` as imagenet with `include_top`'
' as true, `classes` should be 1000')
eps = 1.1e-5
if large_input:
img_size = 448
else:
img_size = 224
# Determine proper input shape
input_shape = _obtain_input_shape(input_shape,
default_size=img_size,
min_size=197,
data_format=K.image_data_format(),
require_flatten=include_top)
if input_tensor is None:
img_input = Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
# handle dimension ordering for different backends
if K.image_dim_ordering() == 'tf':
bn_axis = 3
else:
bn_axis = 1
x = ZeroPadding2D((3, 3), name='conv1_zeropadding')(img_input)
x = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', use_bias=False)(x)
x = BatchNormalization(epsilon=eps, axis=bn_axis, name='bn_conv1')(x)
x = Scale(axis=bn_axis, name='scale_conv1')(x)
x = Activation('relu', name='conv1_relu')(x)
x = MaxPooling2D((3, 3), strides=(2, 2), name='pool1')(x)
x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')
x = identity_block(x, 3, [64, 64, 256], stage=2, block='c')
x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
for i in range(1,8):
x = identity_block(x, 3, [128, 128, 512], stage=3, block='b'+str(i))
x = conv_block(x, 3, [256, 256, 1024], stage=4, block='a')
for i in range(1,36):
x = identity_block(x, 3, [256, 256, 1024], stage=4, block='b'+str(i))
x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')
if large_input:
x = AveragePooling2D((14, 14), name='avg_pool')(x)
else:
x = AveragePooling2D((7, 7), name='avg_pool')(x)
# include classification layer by default, not included for feature extraction
if include_top:
x = Flatten()(x)
x = Dense(classes, activation='softmax', name='fc1000')(x)
else:
if pooling == 'avg':
x = GlobalAveragePooling2D()(x)
elif pooling == 'max':
x = GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = Model(inputs, x, name='resnet152')
# load weights
if weights == 'imagenet':
if include_top:
weights_path = get_file('resnet152_weights_tf.h5',
WEIGHTS_PATH,
cache_subdir='models',
md5_hash='cdb18a2158b88e392c0905d47dcef965')
else:
weights_path = get_file('resnet152_weights_tf_notop.h5',
WEIGHTS_PATH_NO_TOP,
cache_subdir='models',
md5_hash='4a90dcdafacbd17d772af1fb44fc2660')
model.load_weights(weights_path, by_name=True)
if K.backend() == 'theano':
layer_utils.convert_all_kernels_in_model(model)
if include_top:
maxpool = model.get_layer(name='avg_pool')
shape = maxpool.output_shape[1:]
dense = model.get_layer(name='fc1000')
layer_utils.convert_dense_weights_data_format(dense, shape, 'channels_first')
if K.image_data_format() == 'channels_first' and K.backend() == 'tensorflow':
warnings.warn('You are using the TensorFlow backend, yet you '
'are using the Theano '
'image data format convention '
'(`image_data_format="channels_first"`). '
'For best performance, set '
'`image_data_format="channels_last"` in '
'your Keras config '
'at ~/.keras/keras.json.')
return model
if __name__ == '__main__':
model = ResNet152(include_top=True, weights='imagenet')
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224,224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
preds = model.predict(x)
print('Predicted:', decode_predictions(preds))