Song Duong 2019-11-29 15:44:21 +01:00
Parent 8d37d9b309
Commit 931c448969
51 changed files with 2756 additions and 3 deletions

Binary data
Documentation/mlops.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 221 KiB

View File

@ -9,7 +9,7 @@ This is an IoT solution based on Azure IoT Edge that can perform object detectio
* PiCamera (with the 'Camera' option enabled on your Rpi 3)
## Getting started
- Clone this repo, to test on your machine, you need to download a [YOLOv3 model](https://onnxzoo.blob.core.windows.net/models/opset_10/yolov3/yolov3.onnx), rename it 'model.onnx' and move it inside the (modules > ObjectDetection >) app/ folder.
+ Clone this repo. To test on your machine, you need to download a [YOLOv3 model](https://azurecviotedge.blob.core.windows.net/mlops/model.onnx) and move it inside the (modules > ObjectDetection >) app/ folder.
If you use your own trained YOLOv3 model, you will also need to update the labels.txt file.
## (Quick) Building and deploying the solution

View File

@ -71,7 +71,7 @@ class YOLOv3Predict:
def predict(self,image):
image_data = self.preprocess(image)
- image_size = np.array([image.size[1], image.size[0]], dtype=np.int32).reshape(1, 2)
+ image_size = np.array([image.size[1], image.size[0]], dtype=np.float32).reshape(1, 2)
input_names = session.get_inputs()
feed_dict = {input_names[0].name: image_data, input_names[1].name: image_size}
boxes, scores, indices = session.run([], input_feed=feed_dict)

View File

@ -0,0 +1,71 @@
pool:
name: Azure Pipelines
vmImage: 'ubuntu-latest'
variables:
- group: iotedge-vg
- group: devopsforai-aml-vg
steps:
- task: UsePythonVersion@0
displayName: 'Use Python 3.x'
- bash: |
python -m pip install python-dotenv
python -m pip install azureml-sdk
displayName: 'Bash Script'
- task: PythonScript@0
displayName: 'Run a Python script'
inputs:
scriptPath: '$(Build.SourcesDirectory)/MLOps/ml_service/util/retrieve_model.py'
workingDirectory: '$(Build.SourcesDirectory)/MLOps/ml_service/util'
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection/modules/ObjectDetection/app'
inputs:
SourceFolder: '$(Build.SourcesDirectory)/MLOps/ml_service/util'
Contents: |
yolo.onnx
TargetFolder: '$(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection/modules/ObjectDetection/app'
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection/modules/ObjectDetection/app'
inputs:
SourceFolder: '$(Build.SourcesDirectory)/MLOps/ml_service/util/$(MODEL_DATA_PATH_DATASTORE)'
Contents: |
classes.txt
TargetFolder: '$(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection/modules/ObjectDetection/app'
- task: AzureIoTEdge@2
displayName: 'Azure IoT Edge - Build module images'
inputs:
templateFilePath: '$(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection/deployment.template.json'
defaultPlatform: arm32v7
- task: AzureIoTEdge@2
displayName: 'Azure IoT Edge - Push module images'
inputs:
action: 'Push module images'
azureSubscriptionEndpoint: AzureResourceConnection
azureContainerRegistry: '{"loginServer":"$(ACR_ADDRESS)", "id" : "/subscriptions/$(SUBSCRIPTION_ID)/resourceGroups/$(BASE_NAME)-AML-RG/providers/Microsoft.ContainerRegistry/registries/$(ACR_USER)"}'
templateFilePath: '$(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection/deployment.template.json'
defaultPlatform: arm32v7
fillRegistryCredential: false
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
SourceFolder: '$(Build.SourcesDirectory)/MLOps/Rpi3-objectdetection'
Contents: |
deployment.template.json
**/module.json
TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact: iot'
inputs:
ArtifactName: iot

View File

@ -0,0 +1,37 @@
pool:
vmImage: 'ubuntu-latest'
container: mcr.microsoft.com/mlops/python:latest
variables:
- group: devopsforai-aml-vg
steps:
- bash: |
# Invoke the Python script that builds and publishes the training pipeline
python3 $(Build.SourcesDirectory)/MLOps/ml_service/pipelines/build_train_pipeline_for_yolo.py
failOnStderr: 'false'
env:
SP_APP_SECRET: '$(SP_APP_SECRET)'
displayName: 'Publish Azure Machine Learning Pipeline'
enabled: 'true'
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
SourceFolder: '$(Build.SourcesDirectory)/MLOps'
TargetFolder: '$(Build.ArtifactStagingDirectory)'
Contents: |
ml_service/pipelines/?(run_train_pipeline.py|*.json)
- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact'
inputs:
ArtifactName: 'mlops-pipelines'
publishLocation: 'container'
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
TargetPath: '$(Build.ArtifactStagingDirectory)'

View File

@ -0,0 +1,43 @@
pr: none
trigger:
branches:
include:
- master
pool:
vmImage: 'ubuntu-latest'
container: mcr.microsoft.com/mlops/python:latest
variables:
- group: devopsforai-aml-vg
steps:
- bash: |
# Invoke the Python script that builds and publishes the training pipeline
python3 $(Build.SourcesDirectory)/MLOps/ml_service/pipelines/build_train_pipeline_for_yolo.py
failOnStderr: 'false'
env:
SP_APP_SECRET: '$(SP_APP_SECRET)'
displayName: 'Publish Azure Machine Learning Pipeline'
enabled: 'true'
- task: CopyFiles@2
displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
inputs:
SourceFolder: '$(Build.SourcesDirectory)/MLOps'
TargetFolder: '$(Build.ArtifactStagingDirectory)'
Contents: |
ml_service/pipelines/?(run_train_pipeline.py|*.json)
- task: PublishBuildArtifacts@1
displayName: 'Publish Artifact'
inputs:
ArtifactName: 'mlops-pipelines'
publishLocation: 'container'
pathtoPublish: '$(Build.ArtifactStagingDirectory)'
TargetPath: '$(Build.ArtifactStagingDirectory)'

17
MLOps/README.md Normal file
View File

@ -0,0 +1,17 @@
# MLOps Implementation for YOLOv3
## Prerequisites
* An [Azure account](https://account.microsoft.com/account?lang=en-us)
* An Azure subscription
* An Azure IoT Hub set up
* Raspberry Pi 3+ (with Raspbian Stretch and [Azure IoT Edge runtime](https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux-arm/?WT.mc_id=devto-blog-dglover) installed)
* PiCamera (with the 'Camera' option enabled on your Rpi 3)
* An Azure DevOps account
## Overview
This MLOps implementation is based on the [MLOps template](https://github.com/microsoft/MLOpsPython), with the goal of implementing pipelines that automate the whole process, from training Computer Vision models to deploying them on IoT Edge devices.
This implementation features the retraining of YOLOv3 on the VOC dataset, as shown in the first section of the guide, and targets the Raspberry Pi 3B as the edge device.
Still, all the good practices are demonstrated, and this implementation can be re-used as a template for other Computer Vision models as well!
<p align="center"><img width="80%" src="https://github.com/microsoft/azure-iot-edge-cv-model-samples/blob/master/Documentation/mlops.png" /></p>
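As a reference for wiring the release stage to the training stage, here is a minimal sketch of how the published Azure ML training pipeline could be looked up and submitted with the `azureml-sdk`. The workspace config, pipeline name, and experiment name below are placeholders (assumptions), not values taken from this repo:
```
# Minimal sketch (placeholder names): trigger the published AML training pipeline.
from azureml.core import Workspace
from azureml.pipeline.core import PublishedPipeline

# Assumes a config.json for the AML workspace is available to the agent.
ws = Workspace.from_config()

# "yolov3-train-pipeline" is a placeholder; list() returns active pipelines by default.
candidates = [p for p in PublishedPipeline.list(ws) if p.name == "yolov3-train-pipeline"]
if not candidates:
    raise RuntimeError("No active published training pipeline found")

# Submit under a placeholder experiment name and wait for the pipeline run to finish.
run = candidates[0].submit(ws, experiment_name="yolov3-retraining")
run.wait_for_completion(show_output=True)
```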

1
MLOps/Rpi3-objectdetection/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
config/

51
MLOps/Rpi3-objectdetection/.vscode/launch.json vendored Normal file
View File

@ -0,0 +1,51 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "ObjectDetection Remote Debug (Python)",
"type": "python",
"request": "attach",
"port": 5678,
"host": "localhost",
"logToFile": true,
"redirectOutput": true,
"pathMappings": [
{
"localRoot": "${workspaceFolder}/modules/ObjectDetection",
"remoteRoot": "/app"
}
],
"windows": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}\\modules\\ObjectDetection",
"remoteRoot": "/app"
}
]
}
},
{
"name": "CameraModule Remote Debug (Python)",
"type": "python",
"request": "attach",
"port": 5679,
"host": "localhost",
"logToFile": true,
"redirectOutput": true,
"pathMappings": [
{
"localRoot": "${workspaceFolder}/modules/CameraModule",
"remoteRoot": "/app"
}
],
"windows": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}\\modules\\CameraModule",
"remoteRoot": "/app"
}
]
}
}
]
}

6
MLOps/Rpi3-objectdetection/.vscode/settings.json vendored Normal file
View File

@ -0,0 +1,6 @@
{
"azure-iot-edge.defaultPlatform": {
"platform": "arm32v7",
"alias": null
}
}

View File

@ -0,0 +1,32 @@
# YOLOv3 Object Detection on RPi 3 with Azure IoT Edge
This is an IoT solution based on Azure IoT Edge that can perform object detection (YOLOv3) on a Raspberry Pi 3 equipped with the PiCamera v2.
## Prerequisites
* An [Azure account](https://account.microsoft.com/account?lang=en-us)
* An Azure subscription
* An Azure IoT Hub set up (you can quickly configure one using ../01-configure-iot-hub.ipynb)
* Raspberry Pi 3+ (with Raspbian Stretch and [Azure IoT Edge runtime](https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux-arm/?WT.mc_id=devto-blog-dglover) installed)
* PiCamera (with the 'Camera' option enabled on your Rpi 3)
## Getting started
Clone this repo. To test on your machine, you need to download a [YOLOv3 model](https://onnxzoo.blob.core.windows.net/models/opset_10/yolov3/yolov3.onnx), rename it 'model.onnx', and move it inside the (modules > ObjectDetection >) app/ folder.
If you use your own trained YOLOv3 model, you will also need to update the labels.txt file.
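If you prefer to script that step, the snippet below is a minimal sketch that fetches the model from the same URL and drops it into the expected folder (run it from the solution root; the destination path is an assumption about your local layout):
```
# Minimal helper sketch: download the YOLOv3 ONNX model and place it where the module expects it.
# Assumes this is run from the solution root; adjust the paths if your layout differs.
import os
import urllib.request

MODEL_URL = "https://onnxzoo.blob.core.windows.net/models/opset_10/yolov3/yolov3.onnx"
DEST = os.path.join("modules", "ObjectDetection", "app", "model.onnx")

os.makedirs(os.path.dirname(DEST), exist_ok=True)
urllib.request.urlretrieve(MODEL_URL, DEST)
print("Downloaded model to", DEST)
```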
## (Quick) Building and deploying the solution
You can either follow the guide to set up the build and deployment yourself or, if you just want to test the solution, run the following commands (you will need to have the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest) installed):
```
az login
az iot edge set-modules --device-id [device id] --hub-name [hub name] --content deployment.arm32v7.json
```
(Don't forget to replace [device id] and [hub name] with your own device ID and hub name.)
You will need to wait a couple of minutes for your Raspberry Pi to pull all the Docker modules and start running them.
## Building and deploying your own solution
If you want your Docker modules to be hosted on your own private container registry, open the .env file and specify your registry credentials:
```
CONTAINER_REGISTRY_ADDRESS= ...
CONTAINER_REGISTRY_USERNAME= ...
CONTAINER_REGISTRY_PASSWORD= ...
```
Then follow the guide for more information on how to deploy it on your RPi 3.

View File

@ -0,0 +1,79 @@
{
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{}"
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
},
"env": {
"OptimizeForPerformance": {
"value": "false"
}
}
}
},
"modules": {
"object-detection": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "azuretest61d4bf48.azurecr.io/objectdetection:0.0.2-arm32v7",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"8000/tcp\":[{\"HostPort\":\"8885\"}]}}}"
}
},
"camera-module": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "azuretest61d4bf48.azurecr.io/cameramodule:0.0.1-arm32v7",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/dev/vchiq:/dev/vchiq\"],\"Privileged\":true,\"Devices\":[{\"PathOnHost\":\"/dev/vchiq\",\"PathInContainer\":\"/dev/vchiq\",\"CgroupPermissions\":\"mrw\"}]}}"
},
"env": {
"IMAGE_PROCESSING_ENDPOINT": {
"value": "http://object-detection:8885/image"
}
}
}
}
}
},
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {
"CameraModuleToIoTHub": "FROM /messages/modules/camera-module/outputs/output1 INTO $upstream"
},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
}
}

View File

@ -0,0 +1,120 @@
{
"$schema-template": "2.0.0",
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
"azuretest61d4bf48": {
"username": "$ACR_USER",
"password": "$ACR_PASSWORD",
"address": "$ACR_ADDRESS"
}
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": {}
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": {
"HostConfig": {
"PortBindings": {
"5671/tcp": [
{
"HostPort": "5671"
}
],
"8883/tcp": [
{
"HostPort": "8883"
}
],
"443/tcp": [
{
"HostPort": "443"
}
]
}
}
}
},
"env": {
"OptimizeForPerformance": {
"value": "false"
}
}
}
},
"modules": {
"object-detection": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "${MODULES.ObjectDetection}",
"createOptions": {
"HostConfig": {
"PortBindings": {
"8000/tcp": [
{
"HostPort": "8885"
}
]
}
}
}
}
},
"camera-module": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "${MODULES.CameraModule}",
"createOptions": {
"HostConfig":{
"Binds":["/dev/vchiq:/dev/vchiq", "/dev/vcsm:/dev/vcsm"],
"Devices":[{"PathOnHost":"/dev/vchiq","PathInContainer":"/dev/vchiq","CgroupPermissions":"mrw"},{"PathOnHost":"/dev/vcsm","PathInContainer":"/dev/vcsm","CgroupPermissions":"mrw"}]
}
}
},
"env": {
"IMAGE_PROCESSING_ENDPOINT": {
"value":"http://object-detection:8885/image"
}
}
}
}
}
},
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {
"CameraModuleToIoTHub": "FROM /messages/modules/camera-module/outputs/output1 INTO $upstream"
},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
}
}
}

View File

@ -0,0 +1,18 @@
FROM ubuntu:xenial
WORKDIR /app
RUN apt-get update && \
apt-get install -y --no-install-recommends libcurl4-openssl-dev python3-pip libboost-python1.58-dev libpython3-dev && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
CMD [ "python3", "-u", "./main.py" ]

View File

@ -0,0 +1,20 @@
FROM ubuntu:xenial
WORKDIR /app
RUN apt-get update && \
apt-get install -y --no-install-recommends libcurl4-openssl-dev python3-pip libboost-python1.58-dev libpython3-dev && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
RUN pip install setuptools
RUN pip install ptvsd==4.1.3
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
CMD [ "python3", "-u", "./main.py" ]

View File

@ -0,0 +1,35 @@
FROM balenalib/raspberrypi3:stretch
# The balena base image for building apps on Raspberry Pi 3.
# Enforces cross-compilation through QEMU
RUN [ "cross-build-start" ]
# Update package index and install dependencies
RUN install_packages \
python3 \
python3-pip \
python3-dev \
build-essential \
libopenjp2-7-dev \
zlib1g-dev \
libatlas-base-dev \
wget \
libboost-python1.62.0 \
curl \
libcurl4-openssl-dev
# Install Python packages
COPY /build/arm32v7-requirements.txt ./
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --index-url=https://www.piwheels.org/simple -r arm32v7-requirements.txt
# Cleanup
RUN rm -rf /var/lib/apt/lists/* \
&& apt-get -y autoremove
RUN [ "cross-build-end" ]
ADD /app/ .
ENTRYPOINT [ "python3", "-u", "./main.py" ]

View File

@ -0,0 +1,94 @@
import io
import time
import threading
import json
import requests
class ImageProcessor(threading.Thread):
def __init__(self, owner):
super(ImageProcessor, self).__init__()
self.stream = io.BytesIO()
self.event = threading.Event()
self.terminated = False
self.owner = owner
self.start()
def sendFrameForProcessing(self):
"""send a POST request to the AI module server"""
headers = {'Content-Type': 'application/octet-stream'}
try:
self.stream.seek(0)
response = requests.post(self.owner.endPointForProcessing, headers = headers, data = self.stream)
except Exception as e:
print('sendFrameForProcessing Exception -' + str(e))
return "[]"
return json.dumps(response.json())
def run(self):
# This method runs in a separate thread
while not self.terminated:
# Wait for an image to be written to the stream
if self.event.wait(1):
try:
result = self.sendFrameForProcessing()
if result != "[]":
self.owner.sendToHubCallback(result) # send message to the hub if an object has been detected
print(result)
#self.owner.done=True
# uncomment above if you want the process to terminate for some reason
finally:
# Reset the stream and event
self.stream.seek(0)
self.stream.truncate()
self.event.clear()
# Return ourselves to the available pool
with self.owner.lock:
self.owner.pool.append(self)
class ProcessOutput(object):
def __init__(self, endPoint, functionCallBack):
self.done = False
# Construct a pool of 4 image processors along with a lock
# to control access between threads
# Note that you can vary the number depending on how many processors your device has
self.lock = threading.Lock()
self.endPointForProcessing = endPoint
self.sendToHubCallback = functionCallBack
self.pool = [ImageProcessor(self) for i in range(4)]
self.processor = None
def write(self, buf):
if buf.startswith(b'\xff\xd8'):
# New frame; set the current processor going and grab
# a spare one
if self.processor:
self.processor.event.set()
with self.lock:
if self.pool:
self.processor = self.pool.pop()
else:
# No processor's available, we'll have to skip
# this frame; you may want to print a warning
# here to see whether you hit this case
self.processor = None
if self.processor:
self.processor.stream.write(buf)
def flush(self):
# When told to flush (this indicates end of recording), shut
# down in an orderly fashion. First, add the current processor
# back to the pool
if self.processor:
with self.lock:
self.pool.append(self.processor)
self.processor = None
# Now, empty the pool, joining each thread as we go
while True:
with self.lock:
try:
proc = self.pool.pop()
except IndexError:
    break # pool is empty; every processor has been joined, so stop looping
proc.terminated = True
proc.join()

View File

@ -0,0 +1,94 @@
# Copyright (c) Microsoft. All rights reserved.
# Licensed under the MIT license. See LICENSE file in the project root for
# full license information.
import random
import time
import sys
# use env variables
import os
import iothub_client
# pylint: disable=E0611
from iothub_client import IoTHubModuleClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError
# picamera imports
import PiCamStream
from PiCamStream import ProcessOutput
from picamera import PiCamera
# messageTimeout - the maximum time in milliseconds until a message times out.
# The timeout period starts at IoTHubModuleClient.send_event_async.
# By default, messages do not expire.
MESSAGE_TIMEOUT = 10000
# global counters
SEND_CALLBACKS = 0
# Choose HTTP, AMQP or MQTT as transport protocol. Currently only MQTT is supported.
PROTOCOL = IoTHubTransportProvider.MQTT
# Send message to the Hub and forwards to the "output1" queue
def send_to_Hub_callback(strMessage):
message = IoTHubMessage(bytearray(strMessage, 'utf8'))
hubManager.send_event_to_output("output1", message, 0)
# Callback received when the message that we're forwarding is processed
def send_confirmation_callback(message, result, user_context):
global SEND_CALLBACKS
print ( "Confirmation[%d] received for message with result = %s" % (user_context, result) )
map_properties = message.properties()
key_value_pair = map_properties.get_internals()
print ( " Properties: %s" % key_value_pair )
SEND_CALLBACKS += 1
print ( " Total calls confirmed: %d" % SEND_CALLBACKS )
class HubManager(object):
def __init__(
self,
messageTimeout,
protocol):
self.client_protocol = protocol
self.client = IoTHubModuleClient()
self.client.create_from_environment(protocol)
# set the time until a message times out
self.client.set_option("messageTimeout", messageTimeout)
# Forwards the message received onto the next stage in the process.
def send_event_to_output(self, outputQueueName, event, send_context):
self.client.send_event_async(
outputQueueName, event, send_confirmation_callback, send_context)
def main(messageTimeout, protocol, imageProcessingEndpoint=""):
try:
print ( "\nPython %s\n" % sys.version )
print ( "PiCamera module running. Press CTRL+C to exit" )
try:
global hubManager
hubManager = HubManager(messageTimeout, protocol)
except IoTHubError as iothub_error:
print ( "Unexpected error %s from IoTHub" % iothub_error )
return
with PiCamera(resolution='VGA') as camera:
camera.start_preview()
time.sleep(2)
output = ProcessOutput(imageProcessingEndpoint, send_to_Hub_callback)
camera.start_recording(output, format='mjpeg')
while not output.done:
camera.wait_recording(1)
camera.stop_recording()
except KeyboardInterrupt:
print ("PiCamera module stopped")
if __name__ == '__main__':
try:
IMAGE_PROCESSING_ENDPOINT = os.getenv('IMAGE_PROCESSING_ENDPOINT', "")
except ValueError as error:
print ( error )
sys.exit(1)
main(MESSAGE_TIMEOUT, PROTOCOL, IMAGE_PROCESSING_ENDPOINT)

View File

@ -0,0 +1,4 @@
azure-iothub-device-client~=1.4.3
numpy
requests
picamera

View File

@ -0,0 +1,17 @@
{
"$schema-version": "0.0.1",
"description": "",
"image": {
"repository": "$ACR_ADDRESS/cameramodule",
"tag": {
"version": "0.0.${BUILD_BUILDID}",
"platforms": {
"amd64": "./Dockerfile.amd64",
"amd64.debug": "./Dockerfile.amd64.debug",
"arm32v7": "./Dockerfile.arm32v7"
}
},
"buildOptions": []
},
"language": "python"
}

View File

@ -0,0 +1,18 @@
FROM ubuntu:xenial
WORKDIR /app
RUN apt-get update && \
apt-get install -y --no-install-recommends libcurl4-openssl-dev python3-pip libboost-python1.58-dev libpython3-dev && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
CMD [ "python3", "-u", "./main.py" ]

View File

@ -0,0 +1,20 @@
FROM ubuntu:xenial
WORKDIR /app
RUN apt-get update && \
apt-get install -y --no-install-recommends libcurl4-openssl-dev python3-pip libboost-python1.58-dev libpython3-dev && \
rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
RUN pip install setuptools
RUN pip install ptvsd==4.1.3
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
RUN useradd -ms /bin/bash moduleuser
USER moduleuser
CMD [ "python3", "-u", "./main.py" ]

View File

@ -0,0 +1,45 @@
FROM balenalib/raspberrypi3:stretch
# The balena base image for building apps on Raspberry Pi 3.
RUN [ "cross-build-start" ]
# Install basic dependencies
RUN install_packages \
python3 \
python3-pip \
python3-dev \
build-essential \
libopenjp2-7-dev \
libtiff5-dev \
zlib1g-dev \
libjpeg-dev \
libatlas-base-dev \
wget
# Install Python packages
COPY /build/arm32v7-requirements.txt ./
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --index-url=https://www.piwheels.org/simple -r arm32v7-requirements.txt
# Install ONNX Runtime from a prebuilt wheel (needs to be updated for future versions of onnxruntime)
COPY /build/onnxruntime-0.5.0-cp35-cp35m-linux_armv7l.whl ./
RUN pip3 install onnxruntime-0.5.0-cp35-cp35m-linux_armv7l.whl
# Cleanup
RUN rm -rf /var/lib/apt/lists/* \
&& apt-get -y autoremove
RUN [ "cross-build-end" ]
# Add the application and the model (as well as the class txt)
ADD app /app
# Expose the port
EXPOSE 8885
# Set the working directory
WORKDIR /app
# Run the flask server for the endpoints
CMD ["python3","app.py"]

View File

@ -0,0 +1,49 @@
import json
import os
import io
# Imports for the REST API
from flask import Flask, request # For development
from waitress import serve # For production
# Imports for image processing
from PIL import Image
# Imports from predict_yolov3.py
import predict_yolov3
from predict_yolov3 import YOLOv3Predict
app = Flask(__name__)
@app.route('/')
def index():
return 'Vision AI module listening'
# Prediction service /image route handles either
# - octet-stream image file
# - a multipart/form-data with files in the imageData parameter
@app.route('/image', methods=['POST'])
def predict_image_handler():
try:
imageData = None
if ('imageData' in request.files):
imageData = request.files['imageData']
else:
imageData = io.BytesIO(request.get_data())
img = Image.open(imageData)
results = model.predict(img)
return json.dumps(results)
except Exception as e:
print('EXCEPTION:', str(e))
return 'Error processing image', 500
if __name__ == '__main__':
# Load and initialize the model
global model
model = YOLOv3Predict('yolo.onnx', 'classes.txt')
model.initialize()
# Run the server
print("Running the Vision AI module...")
#app.run(host='0.0.0.0', port=8885, debug=True) # For development
serve(app, host='0.0.0.0', port=8885) # For production

View File

@ -0,0 +1,20 @@
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor

View File

@ -0,0 +1,91 @@
from PIL import Image
import numpy as np
import sys
import os
import numpy as np
import json
import onnxruntime
# Special json encoder for numpy types
class NumpyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, (np.int_, np.intc, np.intp, np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64)):
return int(obj)
elif isinstance(obj, (np.float_, np.float16, np.float32, np.float64)):
return float(obj)
elif isinstance(obj,(np.ndarray,)):
return obj.tolist()
return json.JSONEncoder.default(self, obj)
# Needed for preprocessing
def letterbox_image(image, size):
iw, ih = image.size
w, h = size
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
image = image.resize((nw,nh), Image.BICUBIC)
new_image = Image.new('RGB', size, (128,128,128))
new_image.paste(image, ((w-nw)//2, (h-nh)//2))
return new_image
class YOLOv3Predict:
def __init__(self, model, label_file):
self.model = model
self.label_file = label_file
self.label = []
def get_labels(self):
with open(self.label_file) as f:
for line in f:
self.label.append(line.rstrip())
def initialize(self):
global session
print('Loading model...')
self.get_labels()
session = onnxruntime.InferenceSession(self.model)
print('Model loaded!')
def preprocess(self,img):
model_image_size = (416, 416)
boxed_image = letterbox_image(img, tuple(reversed(model_image_size)))
image_data = np.array(boxed_image, dtype='float32')
image_data /= 255.
#image_data = np.transpose(image_data, [2, 0, 1])
image_data = np.expand_dims(image_data, 0)
return image_data
def postprocess(self, boxes, scores, indices):
out_boxes, out_scores, out_classes = [], [], []
for idx_ in indices:
out_classes.append(idx_[1])
out_scores.append(scores[tuple(idx_)])
idx_1 = (idx_[0], idx_[2])
out_boxes.append(boxes[idx_1])
return out_boxes, out_scores, out_classes
def predict(self,image):
image_data = self.preprocess(image)
image_size = np.array([image.size[1], image.size[0]], dtype=np.int32).reshape(1, 2)
input_names = session.get_inputs()
feed_dict = {input_names[0].name: image_data, input_names[1].name: image_size}
boxes, scores, indices = session.run([], input_feed=feed_dict)
predicted_boxes, predicted_scores, predicted_classes = self.postprocess(boxes, scores, indices)
results = []
for i,c in enumerate(predicted_classes):
data = {}
data[self.label[c]] = json.dumps(predicted_boxes[i].tolist()+[predicted_scores[i]], cls=NumpyEncoder)
results.append(data)
return results
if __name__ == "__main__":
global model
model = YOLOv3Predict('yolo.onnx', 'classes.txt')
model.initialize()
image = Image.open('person.jpg')
# input
results = model.predict(image)
print(results)

View File

@ -0,0 +1,4 @@
pillow
numpy
flask
waitress

Binary file not shown.

View File

@ -0,0 +1,17 @@
{
"$schema-version": "0.0.1",
"description": "",
"image": {
"repository": "$ACR_ADDRESS/objectdetection",
"tag": {
"version": "0.0.${BUILD_BUILDID}",
"platforms": {
"amd64": "./Dockerfile.amd64",
"amd64.debug": "./Dockerfile.amd64.debug",
"arm32v7": "./Dockerfile.arm32v7"
}
},
"buildOptions": []
},
"language": "python"
}

View File

@ -0,0 +1,354 @@
import os
import sys
import argparse
import inspect
import colorsys
import onnx
import numpy as np
import tensorflow as tf
import keras
from PIL import Image, ImageFont, ImageDraw
from keras import backend as K
from keras.layers import Input
from keras.models import load_model
from keras2onnx import convert_keras
from keras2onnx import set_converter
from keras2onnx.common.onnx_ops import apply_transpose, apply_identity, apply_cast
from keras2onnx.proto import onnx_proto
import yolo3
from yolo3.model import yolo_body, tiny_yolo_body, yolo_boxes_and_scores
from yolo3.utils import letterbox_image
class YOLOEvaluationLayer(keras.layers.Layer):
def __init__(self, **kwargs):
super(YOLOEvaluationLayer, self).__init__()
self.anchors = np.array(kwargs.get('anchors'))
self.num_classes = kwargs.get('num_classes')
def get_config(self):
config = {
"anchors": self.anchors,
"num_classes": self.num_classes,
}
return config
def call(self, inputs, **kwargs):
"""Evaluate YOLO model on given input and return filtered boxes."""
yolo_outputs = inputs[0:3]
input_image_shape = K.squeeze(inputs[3], axis=0)
num_layers = len(yolo_outputs)
anchor_mask = [[6, 7, 8], [3, 4, 5], [0, 1, 2]] if num_layers == 3 else [[3, 4, 5],
[1, 2, 3]] # default setting
input_shape = K.shape(yolo_outputs[0])[1:3] * 32
boxes = []
box_scores = []
for l in range(num_layers):
_boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l], self.anchors[anchor_mask[l]], self.num_classes,
input_shape, input_image_shape)
boxes.append(_boxes)
box_scores.append(_box_scores)
boxes = K.concatenate(boxes, axis=0)
box_scores = K.concatenate(box_scores, axis=0)
return [boxes, box_scores]
def compute_output_shape(self, input_shape):
assert isinstance(input_shape, list)
return [(None, 4), (None, None)]
class YOLONMSLayer(keras.layers.Layer):
def __init__(self, **kwargs):
super(YOLONMSLayer, self).__init__()
self.max_boxes = kwargs.get('max_boxes', 20)
self.score_threshold = kwargs.get('score_threshold', .6)
self.iou_threshold = kwargs.get('iou_threshold', .5)
self.num_classes = kwargs.get('num_classes')
def get_config(self):
config = {
"max_boxes": self.max_boxes,
"score_threshold": self.score_threshold,
"iou_threshold": self.iou_threshold,
"num_classes": self.num_classes,
}
return config
def call(self, inputs, **kwargs):
boxes = inputs[0]
box_scores = inputs[1]
mask = box_scores >= self.score_threshold
max_boxes_tensor = K.constant(self.max_boxes, dtype='int32')
boxes_ = []
scores_ = []
classes_ = []
for c in range(self.num_classes):
class_boxes = tf.boolean_mask(boxes, mask[:, c])
class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
nms_index = tf.image.non_max_suppression(
class_boxes, class_box_scores, max_boxes_tensor, iou_threshold=self.iou_threshold)
class_boxes = K.gather(class_boxes, nms_index)
class_box_scores = K.gather(class_box_scores, nms_index)
classes = K.ones_like(class_box_scores, 'int32') * c
boxes_.append(class_boxes)
scores_.append(class_box_scores)
classes_.append(classes)
boxes_ = K.concatenate(boxes_, axis=0)
scores_ = K.concatenate(scores_, axis=0)
classes_ = K.concatenate(classes_, axis=0)
boxes_r = tf.expand_dims(tf.expand_dims(boxes_, 0), 0)
scores_r = tf.expand_dims(tf.expand_dims(scores_, 0), 0)
return [boxes_r, scores_r, classes_]
def compute_output_shape(self, input_shape):
assert isinstance(input_shape, list)
return [(None, None, 4), (None, None, None), (None, None)]
class YOLO(object):
def __init__(self, model_path, model_data_path):
self.model_path = model_path # model path or trained weights path
self.anchors_path = model_data_path+'/yolo_anchors.txt'
self.classes_path = model_data_path+'/classes.txt'
self.score = 0.3
self.iou = 0.45
self.class_names = self._get_class()
self.anchors = self._get_anchors()
self.sess = K.get_session()
self.model_image_size = (416, 416) # fixed size or (None, None), hw
self.session = None
self.final_model = None
# Generate colors for drawing bounding boxes.
hsv_tuples = [(x / len(self.class_names), 1., 1.)
for x in range(len(self.class_names))]
self.colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
self.colors = list(
map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)),
self.colors))
np.random.seed(10101) # Fixed seed for consistent colors across runs.
np.random.shuffle(self.colors) # Shuffle colors to decorrelate adjacent classes.
np.random.seed(None) # Reset seed to default.
K.set_learning_phase(0)
@staticmethod
def _get_data_path(name):
path = os.path.expanduser(name)
if not os.path.isabs(path):
yolo3_dir = os.path.dirname(inspect.getabsfile(yolo3))
path = os.path.join(yolo3_dir, os.path.pardir, path)
return path
def _get_class(self):
classes_path = self._get_data_path(self.classes_path)
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def _get_anchors(self):
anchors_path = self._get_data_path(self.anchors_path)
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
return np.array(anchors).reshape(-1, 2)
def load_model(self):
model_path = self._get_data_path(self.model_path)
assert model_path.endswith('.h5'), 'Keras model or weights must be a .h5 file.'
# Load model, or construct model and load weights.
num_anchors = len(self.anchors)
num_classes = len(self.class_names)
is_tiny_version = num_anchors == 6 # default setting
try:
self.yolo_model = load_model(model_path, compile=False)
except:
self.yolo_model = tiny_yolo_body(Input(shape=(None, None, 3)), num_anchors // 2, num_classes) \
if is_tiny_version else yolo_body(Input(shape=(None, None, 3)), num_anchors // 3, num_classes)
self.yolo_model.load_weights(self.model_path) # make sure model, anchors and classes match
else:
assert self.yolo_model.layers[-1].output_shape[-1] == \
num_anchors / len(self.yolo_model.output) * (num_classes + 5), \
'Mismatch between model and given anchor and class sizes'
input_image_shape = keras.Input(shape=(2,), name='image_shape', dtype='int32')
image_input = keras.Input((None, None, 3), dtype='float32')
y1, y2, y3 = self.yolo_model(image_input)
boxes, box_scores = \
YOLOEvaluationLayer(anchors=self.anchors, num_classes=len(self.class_names))(
inputs=[y1, y2, y3, input_image_shape])
out_boxes, out_scores, out_indices = \
YOLONMSLayer(anchors=self.anchors, num_classes=len(self.class_names))(
inputs=[boxes, box_scores])
self.final_model = keras.Model(inputs=[image_input, input_image_shape],
outputs=[out_boxes, out_scores, out_indices])
#self.final_model.save('final_model.h5')
print('{} model, anchors, and classes loaded.'.format(model_path))
def detect_with_onnx(self, image):
if self.model_image_size != (None, None):
assert self.model_image_size[0] % 32 == 0, 'Multiples of 32 required'
assert self.model_image_size[1] % 32 == 0, 'Multiples of 32 required'
boxed_image = letterbox_image(image, tuple(reversed(self.model_image_size)))
else:
new_image_size = (image.width - (image.width % 32),
image.height - (image.height % 32))
boxed_image = letterbox_image(image, new_image_size)
image_data = np.array(boxed_image, dtype='float32')
image_data /= 255.
image_data = np.transpose(image_data, [2, 0, 1])
image_data = np.expand_dims(image_data, 0) # Add batch dimension.
feed_f = dict(zip(['input_1', 'image_shape'],
(image_data, np.array([image.size[1], image.size[0]], dtype='float32').reshape(1, 2))))
all_boxes, all_scores, indices = self.session.run(None, input_feed=feed_f)
out_boxes, out_scores, out_classes = [], [], []
for idx_ in indices:
out_classes.append(idx_[1])
out_scores.append(all_scores[tuple(idx_)])
idx_1 = (idx_[0], idx_[2])
out_boxes.append(all_boxes[idx_1])
font = ImageFont.truetype(font=self._get_data_path('font/FiraMono-Medium.otf'),
size=np.floor(3e-2 * image.size[1] + 0.5).astype('int32'))
thickness = (image.size[0] + image.size[1]) // 300
for i, c in reversed(list(enumerate(out_classes))):
predicted_class = self.class_names[c]
box = out_boxes[i]
score = out_scores[i]
label = '{} {:.2f}'.format(predicted_class, score)
draw = ImageDraw.Draw(image)
label_size = draw.textsize(label, font)
top, left, bottom, right = box
top = max(0, np.floor(top + 0.5).astype('int32'))
left = max(0, np.floor(left + 0.5).astype('int32'))
bottom = min(image.size[1], np.floor(bottom + 0.5).astype('int32'))
right = min(image.size[0], np.floor(right + 0.5).astype('int32'))
if top - label_size[1] >= 0:
text_origin = np.array([left, top - label_size[1]])
else:
text_origin = np.array([left, top + 1])
for i in range(thickness):
draw.rectangle(
[left + i, top + i, right - i, bottom - i],
outline=self.colors[c])
draw.rectangle(
[tuple(text_origin), tuple(text_origin + label_size)],
fill=self.colors[c])
draw.text(text_origin, label, fill=(0, 0, 0), font=font)
del draw
return image
def detect_img(yolo, img_url, model_file_name):
import onnxruntime
image = Image.open(img_url)
yolo.session = onnxruntime.InferenceSession(model_file_name)
r_image = yolo.detect_with_onnx(image)
n_ext = img_url.rindex('.')
score_file = img_url[0:n_ext] + '_score' + img_url[n_ext:]
r_image.save(score_file, "JPEG")
def convert_NMSLayer(scope, operator, container):
# type: (keras2onnx.common.InterimContext, keras2onnx.common.Operator, keras2onnx.common.OnnxObjectContainer) -> None
box_transpose = scope.get_unique_variable_name(operator.inputs[0].full_name + '_tx')
score_transpose = scope.get_unique_variable_name(operator.inputs[1].full_name + '_tx')
apply_identity(scope, operator.inputs[0].full_name, box_transpose, container)
apply_transpose(scope, operator.inputs[1].full_name, score_transpose, container, perm=[1, 0])
box_batch = scope.get_unique_variable_name(operator.inputs[0].full_name + '_btc')
score_batch = scope.get_unique_variable_name(operator.inputs[1].full_name + '_btc')
container.add_node("Unsqueeze", box_transpose,
box_batch, op_version=10, axes=[0])
container.add_node("Unsqueeze", score_transpose,
score_batch, op_version=10, axes=[0])
layer = operator.raw_operator # type: YOLONMSLayer
max_output_size = scope.get_unique_variable_name('max_output_size')
iou_threshold = scope.get_unique_variable_name('iou_threshold')
score_threshold = scope.get_unique_variable_name('layer.score_threshold')
container.add_initializer(max_output_size, onnx_proto.TensorProto.INT64,
[], [layer.max_boxes])
container.add_initializer(iou_threshold, onnx_proto.TensorProto.FLOAT,
[], [layer.iou_threshold])
container.add_initializer(score_threshold, onnx_proto.TensorProto.FLOAT,
[], [layer.score_threshold])
cast_name = scope.get_unique_variable_name('casted')
nms_node = next((nd_ for nd_ in operator.nodelist if nd_.type == 'NonMaxSuppressionV3'), operator.nodelist[0])
container.add_node("NonMaxSuppression",
[box_batch, score_batch, max_output_size, iou_threshold, score_threshold],
cast_name,
op_version=10,
name=nms_node.name)
apply_cast(scope, cast_name, operator.output_full_names[2], container, to=onnx_proto.TensorProto.INT32)
apply_identity(scope, box_batch, operator.output_full_names[0], container)
apply_identity(scope, score_batch, operator.output_full_names[1], container)
set_converter(YOLONMSLayer, convert_NMSLayer)
def convert_model(yolo, model_file_name, target_opset):
yolo.load_model()
onnxmodel = convert_keras(yolo.final_model, target_opset=target_opset, channel_first_inputs=['input_1'])
onnx.save_model(onnxmodel, model_file_name)
return onnxmodel
if __name__ == '__main__':
#if len(sys.argv) < 2:
# print("Need an image file for object detection.")
# exit(-1)
parser = argparse.ArgumentParser("convert")
parser.add_argument(
"--model_name",
type=str,
help="Name of the Model",
default="yolo.h5",
)
parser.add_argument(
"--model_data_path",
type=str,
help="Path to the model data folder"
)
args = parser.parse_args()
print("Name of the model: %s" % args.model_name)
print("Path to the model data folder on datastore: %s" % args.model_data_path)
model_path = args.model_name
model_data_path = args.model_data_path
model_file_name = 'model.onnx'
target_opset = 10
if not os.path.exists(model_file_name):
onnxmodel = convert_model(YOLO(model_path,model_data_path), model_file_name, target_opset)
#detect_img(YOLO(), sys.argv[1], model_file_name)

View File

@ -0,0 +1,272 @@
"""
Retrain the YOLO model for your own dataset.
"""
# more imports for os operations
import argparse
import shutil
import os
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping, Callback
from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
from yolo3.utils import get_random_data
from voc_annotation import write_annotation
from convert_yolov3_to_onnx import YOLO, convert_model
# Fix issue "AttributeError: module 'keras.backend' has no attribute 'control_flow_ops'", see https://github.com/keras-team/keras/issues/3857
import tensorflow as tf
K.control_flow_ops = tf
# Logging for azure ML
from azureml.core.run import Run
# Get run when running in remote
if 'run' not in locals():
run = Run.get_context()
def _main(model_name, release_id, model_path, fine_tune_epochs, unfrozen_epochs, learning_rate):
annotation_path = 'train.txt'
log_dir = 'logs/'
classes_path = model_path+'/classes.txt'
anchors_path = model_path+'/yolo_anchors.txt'
class_names = get_classes(classes_path)
num_classes = len(class_names)
anchors = get_anchors(anchors_path)
input_shape = (416,416) # multiple of 32, hw
is_tiny_version = len(anchors)==6 # default setting
if is_tiny_version:
model = create_tiny_model(input_shape, anchors, num_classes,
freeze_body=2, weights_path=model_path+'/tiny_yolo_weights.h5')
else:
model = create_model(input_shape, anchors, num_classes,
freeze_body=2, weights_path=model_path+'/yolo_weights.h5') # make sure you know what you freeze
# Define callbacks during training
logging = TensorBoard(log_dir=log_dir)
checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)
# Logging for Azure ML (send acc, loss, val_loss at the end of each epoch)
class LossHistory1(Callback):
def on_epoch_end(self, epoch, logs={}):
run.log('Loss_stage1', logs.get('loss'))
run.log('Val_Loss_stage1', logs.get('val_loss'))
class LossHistory2(Callback):
def on_epoch_end(self, epoch, logs={}):
run.log('Loss_stage2', logs.get('loss'))
run.log('Val_Loss_stage2', logs.get('val_loss'))
lossHistory1 = LossHistory1()
lossHistory2 = LossHistory2()
val_split = 0.2
with open(annotation_path) as f:
lines = f.readlines()
np.random.seed(10101)
np.random.shuffle(lines)
np.random.seed(None)
num_val = int(len(lines)*val_split)
num_train = len(lines) - num_val
# (Stage 1) Train with frozen layers first, to get a stable loss.
# Adjust num epochs to your dataset. This step is enough to obtain a not bad model.
if True:
model.compile(optimizer=Adam(lr=learning_rate), loss={
# use custom yolo_loss Lambda layer.
'yolo_loss': lambda y_true, y_pred: y_pred})
batch_size = 50
print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
steps_per_epoch=max(1, num_train//batch_size),
validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
validation_steps=max(1, num_val//batch_size),
epochs=fine_tune_epochs,
initial_epoch=0,
callbacks=[logging, checkpoint, lossHistory1])
model.save_weights(log_dir + model_name)
# (Stage 2) Unfreeze and continue training, to fine-tune.
# Train longer if the result is not good.
if True:
for i in range(len(model.layers)):
model.layers[i].trainable = True
model.compile(optimizer=Adam(lr=learning_rate), loss={'yolo_loss': lambda y_true, y_pred: y_pred}) # recompile to apply the change
print('Unfreeze all of the layers.')
batch_size = 4 # note that more GPU memory is required after unfreezing the body
print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))
model.fit_generator(data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
steps_per_epoch=max(1, num_train//batch_size),
validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
validation_steps=max(1, num_val//batch_size),
epochs=unfrozen_epochs,
initial_epoch=fine_tune_epochs,
callbacks=[logging, checkpoint, reduce_lr, early_stopping, lossHistory2])
model.save_weights(log_dir + model_name)
# Further training if needed...
# Add properties to identify this specific training run
run.add_properties({"release_id": release_id, "run_type": "train"})
print(f"added properties: {run.properties}")
try:
model_name_path = os.path.join(log_dir, model_name)
print(model_name_path)
new_model_name = os.path.splitext(model_name)[0] + '.onnx'  # splitext instead of rstrip('.h5'), which strips characters rather than the suffix
convert_model(YOLO(model_name_path, model_path), log_dir + new_model_name, 10)
new_model_path = os.path.join(log_dir, new_model_name)
print(new_model_path)
run.register_model(
model_name=new_model_name,
model_path=new_model_path,
properties={"release_id": release_id})
print("Registered new model!")
except Exception as e:
print(e)
print(run.get_file_names())
run.complete()
def get_classes(classes_path):
'''loads the classes'''
with open(classes_path) as f:
class_names = f.readlines()
class_names = [c.strip() for c in class_names]
return class_names
def get_anchors(anchors_path):
'''loads the anchors from a file'''
with open(anchors_path) as f:
anchors = f.readline()
anchors = [float(x) for x in anchors.split(',')]
return np.array(anchors).reshape(-1, 2)
def create_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
weights_path='model_data/yolo_weights.h5'):
'''create the training model'''
K.clear_session() # get a new session
image_input = Input(shape=(None, None, 3))
h, w = input_shape
num_anchors = len(anchors)
y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], \
num_anchors//3, num_classes+5)) for l in range(3)]
model_body = yolo_body(image_input, num_anchors//3, num_classes)
print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
if load_pretrained:
model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
print('Load weights {}.'.format(weights_path))
if freeze_body in [1, 2]:
# Freeze darknet53 body or freeze all but 3 output layers.
num = (185, len(model_body.layers)-3)[freeze_body-1]
for i in range(num): model_body.layers[i].trainable = False
print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5})(
[*model_body.output, *y_true])
model = Model([model_body.input, *y_true], model_loss)
return model
def create_tiny_model(input_shape, anchors, num_classes, load_pretrained=True, freeze_body=2,
weights_path='model_data/tiny_yolo_weights.h5'):
'''create the training model, for Tiny YOLOv3'''
K.clear_session() # get a new session
image_input = Input(shape=(None, None, 3))
h, w = input_shape
num_anchors = len(anchors)
y_true = [Input(shape=(h//{0:32, 1:16}[l], w//{0:32, 1:16}[l], \
num_anchors//2, num_classes+5)) for l in range(2)]
model_body = tiny_yolo_body(image_input, num_anchors//2, num_classes)
print('Create Tiny YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))
if load_pretrained:
model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
print('Load weights {}.'.format(weights_path))
if freeze_body in [1, 2]:
# Freeze the darknet body or freeze all but 2 output layers.
num = (20, len(model_body.layers)-2)[freeze_body-1]
for i in range(num): model_body.layers[i].trainable = False
print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))
model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.7})(
[*model_body.output, *y_true])
model = Model([model_body.input, *y_true], model_loss)
return model
def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
'''data generator for fit_generator'''
n = len(annotation_lines)
i = 0
while True:
image_data = []
box_data = []
for b in range(batch_size):
if i==0:
np.random.shuffle(annotation_lines)
image, box = get_random_data(annotation_lines[i], input_shape, random=True)
image_data.append(image)
box_data.append(box)
i = (i+1) % n
image_data = np.array(image_data)
box_data = np.array(box_data)
y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
yield [image_data, *y_true], np.zeros(batch_size)
def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes):
n = len(annotation_lines)
if n==0 or batch_size<=0: return None
return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--data_folder', type=str, help='Folder path for input data')
parser.add_argument('--model_path', type=str, help='Folder path for model files')
parser.add_argument('--chkpoint_folder', type=str, default='./logs', help='Folder path for checkpoint files')
parser.add_argument('--fine_tune_epochs', type=int, default=40, help='Number of epochs for fine-tuning')
parser.add_argument('--unfrozen_epochs', type=int, default=50, help='Final epoch for training')
parser.add_argument('--learning_rate', type=float, default=1e-4, help='Learning rate')
# Added for MLOps
parser.add_argument("--release_id", type=str, help="The ID of the release triggering this pipeline run")
parser.add_argument("--model_name", type=str, help="Name of the Model", default="yolo.h5",)
FLAGS, unparsed = parser.parse_known_args()
# Clean checkpoint folder if exists
if os.path.exists(FLAGS.chkpoint_folder) :
for file_name in os.listdir(FLAGS.chkpoint_folder):
file_path = os.path.join(FLAGS.chkpoint_folder, file_name)
if os.path.isfile(file_path):
os.remove(file_path)
elif os.path.isdir(file_path):
shutil.rmtree(file_path)
# Write annotation file
write_annotation(FLAGS.data_folder)
_main(FLAGS.model_name, FLAGS.release_id, FLAGS.model_path, FLAGS.fine_tune_epochs, FLAGS.unfrozen_epochs, FLAGS.learning_rate)

View File

@ -0,0 +1,39 @@
import xml.etree.ElementTree as ET
from os import getcwd
import shutil
sets=[('2007', 'train'), ('2007', 'val'), ('2007', 'test')]
classes = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
def convert_annotation(datapath, year, image_id, list_file):
in_file = open(datapath+'/VOC%s/Annotations/%s.xml'%(year, image_id))
tree=ET.parse(in_file)
root = tree.getroot()
for obj in root.iter('object'):
difficult = obj.find('difficult').text
cls = obj.find('name').text
if cls not in classes or int(difficult)==1:
continue
cls_id = classes.index(cls)
xmlbox = obj.find('bndbox')
b = (int(xmlbox.find('xmin').text), int(xmlbox.find('ymin').text), int(xmlbox.find('xmax').text), int(xmlbox.find('ymax').text))
list_file.write(" " + ",".join([str(a) for a in b]) + ',' + str(cls_id))
def write_annotation(data_path):
print(data_path)
for year, image_set in sets:
image_ids = open(data_path+'/VOC%s/ImageSets/Main/%s.txt'%(year, image_set)).read().strip().split()
list_file = open('%s_%s.txt'%(year, image_set), 'w')
for image_id in image_ids:
list_file.write(data_path+'/VOC%s/JPEGImages/%s.jpg'%(year, image_id))
convert_annotation(data_path, year, image_id, list_file)
list_file.write('\n')
list_file.close()
with open('train.txt','wb') as wfd:
for f in ['2007_train.txt','2007_val.txt','2007_test.txt']:
with open(f,'rb') as fd:
shutil.copyfileobj(fd, wfd)

View File

View File

@ -0,0 +1,412 @@
"""YOLO_v3 Model Defined in Keras."""
from functools import wraps
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Conv2D, Add, ZeroPadding2D, UpSampling2D, Concatenate, MaxPooling2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.regularizers import l2
from yolo3.utils import compose
@wraps(Conv2D)
def DarknetConv2D(*args, **kwargs):
"""Wrapper to set Darknet parameters for Convolution2D."""
darknet_conv_kwargs = {'kernel_regularizer': l2(5e-4)}
darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides')==(2,2) else 'same'
darknet_conv_kwargs.update(kwargs)
return Conv2D(*args, **darknet_conv_kwargs)
def DarknetConv2D_BN_Leaky(*args, **kwargs):
"""Darknet Convolution2D followed by BatchNormalization and LeakyReLU."""
no_bias_kwargs = {'use_bias': False}
no_bias_kwargs.update(kwargs)
return compose(
DarknetConv2D(*args, **no_bias_kwargs),
BatchNormalization(),
LeakyReLU(alpha=0.1))
def resblock_body(x, num_filters, num_blocks):
'''A series of resblocks starting with a downsampling Convolution2D'''
# Darknet uses left and top padding instead of 'same' mode
x = ZeroPadding2D(((1,0),(1,0)))(x)
x = DarknetConv2D_BN_Leaky(num_filters, (3,3), strides=(2,2))(x)
for i in range(num_blocks):
y = compose(
DarknetConv2D_BN_Leaky(num_filters//2, (1,1)),
DarknetConv2D_BN_Leaky(num_filters, (3,3)))(x)
x = Add()([x,y])
return x
def darknet_body(x):
'''Darknet body having 52 Convolution2D layers'''
x = DarknetConv2D_BN_Leaky(32, (3,3))(x)
x = resblock_body(x, 64, 1)
x = resblock_body(x, 128, 2)
x = resblock_body(x, 256, 8)
x = resblock_body(x, 512, 8)
x = resblock_body(x, 1024, 4)
return x
def make_last_layers(x, num_filters, out_filters):
'''6 Conv2D_BN_Leaky layers followed by a Conv2D_linear layer'''
x = compose(
DarknetConv2D_BN_Leaky(num_filters, (1,1)),
DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
DarknetConv2D_BN_Leaky(num_filters, (1,1)),
DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
DarknetConv2D_BN_Leaky(num_filters, (1,1)))(x)
y = compose(
DarknetConv2D_BN_Leaky(num_filters*2, (3,3)),
DarknetConv2D(out_filters, (1,1)))(x)
return x, y
def yolo_body(inputs, num_anchors, num_classes):
"""Create YOLO_V3 model CNN body in Keras."""
darknet = Model(inputs, darknet_body(inputs))
x, y1 = make_last_layers(darknet.output, 512, num_anchors*(num_classes+5))
x = compose(
DarknetConv2D_BN_Leaky(256, (1,1)),
UpSampling2D(2))(x)
x = Concatenate()([x,darknet.layers[152].output])
x, y2 = make_last_layers(x, 256, num_anchors*(num_classes+5))
x = compose(
DarknetConv2D_BN_Leaky(128, (1,1)),
UpSampling2D(2))(x)
x = Concatenate()([x,darknet.layers[92].output])
x, y3 = make_last_layers(x, 128, num_anchors*(num_classes+5))
return Model(inputs, [y1,y2,y3])
def tiny_yolo_body(inputs, num_anchors, num_classes):
'''Create Tiny YOLO_v3 model CNN body in keras.'''
x1 = compose(
DarknetConv2D_BN_Leaky(16, (3,3)),
MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
DarknetConv2D_BN_Leaky(32, (3,3)),
MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
DarknetConv2D_BN_Leaky(64, (3,3)),
MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
DarknetConv2D_BN_Leaky(128, (3,3)),
MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
DarknetConv2D_BN_Leaky(256, (3,3)))(inputs)
x2 = compose(
MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='same'),
DarknetConv2D_BN_Leaky(512, (3,3)),
MaxPooling2D(pool_size=(2,2), strides=(1,1), padding='same'),
DarknetConv2D_BN_Leaky(1024, (3,3)),
DarknetConv2D_BN_Leaky(256, (1,1)))(x1)
y1 = compose(
DarknetConv2D_BN_Leaky(512, (3,3)),
DarknetConv2D(num_anchors*(num_classes+5), (1,1)))(x2)
x2 = compose(
DarknetConv2D_BN_Leaky(128, (1,1)),
UpSampling2D(2))(x2)
y2 = compose(
Concatenate(),
DarknetConv2D_BN_Leaky(256, (3,3)),
DarknetConv2D(num_anchors*(num_classes+5), (1,1)))([x2,x1])
return Model(inputs, [y1,y2])
def yolo_head(feats, anchors, num_classes, input_shape, calc_loss=False):
"""Convert final layer features to bounding box parameters."""
num_anchors = len(anchors)
# Reshape to batch, height, width, num_anchors, box_params.
anchors_tensor = K.reshape(K.constant(anchors), [1, 1, 1, num_anchors, 2])
grid_shape = K.shape(feats)[1:3] # height, width
grid_y = K.tile(K.reshape(K.arange(0, stop=grid_shape[0]), [-1, 1, 1, 1]),
[1, grid_shape[1], 1, 1])
grid_x = K.tile(K.reshape(K.arange(0, stop=grid_shape[1]), [1, -1, 1, 1]),
[grid_shape[0], 1, 1, 1])
grid = K.concatenate([grid_x, grid_y])
grid = K.cast(grid, K.dtype(feats))
feats = K.reshape(
feats, [-1, grid_shape[0], grid_shape[1], num_anchors, num_classes + 5])
# Adjust predictions to each spatial grid point and anchor size.
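# box_xy: sigmoid offset within the cell plus the cell index, normalized by the grid size;
# box_wh: anchor size scaled by exp(raw prediction), normalized by the network input size.
# grid_shape and input_shape are (h, w), hence the [::-1] to get (w, h) order.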
box_xy = (K.sigmoid(feats[..., :2]) + grid) / K.cast(grid_shape[::-1], K.dtype(feats))
box_wh = K.exp(feats[..., 2:4]) * anchors_tensor / K.cast(input_shape[::-1], K.dtype(feats))
box_confidence = K.sigmoid(feats[..., 4:5])
box_class_probs = K.sigmoid(feats[..., 5:])
if calc_loss:
return grid, feats, box_xy, box_wh
return box_xy, box_wh, box_confidence, box_class_probs
def yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape):
'''Get corrected boxes'''
box_yx = box_xy[..., ::-1]
box_hw = box_wh[..., ::-1]
input_shape = K.cast(input_shape, K.dtype(box_yx))
image_shape = K.cast(image_shape, K.dtype(box_yx))
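# Undo the letterbox transform: new_shape is the area the image actually occupies inside the
# padded model input; offset and scale map predictions back to original image coordinates.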
new_shape = K.round(image_shape * K.min(input_shape/image_shape))
offset = (input_shape-new_shape)/2./input_shape
scale = input_shape/new_shape
box_yx = (box_yx - offset) * scale
box_hw *= scale
box_mins = box_yx - (box_hw / 2.)
box_maxes = box_yx + (box_hw / 2.)
boxes = K.concatenate([
box_mins[..., 0:1], # y_min
box_mins[..., 1:2], # x_min
box_maxes[..., 0:1], # y_max
box_maxes[..., 1:2] # x_max
])
# Scale boxes back to original image shape.
boxes *= K.concatenate([image_shape, image_shape])
return boxes
def yolo_boxes_and_scores(feats, anchors, num_classes, input_shape, image_shape):
'''Process Conv layer output'''
box_xy, box_wh, box_confidence, box_class_probs = yolo_head(feats,
anchors, num_classes, input_shape)
boxes = yolo_correct_boxes(box_xy, box_wh, input_shape, image_shape)
boxes = K.reshape(boxes, [-1, 4])
box_scores = box_confidence * box_class_probs
box_scores = K.reshape(box_scores, [-1, num_classes])
return boxes, box_scores
def yolo_eval(yolo_outputs,
anchors,
num_classes,
image_shape,
max_boxes=20,
score_threshold=.6,
iou_threshold=.5):
"""Evaluate YOLO model on given input and return filtered boxes."""
num_layers = len(yolo_outputs)
anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]] # default setting
input_shape = K.shape(yolo_outputs[0])[1:3] * 32
boxes = []
box_scores = []
for l in range(num_layers):
_boxes, _box_scores = yolo_boxes_and_scores(yolo_outputs[l],
anchors[anchor_mask[l]], num_classes, input_shape, image_shape)
boxes.append(_boxes)
box_scores.append(_box_scores)
boxes = K.concatenate(boxes, axis=0)
box_scores = K.concatenate(box_scores, axis=0)
mask = box_scores >= score_threshold
max_boxes_tensor = K.constant(max_boxes, dtype='int32')
boxes_ = []
scores_ = []
classes_ = []
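# Per-class post-processing: keep boxes above score_threshold, then run non-max suppression
# per class with at most max_boxes detections per class.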
for c in range(num_classes):
# TODO: use keras backend instead of tf.
class_boxes = tf.boolean_mask(boxes, mask[:, c])
class_box_scores = tf.boolean_mask(box_scores[:, c], mask[:, c])
nms_index = tf.image.non_max_suppression(
class_boxes, class_box_scores, max_boxes_tensor, iou_threshold=iou_threshold)
class_boxes = K.gather(class_boxes, nms_index)
class_box_scores = K.gather(class_box_scores, nms_index)
classes = K.ones_like(class_box_scores, 'int32') * c
boxes_.append(class_boxes)
scores_.append(class_box_scores)
classes_.append(classes)
boxes_ = K.concatenate(boxes_, axis=0)
scores_ = K.concatenate(scores_, axis=0)
classes_ = K.concatenate(classes_, axis=0)
return boxes_, scores_, classes_
def preprocess_true_boxes(true_boxes, input_shape, anchors, num_classes):
'''Preprocess true boxes to training input format
Parameters
----------
true_boxes: array, shape=(m, T, 5)
Absolute x_min, y_min, x_max, y_max, class_id relative to input_shape.
input_shape: array-like, hw, multiples of 32
anchors: array, shape=(N, 2), wh
num_classes: integer
Returns
-------
y_true: list of array, shape like yolo_outputs, xywh are relative values
'''
assert (true_boxes[..., 4]<num_classes).all(), 'class id must be less than num_classes'
num_layers = len(anchors)//3 # default setting
anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]]
true_boxes = np.array(true_boxes, dtype='float32')
input_shape = np.array(input_shape, dtype='int32')
boxes_xy = (true_boxes[..., 0:2] + true_boxes[..., 2:4]) // 2
boxes_wh = true_boxes[..., 2:4] - true_boxes[..., 0:2]
true_boxes[..., 0:2] = boxes_xy/input_shape[::-1]
true_boxes[..., 2:4] = boxes_wh/input_shape[::-1]
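# true_boxes now holds box centers and sizes normalized by the input (w, h);
# input_shape is (h, w), hence the [::-1].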
m = true_boxes.shape[0]
grid_shapes = [input_shape//{0:32, 1:16, 2:8}[l] for l in range(num_layers)]
y_true = [np.zeros((m,grid_shapes[l][0],grid_shapes[l][1],len(anchor_mask[l]),5+num_classes),
dtype='float32') for l in range(num_layers)]
# Expand dim to apply broadcasting.
anchors = np.expand_dims(anchors, 0)
anchor_maxes = anchors / 2.
anchor_mins = -anchor_maxes
valid_mask = boxes_wh[..., 0]>0
for b in range(m):
# Discard zero rows.
wh = boxes_wh[b, valid_mask[b]]
if len(wh)==0: continue
# Expand dim to apply broadcasting.
wh = np.expand_dims(wh, -2)
box_maxes = wh / 2.
box_mins = -box_maxes
intersect_mins = np.maximum(box_mins, anchor_mins)
intersect_maxes = np.minimum(box_maxes, anchor_maxes)
intersect_wh = np.maximum(intersect_maxes - intersect_mins, 0.)
intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
box_area = wh[..., 0] * wh[..., 1]
anchor_area = anchors[..., 0] * anchors[..., 1]
iou = intersect_area / (box_area + anchor_area - intersect_area)
# Find best anchor for each true box
best_anchor = np.argmax(iou, axis=-1)
for t, n in enumerate(best_anchor):
for l in range(num_layers):
if n in anchor_mask[l]:
i = np.floor(true_boxes[b,t,0]*grid_shapes[l][1]).astype('int32')
j = np.floor(true_boxes[b,t,1]*grid_shapes[l][0]).astype('int32')
k = anchor_mask[l].index(n)
c = true_boxes[b,t, 4].astype('int32')
y_true[l][b, j, i, k, 0:4] = true_boxes[b,t, 0:4]
y_true[l][b, j, i, k, 4] = 1
y_true[l][b, j, i, k, 5+c] = 1
return y_true
def box_iou(b1, b2):
'''Return iou tensor
Parameters
----------
b1: tensor, shape=(i1,...,iN, 4), xywh
b2: tensor, shape=(j, 4), xywh
Returns
-------
iou: tensor, shape=(i1,...,iN, j)
'''
# Expand dim to apply broadcasting.
b1 = K.expand_dims(b1, -2)
b1_xy = b1[..., :2]
b1_wh = b1[..., 2:4]
b1_wh_half = b1_wh/2.
b1_mins = b1_xy - b1_wh_half
b1_maxes = b1_xy + b1_wh_half
# Expand dim to apply broadcasting.
b2 = K.expand_dims(b2, 0)
b2_xy = b2[..., :2]
b2_wh = b2[..., 2:4]
b2_wh_half = b2_wh/2.
b2_mins = b2_xy - b2_wh_half
b2_maxes = b2_xy + b2_wh_half
intersect_mins = K.maximum(b1_mins, b2_mins)
intersect_maxes = K.minimum(b1_maxes, b2_maxes)
intersect_wh = K.maximum(intersect_maxes - intersect_mins, 0.)
intersect_area = intersect_wh[..., 0] * intersect_wh[..., 1]
b1_area = b1_wh[..., 0] * b1_wh[..., 1]
b2_area = b2_wh[..., 0] * b2_wh[..., 1]
iou = intersect_area / (b1_area + b2_area - intersect_area)
return iou
def yolo_loss(args, anchors, num_classes, ignore_thresh=.5, print_loss=False):
'''Return yolo_loss tensor
Parameters
----------
yolo_outputs: list of tensor, the output of yolo_body or tiny_yolo_body
y_true: list of array, the output of preprocess_true_boxes
anchors: array, shape=(N, 2), wh
num_classes: integer
ignore_thresh: float, the iou threshold whether to ignore object confidence loss
Returns
-------
loss: tensor, shape=(1,)
'''
num_layers = len(anchors)//3 # default setting
yolo_outputs = args[:num_layers]
y_true = args[num_layers:]
anchor_mask = [[6,7,8], [3,4,5], [0,1,2]] if num_layers==3 else [[3,4,5], [1,2,3]]
input_shape = K.cast(K.shape(yolo_outputs[0])[1:3] * 32, K.dtype(y_true[0]))
grid_shapes = [K.cast(K.shape(yolo_outputs[l])[1:3], K.dtype(y_true[0])) for l in range(num_layers)]
loss = 0
m = K.shape(yolo_outputs[0])[0] # batch size, tensor
mf = K.cast(m, K.dtype(yolo_outputs[0]))
for l in range(num_layers):
object_mask = y_true[l][..., 4:5]
true_class_probs = y_true[l][..., 5:]
grid, raw_pred, pred_xy, pred_wh = yolo_head(yolo_outputs[l],
anchors[anchor_mask[l]], num_classes, input_shape, calc_loss=True)
pred_box = K.concatenate([pred_xy, pred_wh])
# Darknet raw box to calculate loss.
raw_true_xy = y_true[l][..., :2]*grid_shapes[l][::-1] - grid
raw_true_wh = K.log(y_true[l][..., 2:4] / anchors[anchor_mask[l]] * input_shape[::-1])
raw_true_wh = K.switch(object_mask, raw_true_wh, K.zeros_like(raw_true_wh)) # avoid log(0)=-inf
box_loss_scale = 2 - y_true[l][...,2:3]*y_true[l][...,3:4]
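# box_loss_scale weights the coordinate losses by (2 - w*h) so small boxes are not drowned out by large ones.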
# Find ignore mask, iterate over each of batch.
ignore_mask = tf.TensorArray(K.dtype(y_true[0]), size=1, dynamic_size=True)
object_mask_bool = K.cast(object_mask, 'bool')
def loop_body(b, ignore_mask):
true_box = tf.boolean_mask(y_true[l][b,...,0:4], object_mask_bool[b,...,0])
iou = box_iou(pred_box[b], true_box)
best_iou = K.max(iou, axis=-1)
ignore_mask = ignore_mask.write(b, K.cast(best_iou<ignore_thresh, K.dtype(true_box)))
return b+1, ignore_mask
_, ignore_mask = K.control_flow_ops.while_loop(lambda b,*args: b<m, loop_body, [0, ignore_mask])
ignore_mask = ignore_mask.stack()
ignore_mask = K.expand_dims(ignore_mask, -1)
# K.binary_crossentropy is helpful to avoid exp overflow.
xy_loss = object_mask * box_loss_scale * K.binary_crossentropy(raw_true_xy, raw_pred[...,0:2], from_logits=True)
wh_loss = object_mask * box_loss_scale * 0.5 * K.square(raw_true_wh-raw_pred[...,2:4])
confidence_loss = object_mask * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True)+ \
(1-object_mask) * K.binary_crossentropy(object_mask, raw_pred[...,4:5], from_logits=True) * ignore_mask
class_loss = object_mask * K.binary_crossentropy(true_class_probs, raw_pred[...,5:], from_logits=True)
xy_loss = K.sum(xy_loss) / mf
wh_loss = K.sum(wh_loss) / mf
confidence_loss = K.sum(confidence_loss) / mf
class_loss = K.sum(class_loss) / mf
loss += xy_loss + wh_loss + confidence_loss + class_loss
if print_loss:
loss = tf.Print(loss, [loss, xy_loss, wh_loss, confidence_loss, class_loss, K.sum(ignore_mask)], message='loss: ')
return loss
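
For orientation, the sketch below shows how yolo_body and yolo_eval are typically wired together for inference under TensorFlow 1.x / Keras. The weights file, class count and anchor values are illustrative assumptions, not part of this module.

# Minimal inference sketch (assumes a trained 'yolo.h5' and standard YOLOv3 anchors).
import numpy as np
from keras import backend as K
from keras.layers import Input

anchors = np.array([[10, 13], [16, 30], [33, 23], [30, 61], [62, 45], [59, 119],
                    [116, 90], [156, 198], [373, 326]], dtype='float32')
num_classes = 20                                    # e.g. Pascal VOC
yolo_model = yolo_body(Input(shape=(None, None, 3)), len(anchors) // 3, num_classes)
yolo_model.load_weights('yolo.h5')                  # hypothetical trained weights file
input_image_shape = K.placeholder(shape=(2,))       # original (height, width) of the frame
boxes, scores, classes = yolo_eval(yolo_model.output, anchors, num_classes,
                                   input_image_shape, score_threshold=0.3)
# With image_data of shape (1, h, w, 3) scaled to [0, 1] (see letterbox_image in the utils file):
# out = K.get_session().run([boxes, scores, classes],
#                           feed_dict={yolo_model.input: image_data,
#                                      input_image_shape: [orig_h, orig_w],
#                                      K.learning_phase(): 0})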


@ -0,0 +1,121 @@
"""Miscellaneous utility functions."""
from functools import reduce
from PIL import Image
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
def compose(*funcs):
"""Compose arbitrarily many functions, evaluated left to right.
Reference: https://mathieularose.com/function-composition-in-python/
"""
# return lambda x: reduce(lambda v, f: f(v), funcs, x)
if funcs:
return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
else:
raise ValueError('Composition of empty sequence not supported.')
def letterbox_image(image, size):
'''resize image with unchanged aspect ratio using padding'''
iw, ih = image.size
w, h = size
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
image = image.resize((nw,nh), Image.BICUBIC)
new_image = Image.new('RGB', size, (128,128,128))
new_image.paste(image, ((w-nw)//2, (h-nh)//2))
return new_image
def rand(a=0, b=1):
return np.random.rand()*(b-a) + a
def get_random_data(annotation_line, input_shape, random=True, max_boxes=20, jitter=.3, hue=.1, sat=1.5, val=1.5, proc_img=True):
'''random preprocessing for real-time data augmentation'''
line = annotation_line.split()
image = Image.open(line[0])
iw, ih = image.size
h, w = input_shape
box = np.array([np.array(list(map(int,box.split(',')))) for box in line[1:]])
if not random:
# resize image
scale = min(w/iw, h/ih)
nw = int(iw*scale)
nh = int(ih*scale)
dx = (w-nw)//2
dy = (h-nh)//2
image_data=0
if proc_img:
image = image.resize((nw,nh), Image.BICUBIC)
new_image = Image.new('RGB', (w,h), (128,128,128))
new_image.paste(image, (dx, dy))
image_data = np.array(new_image)/255.
# correct boxes
box_data = np.zeros((max_boxes,5))
if len(box)>0:
np.random.shuffle(box)
if len(box)>max_boxes: box = box[:max_boxes]
box[:, [0,2]] = box[:, [0,2]]*scale + dx
box[:, [1,3]] = box[:, [1,3]]*scale + dy
box_data[:len(box)] = box
return image_data, box_data
# resize image
new_ar = w/h * rand(1-jitter,1+jitter)/rand(1-jitter,1+jitter)
scale = rand(.25, 2)
if new_ar < 1:
nh = int(scale*h)
nw = int(nh*new_ar)
else:
nw = int(scale*w)
nh = int(nw/new_ar)
image = image.resize((nw,nh), Image.BICUBIC)
# place image
dx = int(rand(0, w-nw))
dy = int(rand(0, h-nh))
new_image = Image.new('RGB', (w,h), (128,128,128))
new_image.paste(image, (dx, dy))
image = new_image
# flip image or not
flip = rand()<.5
if flip: image = image.transpose(Image.FLIP_LEFT_RIGHT)
# distort image
hue = rand(-hue, hue)
sat = rand(1, sat) if rand()<.5 else 1/rand(1, sat)
val = rand(1, val) if rand()<.5 else 1/rand(1, val)
x = rgb_to_hsv(np.array(image)/255.)
x[..., 0] += hue
x[..., 0][x[..., 0]>1] -= 1
x[..., 0][x[..., 0]<0] += 1
x[..., 1] *= sat
x[..., 2] *= val
x[x>1] = 1
x[x<0] = 0
image_data = hsv_to_rgb(x) # numpy array, 0 to 1
# correct boxes
box_data = np.zeros((max_boxes,5))
if len(box)>0:
np.random.shuffle(box)
box[:, [0,2]] = box[:, [0,2]]*nw/iw + dx
box[:, [1,3]] = box[:, [1,3]]*nh/ih + dy
if flip: box[:, [0,2]] = w - box[:, [2,0]]
box[:, 0:2][box[:, 0:2]<0] = 0
box[:, 2][box[:, 2]>w] = w
box[:, 3][box[:, 3]>h] = h
box_w = box[:, 2] - box[:, 0]
box_h = box[:, 3] - box[:, 1]
box = box[np.logical_and(box_w>1, box_h>1)] # discard invalid box
if len(box)>max_boxes: box = box[:max_boxes]
box_data[:len(box)] = box
return image_data, box_data
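
As a quick illustration, letterbox_image is typically used at inference time as below; the image path and the 416x416 input size are assumptions to adapt to your own model.

# Preprocess a single frame with letterbox_image (sketch; path and size are placeholders).
import numpy as np
from PIL import Image

frame = Image.open('example.jpg')                   # hypothetical input image
boxed = letterbox_image(frame, (416, 416))          # pad to the model input, keep aspect ratio
image_data = np.expand_dims(np.array(boxed, dtype='float32') / 255., 0)  # shape (1, 416, 416, 3)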


@ -0,0 +1,13 @@
FROM conda/miniconda3
LABEL org.label-schema.vendor = "Microsoft" \
org.label-schema.url = "https://hub.docker.com/r/microsoft/mlopspython" \
org.label-schema.vcs-url = "https://github.com/microsoft/MLOpsPython"
COPY environment_setup/requirements.txt /setup/
RUN apt-get update && apt-get install gcc -y && pip install --upgrade -r /setup/requirements.txt
CMD ["python"]


@ -0,0 +1,124 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"baseName": {
"type": "string",
"maxLength": 10,
"minLength": 3,
"metadata": {
"description": "The base name to use as prefix to create all the resources."
}
},
"location": {
"type": "string",
"defaultValue": "westeurope",
"allowedValues": [
"eastus",
"eastus2",
"southcentralus",
"southeastasia",
"westcentralus",
"westeurope",
"westus2",
"centralus"
],
"metadata": {
"description": "Specifies the location for all resources."
}
}
},
"variables": {
"amlWorkspaceName": "[concat(parameters('baseName'),'-AML-WS')]",
"storageAccountName": "[concat(toLower(parameters('baseName')), 'amlsa')]",
"storageAccountType": "Standard_LRS",
"keyVaultName": "[concat(parameters('baseName'),'-AML-KV')]",
"tenantId": "[subscription().tenantId]",
"applicationInsightsName": "[concat(parameters('baseName'),'-AML-AI')]",
"containerRegistryName": "[concat(toLower(parameters('baseName')),'amlcr')]"
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2018-07-01",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[variables('storageAccountType')]"
},
"kind": "StorageV2",
"properties": {
"encryption": {
"services": {
"blob": {
"enabled": true
},
"file": {
"enabled": true
}
},
"keySource": "Microsoft.Storage"
},
"supportsHttpsTrafficOnly": true
}
},
{
"type": "Microsoft.KeyVault/vaults",
"apiVersion": "2018-02-14",
"name": "[variables('keyVaultName')]",
"location": "[parameters('location')]",
"properties": {
"tenantId": "[variables('tenantId')]",
"sku": {
"name": "standard",
"family": "A"
},
"accessPolicies": []
}
},
{
"type": "Microsoft.Insights/components",
"apiVersion": "2015-05-01",
"name": "[variables('applicationInsightsName')]",
"location": "[if(or(equals(parameters('location'),'eastus2'),equals(parameters('location'),'westcentralus')),'southcentralus',parameters('location'))]",
"kind": "web",
"properties": {
"Application_Type": "web"
}
},
{
"type": "Microsoft.ContainerRegistry/registries",
"apiVersion": "2017-10-01",
"name": "[variables('containerRegistryName')]",
"location": "[parameters('location')]",
"sku": {
"name": "Standard"
},
"properties": {
"adminUserEnabled": true
}
},
{
"type": "Microsoft.MachineLearningServices/workspaces",
"apiVersion": "2018-11-19",
"name": "[variables('amlWorkspaceName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.KeyVault/vaults', variables('keyVaultName'))]",
"[resourceId('Microsoft.Insights/components', variables('applicationInsightsName'))]",
"[resourceId('Microsoft.ContainerRegistry/registries', variables('containerRegistryName'))]"
],
"identity": {
"type": "systemAssigned"
},
"properties": {
"friendlyName": "[variables('amlWorkspaceName')]",
"keyVault": "[resourceId('Microsoft.KeyVault/vaults',variables('keyVaultName'))]",
"applicationInsights": "[resourceId('Microsoft.Insights/components',variables('applicationInsightsName'))]",
"containerRegistry": "[resourceId('Microsoft.ContainerRegistry/registries',variables('containerRegistryName'))]",
"storageAccount": "[resourceId('Microsoft.Storage/storageAccounts/',variables('storageAccountName'))]"
}
}
]
}


@ -0,0 +1,29 @@
resources:
- repo: self
queue:
name: Hosted Ubuntu 1604
trigger:
branches:
include:
- master
paths:
include:
- MLOps/environment_setup/*
variables:
containerRegistry: $[coalesce(variables['acrServiceConnection'], 'acrconnection')]
imageName: $[coalesce(variables['agentImageName'], 'public/mlops/python')]
steps:
- task: Docker@2
displayName: Build and Push
inputs:
command: buildAndPush
containerRegistry: '$(containerRegistry)'
repository: '$(imageName)'
tags: 'latest'
buildContext: '$(Build.SourcesDirectory)/MLOps'
dockerFile: '$(Build.SourcesDirectory)/MLOps/environment_setup/Dockerfile'


@ -0,0 +1,45 @@
trigger:
branches:
include:
- master
paths:
include:
- MLOps/environment_setup/arm-templates/*
pool:
vmImage: 'ubuntu-latest'
variables:
- group: devopsforai-aml-vg
- group: iotedge-vg
steps:
- task: AzureResourceGroupDeployment@2
inputs:
azureSubscription: 'AzureResourceConnection'
action: 'Create Or Update Resource Group'
resourceGroupName: '$(BASE_NAME)-RG'
location: $(LOCATION)
templateLocation: 'Linked artifact'
csmFile: '$(Build.SourcesDirectory)/MLOps/environment_setup/arm-templates/cloud-environment.json'
overrideParameters: '-baseName $(BASE_NAME) -location $(LOCATION)'
deploymentMode: 'Incremental'
displayName: 'Deploy MLOps resources to Azure'
- task: AzureResourceGroupDeployment@2
displayName: 'Create Azure IoT Hub'
inputs:
azureSubscription: 'AzureResourceConnection'
resourceGroupName: '$(BASE_NAME)-RG'
location: $(LOCATION)
templateLocation: 'URL of the file'
csmFileLink: 'https://raw.githubusercontent.com/Azure-Samples/devops-iot-scripts/12d60bd513ead7c94aa1669e505083beaef8a480/arm-iothub.json'
overrideParameters: '-iotHubName $(IOTHUB_NAME) -iotHubSku "S1"'
- task: AzureCLI@1
displayName: 'Azure CLI: Create IoT Edge device'
inputs:
azureSubscription: 'AzureResourceConnection'
scriptLocation: inlineScript
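# The inline script below either reads the connection string of an existing IoT Edge device
# or, if it does not exist yet, creates the device, tags it with environment=dev, and finally
# exports the connection string as the CS_OUTPUT pipeline variable.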
inlineScript: '(az extension add --name azure-cli-iot-ext && TMP_OUTPUT="$(az iot hub device-identity show-connection-string --device-id $EDGE_DEVICE_ID --hub-name $IOTHUB_NAME)" && RE="\"connectionString\":\s?\"(.*)\"" && if [[ $TMP_OUTPUT =~ $RE ]]; then CS_OUTPUT=${BASH_REMATCH[1]} && echo "Got device connection string"; fi && echo "##vso[task.setvariable variable=CS_OUTPUT]${CS_OUTPUT}" ) || (az iot hub device-identity create --hub-name $IOTHUB_NAME --device-id $EDGE_DEVICE_ID --edge-enabled --output none && echo "Created Edge device" && sleep 5 && az iot hub device-twin update --device-id $EDGE_DEVICE_ID --hub-name $IOTHUB_NAME --set tags=''{"environment":"dev"}'' && echo "Set tag for device" && TMP_OUTPUT="$(az iot hub device-identity show-connection-string --device-id $EDGE_DEVICE_ID --hub-name $IOTHUB_NAME )" && RE="\"connectionString\":\s?\"(.*)\"" && if [[ $TMP_OUTPUT =~ $RE ]]; then CS_OUTPUT=${BASH_REMATCH[1]} && echo "Got device connection string: ${CS_OUTPUT}"; fi && echo "##vso[task.setvariable variable=CS_OUTPUT]${CS_OUTPUT}")'


@ -0,0 +1,25 @@
trigger:
branches:
include:
- master
paths:
include:
- MLOps/environment_setup/arm-templates/*
pool:
vmImage: 'ubuntu-latest'
variables:
- group: devopsforai-aml-vg
steps:
- task: AzureResourceGroupDeployment@2
inputs:
azureSubscription: 'AzureResourceConnection'
action: 'DeleteRG'
resourceGroupName: '$(BASE_NAME)-AML-RG'
location: $(LOCATION)
displayName: 'Delete resources in Azure'


@ -0,0 +1,31 @@
#!/bin/bash
# Copyright (C) Microsoft Corporation. All rights reserved.
#
# Microsoft Corporation (“Microsoft”) grants you a nonexclusive, perpetual,
# royalty-free right to use, copy, and modify the software code provided by us
# ('Software Code'). You may not sublicense the Software Code or any use of it
# (except to your affiliates and to vendors to perform work on your behalf)
# through distribution, network access, service agreement, lease, rental, or
# otherwise. This license does not purport to express any claim of ownership over
# data you may have shared with Microsoft in the creation of the Software Code.
# Unless applicable law gives you more rights, Microsoft reserves all other
# rights not expressly granted herein, whether by implication, estoppel or
# otherwise.
#
# THE SOFTWARE CODE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# MICROSOFT OR ITS LICENSORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
# BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER
# IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THE SOFTWARE CODE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
python --version
pip install azure-cli==2.0.46
pip install --upgrade azureml-sdk[cli]
pip install -r requirements.txt


@ -0,0 +1,7 @@
pytest==4.3.0
requests>=2.22
azureml-sdk>=1.0
python-dotenv>=0.10.3
flake8
flake8_formatter_junit_xml
azure-cli==2.0.71



@ -0,0 +1,103 @@
from azureml.pipeline.core.graph import PipelineParameter
from azureml.pipeline.steps import PythonScriptStep, EstimatorStep
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.core.runconfig import RunConfiguration, CondaDependencies
from azureml.core import Datastore
from azureml.core.dataset import Dataset
from azureml.train.dnn import TensorFlow
import os
import sys
from dotenv import load_dotenv
sys.path.append(os.path.abspath("./MLOps/ml_service/util")) # NOQA: E402
from workspace import get_workspace
from attach_compute import get_compute
def main():
load_dotenv()
workspace_name = os.environ.get("BASE_NAME")+"-AML-WS"
resource_group = os.environ.get("BASE_NAME")+"-AML-RG"
subscription_id = os.environ.get("SUBSCRIPTION_ID")
tenant_id = os.environ.get("TENANT_ID")
app_id = os.environ.get("SP_APP_ID")
app_secret = os.environ.get("SP_APP_SECRET")
sources_directory_train = os.environ.get("SOURCES_DIR_TRAIN")
train_script_path = os.environ.get("TRAIN_SCRIPT_PATH")
vm_size = os.environ.get("AML_COMPUTE_CLUSTER_CPU_SKU")
compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME")
model_name = os.environ.get("MODEL_NAME")
build_id = os.environ.get("BUILD_BUILDID")
pipeline_name = os.environ.get("TRAINING_PIPELINE_NAME")
data_path = os.environ.get("DATA_PATH_DATASTORE")
model_data_path = os.environ.get("MODEL_DATA_PATH_DATASTORE")
# Get Azure machine learning workspace
aml_workspace = get_workspace(
workspace_name,
resource_group,
subscription_id,
tenant_id,
app_id,
app_secret)
print(aml_workspace)
# Get Azure machine learning cluster
aml_compute = get_compute(
aml_workspace,
compute_name,
vm_size)
if aml_compute is not None:
print(aml_compute)
model_name = PipelineParameter(
name="model_name", default_value=model_name)
release_id = PipelineParameter(
name="release_id", default_value="0"
)
ds = aml_workspace.get_default_datastore()
dataref_folder = ds.path(data_path).as_mount()
model_dataref = ds.path(model_data_path).as_mount()
# NEED those two folders mounted on datastore and env variables specified in variable groups
#ds.upload(src_dir='./VOCdevkit', target_path='VOCdevkit', overwrite=True, show_progress=True)
#ds.upload(src_dir='./model_data', target_path='VOCmodel_data', overwrite=True, show_progress=True)
yoloEstimator = TensorFlow(source_directory=sources_directory_train+'/training',
compute_target=aml_compute,
entry_script=train_script_path,
pip_packages=['keras', 'pillow', 'matplotlib', 'onnxmltools', 'keras2onnx==1.5.1'], # recent versions of keras2onnx give conversion issues
use_gpu=True,
framework_version='1.13')
train_step = EstimatorStep(name="Train & Convert Model",
estimator=yoloEstimator,
estimator_entry_script_arguments=[
"--release_id", release_id,
"--model_name", model_name,
"--data_folder", dataref_folder,
"--model_path", model_dataref
],
runconfig_pipeline_params=None,
inputs=[dataref_folder, model_dataref],
compute_target=aml_compute,
allow_reuse=False)
print("Step Train & Convert created")
train_pipeline = Pipeline(workspace=aml_workspace, steps=[train_step])
train_pipeline.validate()
published_pipeline = train_pipeline.publish(
name=pipeline_name,
description="Model training/retraining pipeline",
version=build_id
)
print(f'Published pipeline: {published_pipeline.name}')
print(f'for build {published_pipeline.version}')
if __name__ == '__main__':
main()


@ -0,0 +1,62 @@
import os
from azureml.pipeline.core import PublishedPipeline
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication
from dotenv import load_dotenv
def main():
load_dotenv()
workspace_name = os.environ.get("BASE_NAME")+"-AML-WS"
resource_group = os.environ.get("BASE_NAME")+"-AML-RG"
subscription_id = os.environ.get("SUBSCRIPTION_ID")
tenant_id = os.environ.get("TENANT_ID")
experiment_name = os.environ.get("EXPERIMENT_NAME")
model_name = os.environ.get("MODEL_NAME")
app_id = os.environ.get('SP_APP_ID')
app_secret = os.environ.get('SP_APP_SECRET')
release_id = os.environ.get('RELEASE_RELEASEID')
build_id = os.environ.get('BUILD_BUILDID')
service_principal = ServicePrincipalAuthentication(
tenant_id=tenant_id,
service_principal_id=app_id,
service_principal_password=app_secret)
aml_workspace = Workspace.get(
name=workspace_name,
subscription_id=subscription_id,
resource_group=resource_group,
auth=service_principal
)
# Find the pipeline that was published by the specified build ID
pipelines = PublishedPipeline.list(aml_workspace)
matched_pipes = []
for p in pipelines:
if p.version == build_id:
matched_pipes.append(p)
if len(matched_pipes) > 1:
published_pipeline = None
raise Exception(f"Multiple active pipelines are published for build {build_id}.") # NOQA: E501
elif len(matched_pipes) == 0:
published_pipeline = None
raise KeyError(f"Unable to find a published pipeline for this build {build_id}") # NOQA: E501
else:
published_pipeline = matched_pipes[0]
pipeline_parameters = {"model_name": model_name, "release_id": release_id}
response = published_pipeline.submit(
aml_workspace,
experiment_name,
pipeline_parameters)
run_id = response.id
print("Pipeline run initiated ", run_id)
if __name__ == "__main__":
main()


@ -0,0 +1,47 @@
import os
from dotenv import load_dotenv
from azureml.core import Workspace
from azureml.core.compute import AmlCompute
from azureml.core.compute import ComputeTarget
from azureml.exceptions import ComputeTargetException
def get_compute(
workspace: Workspace,
compute_name: str,
vm_size: str
):
# Load the environment variables from .env in case this script
# is called outside an existing process
load_dotenv()
# Verify that cluster does not exist already
try:
if compute_name in workspace.compute_targets:
compute_target = workspace.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('Found existing compute target ' + compute_name
+ ' so using it.')
else:
compute_config = AmlCompute.provisioning_configuration(
vm_size=vm_size,
vm_priority=os.environ.get("AML_CLUSTER_PRIORITY",
'lowpriority'),
min_nodes=int(os.environ.get("AML_CLUSTER_MIN_NODES", 0)),
max_nodes=int(os.environ.get("AML_CLUSTER_MAX_NODES", 4)),
idle_seconds_before_scaledown="300"
# #Uncomment the below lines for VNet support
# vnet_resourcegroup_name=vnet_resourcegroup_name,
# vnet_name=vnet_name,
# subnet_name=subnet_name
)
compute_target = ComputeTarget.create(workspace, compute_name,
compute_config)
compute_target.wait_for_completion(
show_output=True,
min_node_count=None,
timeout_in_minutes=10)
return compute_target
except ComputeTargetException as e:
print(e)
print('An error occurred trying to provision compute.')
exit()


@ -0,0 +1,34 @@
from azureml.core import Workspace
from azureml.core.model import Model
from workspace import get_workspace
from dotenv import load_dotenv
import os
def main():
load_dotenv()
workspace_name = os.environ.get("BASE_NAME")+"-AML-WS"
resource_group = os.environ.get("BASE_NAME")+"-AML-RG"
subscription_id = os.environ.get("SUBSCRIPTION_ID")
tenant_id = os.environ.get("TENANT_ID")
app_id = os.environ.get("SP_APP_ID")
app_secret = os.environ.get("SP_APP_SECRET")
MODEL_NAME = os.environ.get('MODEL_NAME')
model_data_path = os.environ.get("MODEL_DATA_PATH_DATASTORE")
ws = get_workspace(
workspace_name,
resource_group,
subscription_id,
tenant_id,
app_id,
app_secret)
# Derive the registered ONNX model name from the Keras model name (e.g. 'yolo.h5' -> 'yolo.onnx').
modelName = os.path.splitext(MODEL_NAME)[0] + '.onnx'
model = Model(workspace=ws, name=modelName)
print(model)
model.download()
ds = ws.get_default_datastore()
print(ds)
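# Also download the model-data folder (class names, anchors, ...) from the workspace's default datastore.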
ds.download(target_path='.', prefix=model_data_path, show_progress=True)
if __name__ == '__main__':
main()


@ -0,0 +1,29 @@
import sys
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication
def get_workspace(
name: str,
resource_group: str,
subscription_id: str,
tenant_id: str,
app_id: str,
app_secret: str):
service_principal = ServicePrincipalAuthentication(
tenant_id=tenant_id,
service_principal_id=app_id,
service_principal_password=app_secret)
try:
aml_workspace = Workspace.get(
name=name,
subscription_id=subscription_id,
resource_group=resource_group,
auth=service_principal)
return aml_workspace
except Exception as caught_exception:
print("Error while retrieving Workspace...")
print(str(caught_exception))
sys.exit(1)


@ -1,10 +1,12 @@
# Bringing Computer Vision models to the Edge with Azure IoT Edge samples
# Bringing Computer Vision models to the Edge with Azure IoT Edge samples [UPDATED NOV 2019]
## Overview
This repository hosts the code samples for the [Bringing Computer Vision models to the Edge with Azure IoT Edge - A guide for developers and data scientists](https://github.com/microsoft/azure-iot-edge-cv-model-samples/tree/master/Documentation/Bringing%20Computer%20Vision%20models%20to%20the%20Intelligent%20Edge%20with%20Azure%20IoT%20Edge%20-%20A%20guide%20for%20developers%20and%20data%20scientists.pdf).
The objective of this guide is to walk you through an end-to-end AI object detection solution on a Raspberry Pi 3, built from a series of easily customizable modules. It is designed for developers as well as data scientists who want to put their AI models into practice on edge devices without focusing too much on deployment.
**[UPDATED NOV 2019] This new version features a fully operational MLOps implementation of the above IoT solution. Check out MLOps/ as well as the associated section in the updated guide.**
<p align="center"><img width="80%" src="https://github.com/microsoft/azure-iot-edge-cv-model-samples/blob/master/Documentation/overview.png" /></p>
From training the YOLOv3 object detection model to deploying it on the Raspberry Pi 3, you will get a broad overview of how to build an IoT device that runs computer vision models.
@ -12,6 +14,7 @@ From the training of the YOLOv3 object detection to the deployment on the Raspbe
## Contents
* [Azure ML Training](https://github.com/microsoft/azure-iot-edge-cv-model-samples/tree/master/Azure%20ML%20Training): contains a notebook to train the state-of-the-art object detection YOLOv3 based on this Keras implementation [repository](https://github.com/qqwweee/keras-yolo3) with Azure Machine Learning.
* [IoT](https://github.com/microsoft/azure-iot-edge-cv-model-samples/tree/master/IoT): contains the IoT solution presented in the guide as well as a notebook to quickly set up an Azure IoT Hub via az commands.
* [MLOps](https://github.com/microsoft/azure-iot-edge-cv-model-samples/tree/master/MLOps): contains an end-to-end MLOps implementation of the IoT solution above from the training of YOLOv3 on the VOC dataset to the deployment in a **dev-qa-prod** environment with Azure DevOps.
## Contributing