This commit is contained in:
Jim Bennett 2021-08-24 13:48:48 -07:00
Parent 62da80f1c3
Commit a8fd20925c
93 changed files with 1004 additions and 138 deletions

130
.gitignore vendored

@ -1,129 +1,3 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
__pycache__
.DS_Store

152
README.md

@ -1,14 +1,146 @@
# Project
> This repo has been populated by an initial template to help get you started. Please
> make sure to update the content to build a great experience for community-building.
As the maintainer of this project, please make a few updates:
- Improving this README.MD file to provide a great experience
- Updating SUPPORT.MD with content about this project's support experience
- Understanding the security reporting process in SECURITY.MD
- Remove this section from the README
# Scenario: The Mutt Matcher (IoT version)
According to the World Health Organization there are more than 200 million stray dogs worldwide. The American Society for the Prevention of Cruelty to Animals estimates over 3 million dogs enter their shelters annually - about 6 dogs per minute! Anything that can reduce the time and effort to take in strays can potentially help millions of dogs every year.
Different breeds have different needs, or react differently to people, so when a stray or lost dog is found, identifying the breed can be a great help.
![A Raspberry Pi with a camera](./images/mutt-matcher-device.png)
Your team has been asked by a fictional animal shelter to build a Mutt Matcher - a device to help determine the breed of a dog when it has been found. This will be an IoT (Internet of Things) device based around a Raspberry Pi with a camera. It will take a photo of the dog, then use an image classifier machine learning (ML) model to determine the breed, before uploading the results to a web-based IoT application.
This device will help workers and volunteers quickly detect the breed and make decisions on the best way to approach and care for the dog.
![An application dashboard showing the last detected breed as a German wirehaired pointer, as well as a pie chart of detected breeds](./images/iot-central-dashboard.png)
The animal shelter has provided [a set of images](./model-images) for a range of dog breeds to get you started. These can be used to train the ML model using a service called Custom Vision.
![Pictures of dogs](./images/dog-pictures.png)
## Prerequisites
Each team member will need an Azure account. With [Azure for Students](https://azure.microsoft.com/free/students/?WT.mc_id=academic-36256-jabenn), you can access $100 in free credit, and a large suite of free services!
Your team should be familiar with the following:
- Git and GitHub
- [Forking](https://docs.github.com/github/getting-started-with-github/quickstart/fork-a-repo) and [cloning](https://docs.github.com/github/creating-cloning-and-archiving-repositories/cloning-a-repository-from-github/cloning-a-repository) repositories
- [Python](https://channel9.msdn.com/Series/Intro-to-Python-Development?WT.mc_id=academic-36256-jabenn)
### Hardware
To complete this workshop fully, ideally you will need a [Raspberry Pi (model 3 or 4)](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/), and a camera. The camera can be a [Raspberry Pi Camera module](https://www.raspberrypi.org/products/camera-module-v2/), or a USB web cam.
> 💁 If you don't have a Raspberry Pi, you can run this workshop using a PC or Mac to simulate an IoT device, with either a built in or external webcam.
### Software
Each member of your team will also need the following software installed:
- [Git](https://git-scm.com/downloads)
- [Install git on macOS](https://git-scm.com/download/mac)
- [Install git on Windows](https://git-scm.com/download/win)
- [Install git on Linux](https://git-scm.com/download/linux)
- [Visual Studio Code](https://code.visualstudio.com/?WT.mc_id=academic-36256-jabenn)
## Resources
A series of resources will be provided to help your team determine the appropriate steps for completion. These should give your team enough information to achieve each goal.
These resources include:
- Appropriate links to documentation to learn more about the services you are using and how to do common tasks
- A pre-built application template for the cloud service part of your IoT application
- Full source code for your IoT device
If you get stuck, you can always ask a mentor for additional help.
## Exploring the application
![Icons for Custom Vision, IoT Central and Raspberry Pi](./images/app-icons.png)
The application your team will build will consist of 3 components:
- An image classifier running in the cloud using Microsoft Custom Vision
- An IoT application running in the cloud using Azure IoT Central
- A Raspberry Pi based IoT device with a camera
![The application flow described below](./images/app-flow.png)
When a dog breed needs to be detected:
1. A button on the IoT application is clicked
1. The IoT application sends a command to the IoT device to detect the breed
1. The IoT device captures an image using its camera
1. The image is sent to the image classifier ML model in the cloud to detect the breed
1. The results of the classification are sent back to the IoT device
1. The detected breed is sent from the IoT device to the IoT application
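In the provided device code, this flow is implemented as a command callback. Here is a minimal sketch, simplified from `code/app.py` in this repo (the full version also prints the detected breed):

```python
# Simplified from code/app.py: handle the DetectBreed command by taking a
# picture, classifying it, and sending the detected breed back as telemetry.
async def command_handler(command_name: str) -> int:
    if command_name != 'DetectBreed':
        return 404                                # unknown command
    image = CAMERA.take_picture()                 # Pi Camera module or USB web cam
    breed = CLASSIFIER.classify_image(image)      # Custom Vision image classifier
    await DEVICE_CLIENT.send_message(json.dumps({'Breed': breed}))
    return 200                                    # command handled successfully
```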
## Goals
Your team will set up the Pi, ML model and IoT application, then connect everything together by deploying code to the IoT device.
> 💁 Each goal below defines what you need to achieve, and points you to relevant online resources that will show you how the cloud services or tools work. The aim here is not to provide you with detailed steps to complete the task, but to allow you to explore the documentation and learn more about the services as you work out how to complete each goal.
1. [Set up your Raspberry Pi and camera](set-up-pi.md): You will need to set up a clean install of Raspberry Pi OS on your Pi and ensure all the required software is installed.
> 💻 If you are using a PC or Mac instead of a Pi, your team will need to [set this up instead](set-up-pc-mac.md).
1. [Train your ML model](train-model.md): Your team will need to train the ML model in the cloud using Microsoft Custom Vision. You can train and test this model using the images that have been provided by the animal shelter.
1. [Set up your IoT application](set-up-iot-central.md): Your team will set up an IoT application in the cloud using IoT Central, an IoT software-as-a-service (SaaS) platform. You will be provided with a pre-built application template to use.
1. [Deploy device code to your Pi](deploy-device-code.md): The code for the IoT device needs to be configured and deployed to the Raspberry Pi. You will then be able to test out your application.
> 💻 If you are using a PC or Mac instead of a Pi, your team will need to [run the device code locally](run-code-locally.md).
> 💁 The first 3 goals can be worked on concurrently, with different team members working on different steps. Once these 3 are completed, the final step can be worked on by the team.
## Validation
This workshop is designed to be a goal-oriented self-exploration of Azure and related technologies. Your team can validate some of the goals using the supplied validation scripts, and instructions are provided where relevant. Your team can then validate the final solution by using the IoT device to take a picture of one of the provided testing images and ensuring the correct result appears in the IoT application.
## Where do we go from here?
This project is designed as a potential seed for ideas and future development during your hackathon. Other hack ideas for similar IoT devices that use image classification include:
- Trash sorting into landfill, recycling, and compost.
- Identification of disease in plant leaves.
- Detecting skin cancer by classification of moles.
Improvements you could make to this device include:
- Adding hardware such as a button to take a photograph, instead of relying on the IoT application.
- Adding a screen or LCD display to the IoT device to show the breed.
- Migrating the image classifier to the edge to allow the device to run without connectivity using [Azure IoT Edge](https://docs.microsoft.com/azure/iot-edge/about-iot-edge?WT.mc_id=academic-36256-jabenn).
### Learn more
You can learn more about using Custom Vision to train image classifiers and object detectors using the following resources:
- [Custom Vision documentation](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/?WT.mc_id=academic-36256-jabenn)
- [Custom Vision modules on Microsoft Learn, a free, hands-on, self-guided learning platform](https://docs.microsoft.com/users/jimbobbennett/collections/qe2ehjny7z7zgd?WT.mc_id=academic-36256-jabenn)
You can learn more about Azure IoT Central using the following resources:
- [IoT Central documentation](https://docs.microsoft.com/azure/iot-central/?WT.mc_id=academic-36256-jabenn)
- [IoT Central modules on Microsoft Learn, a free, hands-on, self-guided learning platform](https://docs.microsoft.com/users/jimbobbennett/collections/o5w5c3eyre61x7?WT.mc_id=academic-36256-jabenn)
If you enjoy working with IoT, you can learn more using the following resource:
- [IoT for beginners, a 24-lesson curriculum all about IoT basics](https://github.com/microsoft/IoT-For-Beginners)
## Contributing
@ -26,8 +158,8 @@ contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additio
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.

6
code/.env Normal file

@ -0,0 +1,6 @@
ID_SCOPE=
DEVICE_ID=
PRIMARY_KEY=
CAMERA_TYPE=PiCamera
PREDICTION_URL=
PREDICTION_KEY=
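The provided code loads these settings with the `python-dotenv` package. A minimal sketch of how one value is read, matching what `camera.py` and the other classes do:

```python
# Minimal sketch: load_dotenv() reads the .env file into the environment,
# then each setting is looked up with os.environ.
import os
from dotenv import load_dotenv

load_dotenv()
camera_type = os.environ['CAMERA_TYPE']
print(camera_type)  # PiCamera
```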

60
code/app.py Normal file

@ -0,0 +1,60 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
Main application code for the Mutt Matcher hackathon workshop
"""
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
import asyncio
import json
from camera import Camera
from classifier import Classifier
from device_client import DeviceClient
CAMERA = Camera()
CLASSIFIER = Classifier()
DEVICE_CLIENT = DeviceClient()
# The main app loop that keeps the app alive
async def main() -> None:
"""The main loop
"""
# Connect the IoT device client
await DEVICE_CLIENT.connect_device()
# Define a callback that is called when a command is received from IoT Central
async def command_handler(command_name: str) -> int:
# Define a return status - 404 for not found unless the command name is one we know about
status = 404
# If the detect breed command is invoked, use the camera to detect the breed
if command_name == 'DetectBreed':
status = 200
# Take a picture
image = CAMERA.take_picture()
# Get the highest predted breed from the picture
breed = CLASSIFIER.classify_image(image)
print('Breed detected:', breed)
# Send the breed to IoT Central
telemetry = {'Breed': breed}
await DEVICE_CLIENT.send_message(json.dumps(telemetry))
return status
# Connect the command handler
DEVICE_CLIENT.on_command = command_handler
# Loop forever
while True:
await asyncio.sleep(60)
# Start the app running
asyncio.run(main())

63
code/camera.py Normal file

@ -0,0 +1,63 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
Camera class for capturing images from a Raspberry Pi Camera module, or a USB web cam
"""
from io import BytesIO
import os
import time
from dotenv import load_dotenv
# Allow PiCamera to fail as local versions won't have a camera
try:
from picamera import PiCamera
except:
pass
import cv2
CAMERA_ROTATION = 180
CAMERA_INDEX = 0
# pylint: disable=too-few-public-methods,no-member
class Camera:
"""Camera class for capturing images from a Raspberry Pi Camera module, or a USB web cam
"""
def __init__(self):
# Load the camera settings
load_dotenv()
self.__camera_type = os.environ['CAMERA_TYPE']
# pylint: disable=no-else-return
if self.__camera_type.lower() == 'picamera':
self.__camera = PiCamera()
self.__camera.resolution = (640, 480)
self.__camera.rotation = CAMERA_ROTATION
time.sleep(2)
else:
self.__camera = cv2.VideoCapture(CAMERA_INDEX)
# Take a picture
def take_picture(self) -> BytesIO:
"""Takes a picture from a Raspberry Pi Camera module, or a USB web cam
:return: The image as a byte stream
:rtype: BytesIO
"""
# pylint: disable=no-else-return
if self.__camera_type.lower() == 'picamera':
# If we are using the PiCamera, capture a jpeg directly into a BytesIO object
image = BytesIO()
self.__camera.capture(image, 'jpeg')
# Rewind the BytesIO and return it
image.seek(0)
return image
else:
# If we are using a USB webcam, capture an image using OpenCV
_, image = self.__camera.read()
# Encode the image as a JPEG into a byte buffer
_, buffer = cv2.imencode('.jpg', image)
# Copy the byte buffer into a BytesIO object and retturn it
return BytesIO(buffer)

55
code/classifier.py Normal file

@ -0,0 +1,55 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
Classifier class for sending images to a Custom Vision image classifier and getting the top
prediction
"""
from io import BytesIO
import os
from dotenv import load_dotenv
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
# pylint: disable=too-few-public-methods
class Classifier:
"""Classifier class for sending images to a Custom Vision image classifier and getting
the top prediction
"""
def __init__(self):
# Load the custom vision settings
load_dotenv()
prediction_url = os.environ['PREDICTION_URL']
prediction_key = os.environ['PREDICTION_KEY']
# Decompose the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
self.__project_id = parts[6]
self.__iteration_name = parts[9]
# Create the image classifier predictor
prediction_credentials = ApiKeyCredentials(in_headers={'Prediction-key': prediction_key})
self.__predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
def classify_image(self, image: BytesIO) -> str:
"""Classifies an image and returns the tag with the highest probability
:param image: Required. The image to classify
:type image: BytesIO
:return: The tag for the highest prediction
:rtype: str
"""
# Send the image to the classifier to get the predictions
results = self.__predictor.classify_image(self.__project_id, self.__iteration_name, image)
# The predictions come in order of probability, so the first is the best
best_prediction = results.predictions[0]
# print the predictions and find the one with the highest probability
print('Predictions:')
for prediction in results.predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
return best_prediction.tag_name

89
code/device_client.py Normal file

@ -0,0 +1,89 @@
"""
Copyright (c) Microsoft Corporation.
Licensed under the MIT license.
IoT device client class for connecting to Azure IoT Central
as an IoT device, receiving commands and sending telemetry
"""
import os
from dotenv import load_dotenv
from azure.iot.device.aio import IoTHubDeviceClient, ProvisioningDeviceClient
from azure.iot.device import MethodRequest, MethodResponse
class DeviceClient:
"""IoT device client class for connecting to Azure IoT Central
as an IoT device, receiving commands and sending telemetry
"""
def __init__(self):
# Load the connection details from IoT Central for the device
load_dotenv()
self.__id_scope = os.environ['ID_SCOPE']
self.__device_id = os.environ['DEVICE_ID']
self.__primary_key = os.environ['PRIMARY_KEY']
self.__device_client = None
self.__on_command = None
async def __method_request_handler(self, method_request: MethodRequest) -> None:
print('Command received:', method_request.name)
if self.__on_command is not None:
status = await self.__on_command(method_request.name)
else:
status = 404
# Send a response - all commands need a response
method_response = MethodResponse.create_from_method_request(method_request, status, {})
await self.__device_client.send_method_response(method_response)
async def connect_device(self) -> None:
"""Connects this device to IoT Central
"""
# Connect to the device provisioning service and request the connection details
# for the device
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host='global.azure-devices-provisioning.net',
registration_id=self.__device_id,
id_scope=self.__id_scope,
symmetric_key=self.__primary_key)
registration_result = await provisioning_device_client.register()
# Build the connection string - this is used to connect to IoT Central
conn_str = 'HostName=' + registration_result.registration_state.assigned_hub + \
';DeviceId=' + self.__device_id + \
';SharedAccessKey=' + self.__primary_key
# The device client object is used to interact with Azure IoT Central.
self.__device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)
# Connect the device client
print('Connecting')
await self.__device_client.connect()
print('Connected')
# Connect the command handler
self.__device_client.on_method_request_received = self.__method_request_handler
@property
def on_command(self):
"""A callback for when a command is received
:return: The callback to call
"""
return self.__on_command
@on_command.setter
def on_command(self, value):
"""A callback for when a command is received
:value: Required. The callback to call
"""
self.__on_command = value
async def send_message(self, message: str) -> None:
"""Sends a telemetry message to IoT Central
:param message: Required. The telemetry to send
:type message: str
"""
await self.__device_client.send_message(message)

5
code/requirements-linux.txt Normal file

@ -0,0 +1,5 @@
azure-cognitiveservices-vision-customvision==3.1.0
azure-iot-device==2.7.1
opencv-contrib-python==4.5.3.56
python-dotenv==0.19.0
requests==2.26.0

5
code/requirements-macos.txt Normal file

@ -0,0 +1,5 @@
azure-cognitiveservices-vision-customvision==3.1.0
azure-iot-device==2.7.1
opencv-contrib-python==4.5.3.56
python-dotenv==0.19.0
requests==2.26.0

5
code/requirements-windows.txt Normal file

@ -0,0 +1,5 @@
azure-cognitiveservices-vision-customvision==3.1.0
azure-iot-device==2.7.1
opencv-contrib-python==4.5.3.56
python-dotenv==0.19.0
requests==2.26.0

6
code/requirements.txt Normal file

@ -0,0 +1,6 @@
azure-cognitiveservices-vision-customvision==3.1.0
azure-iot-device==2.7.1
opencv-contrib-python==4.1.0.25
picamera==1.13
python-dotenv==0.19.0
requests==2.26.0

87
deploy-device-code.md Normal file

@ -0,0 +1,87 @@
# Goal 3: Deploy device code
Your team has trained an ML model and set up an IoT application. The final task for your team is to deploy code to your Raspberry Pi and test out your application.
> 💻 If you are using a PC or Mac instead of a Pi, your team can run the code locally.
## The code
The code has been provided for you in the [code](./code) folder, so make sure you clone this repo. This code is Python code that will connect to your IoT Central application and wait for the *Detect Breed* command. Once this command is received, it will take a picture using either the Raspberry Pi Camera module, or a USB web cam. The detected breed will be sent back to the IoT Central application and you will be able to see it on the dashboard.
The code has the following files:
| File | Description |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------- |
| .env | This file contains configuration for the app |
| app.py | This file contains the core application logic |
| camera.py | This file contains the `Camera` class that interacts with the Raspberry Pi Camera module or USB web cam to capture images |
| classifier.py | This file contains the `Classifier` class that uses the Custom Vision image classifier to classify an image from the camera |
| device_client.py | This file contains the `DeviceClient` class that connects to IoT Central, listens for commands and sends telemetry |
| requirements.txt | This file contains the Pip packages that are needed to run this code on a Raspberry Pi |
| requirements-linux.txt   | This file contains the Pip packages that are needed to run this code on Linux |
| requirements-macos.txt | This file contains the Pip packages that are needed to run this code on macOS |
| requirements-windows.txt | This file contains the Pip packages that are needed to run this code on Windows |
Take some time to read this code and understand what it does, particularly the `device_client.py` and `classifier.py` files. These use the `azure-iot-device` and `azure-cognitiveservices-vision-customvision` Pip packages respectively to work with the various cloud services.
The configuration for the code is in the `.env` file. You will need to set the following values:
| Value | Description |
| -------------- | ----------------------------------------------------------------------------------------------------- |
| ID_SCOPE | The value of the ID Scope from the connection dialog for your `mutt-matcher` device in IoT Central |
| DEVICE_ID | The value of the device ID from the connection dialog for your `mutt-matcher` device in IoT Central |
| PRIMARY_KEY | The value of the primary key from the connection dialog for your `mutt-matcher` device in IoT Central |
| CAMERA_TYPE | Set this to PiCamera if you are using the Raspberry Pi Camera module, otherwise set it to USB |
| PREDICTION_URL | The prediction URL for your published model iteration from Custom Vision |
| PREDICTION_KEY | The prediction key for your published model iteration from Custom Vision |
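As an aside, `classifier.py` derives the endpoint, project ID, and iteration name by splitting `PREDICTION_URL` on `/`. A minimal sketch, using a placeholder URL of the shape Custom Vision provides:

```python
# Minimal sketch of how classifier.py decomposes PREDICTION_URL. The URL below
# is a placeholder showing the expected shape, not a real endpoint.
prediction_url = ('https://example.cognitiveservices.azure.com/customvision/v3.0/'
                  'Prediction/00000000-0000-0000-0000-000000000000/'
                  'classify/iterations/Iteration1/image')

parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]  # https://example.cognitiveservices.azure.com
project_id = parts[6]             # the project GUID
iteration_name = parts[9]         # Iteration1
```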
You will need to install the Pip packages from one of the `requirements.txt` files:
```sh
pip3 install -r requirements.txt
```
- If you are using a Raspberry Pi, use `requirements.txt`
- If you are using Linux, use `requirements-linux.txt`
- If you are using macOS, use `requirements-macos.txt`
- If you are using Windows, use `requirements-windows.txt`
This code needs to be run with Python 3. Raspberry Pis come with Python 2 as well as Python 3, so make sure to use the `python3` command:
```sh
python3 app.py
```
> 💻 If you are using a PC or Mac instead of a Pi, you should [set up a virtual environment](https://docs.python.org/3/library/venv.html) and install the Pip packages inside that, as well as running code from the activated virtual environment.
## Success criteria
Your team will work together to deploy this code on your Mutt Matcher device. Your team will have achieved this goal when the following success criteria are met:
- All the code has been copied to the device.
- The `.env` file has been updated with the relevant values.
- Your code is running, and able to take a picture and detect a breed, with the results visible in your IoT Central app.
## Validation
Have a mentor check your device. They should be able to point the device at one of the training images loaded on a screen, select the *Detect Breed* button, then see the correct result in the IoT Central application.
## Tips
- You can copy code onto your device using VS Code. Connect to the Pi using the VS Code remote SSH extension, open a folder on your Pi, then drag the code into the VS Code explorer from your File Explorer or Finder. You will need all the files in the [code](./code) folder.
- If you are using the Raspberry Pi camera module, the code assumes you have this cable side up. If you want to have the camera cable side down, or sideways, you need to change the value of `CAMERA_ROTATION` in `camera.py` to the right value. `0` means the cable is at the bottom, `90` for the cable on one side, `270` for the cable on the other side.
- If you are using a USB web cam, the `CAMERA_INDEX` value in `camera.py` defines which camera is used. If you only have 1 camera attached, this should be 0. If you have multiple, you can change this value.
- You can validate the images taken from the Custom Vision portal. When images are classified, they appear in the *Predictions* tab in Custom Vision. You can use this to validate your camera is configured correctly.
- Unless you have a dog of the relevant breed handy, you can test the app out by loading one of the pictures from the [model-images/testing-images](./model-images/testing-images) and having that on your screen, then pointing the camera on the Mutt Matcher at your screen.
- The `requirements-macos.txt` file has been tested on an M1 Mac running macOS Big Sur 11.5.2. If you have any issues on other configurations, you may need to change the versions of the Pip packages installed.
- The `requirements-windows.txt` file has been tested on an Intel PC running Windows 10 21H1. If you have any issues on other configurations, you may need to change the versions of the Pip packages installed.
## Final result
Once your code is running, you should have a complete IoT application!

Binary files added in this commit (contents not shown):

images/app-flow.png (60 KiB)
images/app-icons.png (32 KiB)
images/azure-iot-central-logo.png (7.8 KiB)
images/custom-vision-detect-dog.png (445 KiB)
images/custom-vision-logo.png (12 KiB)
images/dog-pictures.png (608 KiB)
images/iot-central-dashboard.png (79 KiB)
images/mutt-matcher-device-new.png (61 KiB)
images/mutt-matcher-device.png (387 KiB)
images/pi-camera-ribbon-cable.png (297 KiB)
images/pi-camera-socket-ribbon-cable.png (402 KiB)
images/prediction-key-url.png (63 KiB)
images/published-iteration.png (147 KiB)
images/raspberry-pi-imager.png (226 KiB)
images/raspberry-pi-logo.png (26 KiB)
model-images/testing-images/american-staffordshire-terrier-10.jpg (385 KiB)
model-images/testing-images/american-staffordshire-terrier-9.jpg (265 KiB)
model-images/testing-images/australian-shepherd-10.jpg (577 KiB)
model-images/testing-images/australian-shepherd-9.jpg (500 KiB)
model-images/testing-images/buggle-10.jpg (301 KiB)
model-images/testing-images/buggle-9.jpg (302 KiB)
model-images/testing-images/german-wirehaired-pointer-10.jpg (817 KiB)
model-images/testing-images/german-wirehaired-pointer-9.jpg (533 KiB)
model-images/testing-images/shorkie-14.jpg (309 KiB)
model-images/testing-images/shorkie-15.jpg (252 KiB)
model-images/training-images/american-staffordshire-terrier-1.jpg (145 KiB)
model-images/training-images/american-staffordshire-terrier-2.jpg (233 KiB)
model-images/training-images/american-staffordshire-terrier-3.jpg (289 KiB)
model-images/training-images/american-staffordshire-terrier-4.jpg (282 KiB)
model-images/training-images/american-staffordshire-terrier-5.jpg (319 KiB)
model-images/training-images/american-staffordshire-terrier-6.jpg (399 KiB)
model-images/training-images/american-staffordshire-terrier-7.jpg (200 KiB)
model-images/training-images/american-staffordshire-terrier-8.jpg (220 KiB)
model-images/training-images/australian-shepherd-1.jpg (436 KiB)
model-images/training-images/australian-shepherd-2.jpg (347 KiB)
model-images/training-images/australian-shepherd-3.jpg (397 KiB)
model-images/training-images/australian-shepherd-4.jpg (352 KiB)
model-images/training-images/australian-shepherd-5.jpg (398 KiB)
model-images/training-images/australian-shepherd-6.jpg (248 KiB)
model-images/training-images/australian-shepherd-7.jpg (413 KiB)
model-images/training-images/australian-shepherd-8.jpg (382 KiB)
model-images/training-images/buggle-1.jpg (168 KiB)
model-images/training-images/buggle-2.jpg (251 KiB)
model-images/training-images/buggle-3.jpg (293 KiB)
model-images/training-images/buggle-4.jpg (301 KiB)
model-images/training-images/buggle-5.jpg (290 KiB)
model-images/training-images/buggle-6.jpg (291 KiB)
model-images/training-images/buggle-7.jpg (287 KiB)
model-images/training-images/buggle-8.jpg (295 KiB)
model-images/training-images/german-wirehaired-pointer-1.jpg (1.0 MiB)
model-images/training-images/german-wirehaired-pointer-2.jpg (493 KiB)
model-images/training-images/german-wirehaired-pointer-3.jpg (604 KiB)
model-images/training-images/german-wirehaired-pointer-4.jpg (604 KiB)
model-images/training-images/german-wirehaired-pointer-5.jpg (883 KiB)
model-images/training-images/german-wirehaired-pointer-6.jpg (838 KiB)
model-images/training-images/german-wirehaired-pointer-7.jpg (502 KiB)
model-images/training-images/german-wirehaired-pointer-8.jpg (749 KiB)
model-images/training-images/golden-doodle-1.jpg (326 KiB)
model-images/training-images/golden-doodle-2.jpg (167 KiB)
model-images/training-images/golden-doodle-3.jpg (167 KiB)
model-images/training-images/golden-doodle-4.jpg (489 KiB)
model-images/training-images/shorkie-1.jpg (305 KiB)
model-images/training-images/shorkie-10.jpg (309 KiB)
model-images/training-images/shorkie-11.jpg (239 KiB)
model-images/training-images/shorkie-12.jpg (407 KiB)
model-images/training-images/shorkie-13.jpg (359 KiB)
model-images/training-images/shorkie-2.jpg (526 KiB)
model-images/training-images/shorkie-3.jpg (269 KiB)
model-images/training-images/shorkie-4.jpg (439 KiB)
model-images/training-images/shorkie-5.jpg (282 KiB)
model-images/training-images/shorkie-6.jpg (263 KiB)
model-images/training-images/shorkie-7.jpg (359 KiB)
model-images/training-images/shorkie-8.jpg (330 KiB)
model-images/training-images/shorkie-9.jpg (347 KiB)
105
set-up-iot-central.md Normal file

@ -0,0 +1,105 @@
# Goal 2: Set up your IoT application
The Mutt Matcher is an IoT device, so it needs to connect to an IoT service in the cloud. IoT stands for *Internet of Things*, and involves *Things* that interact with the physical world, and *Internet* services to work with those things. The IoT service for the Mutt Matcher will control the IoT device - sending commands to the device to take a picture and classify it, storing the results of the classification, and making them available on a dashboard.
The goal of this section is to deploy an IoT Central application, and define a device in the application.
## The Azure Service
[![The IoT Central logo](./images/azure-iot-central-logo.png)](https://azure.microsoft.com/services/iot-central/?WT.mc_id=academic-36256-jabenn)
Azure IoT Central is an IoT software-as-a-service (SaaS) application. You can use it to create an IoT application in the cloud that can manage IoT devices, as well as communicate with those devices. You can use it to define *device templates* that specify the telemetry data an IoT device will send, and the commands you can send to that device to control it.
You can then register devices inside the application, ensuring you have control over which physical IoT devices can connect to your IoT application. You can also set up views and dashboards to visualize the data sent from your devices.
![An application dashboard showing the last detected breed as a German wirehaired pointer, as well as a pie chart of detected breeds](./images/iot-central-dashboard.png)
For this workshop, you will need an IoT application that has a device template for Mutt Matchers, defining a *command* that instructs the device to take a picture and detect the breed, and *telemetry* data that the device will send with the detected breed.
Your application will then need a dashboard to allow you to run the command, as well as see the last detected breed and a history of detected breeds.
Inside your application, you will need to register a device that represents your Raspberry Pi, and this device has connection details that will allow your Raspberry Pi to connect.
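The contract between the application and the device is deliberately small. As a minimal sketch, using the command and telemetry names from the provided device code (the breed value is just an example):

```python
import json

# The command the IoT application invokes on the device, and the telemetry
# message the device sends back after classifying the image.
COMMAND_NAME = 'DetectBreed'
telemetry = json.dumps({'Breed': 'german-wirehaired-pointer'})
print(COMMAND_NAME, '->', telemetry)
```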
## Application template
Rather than build your application and device template from scratch, you can use a pre-built template to help. Select the button below to deploy the application.
[![Deploy button](https://img.shields.io/badge/Deploy_the_IoT_Central_application-0078D4?style=for-the-badge&logo=microsoftazure)](https://aka.ms/mutt-matcher-iot-central-app)
Set your application's name to `Mutt Matcher`, set a unique URL, then select a pricing plan.
Once your application has been deployed, you will need to register a new device using the `mutt-matcher` device template that is already defined in the application. You can use this device registration to get connection details for your Mutt Matcher.
When you view the device, you will see a dashboard with a *Detect Breed* button, the last detected breed, and a pie chart of all the detected breeds. When you select the *Detect Breed* button, it will take you to the *Command* tab in the dashboard where you can run this command.
## Success criteria
Your team will work together to create the IoT application and set up your Mutt Matcher device. Your team will have achieved this goal when the following success criteria are met:
- Your IoT Central application is deployed.
- You have created a device using the `mutt-matcher` device template.
- You have the connection details needed for your IoT device to connect as the created device in the IoT Central application.
## Validation
You can validate that your IoT application has been set up correctly using a Python script inside this repo.
1. You will need an API key for your IoT Central application. To get one, follow the instructions in the [Get an API token documentation](https://docs.microsoft.com/azure/iot-central/core/howto-authorize-rest-api?WT.mc_id=academic-36256-jabenn#get-an-api-token).
1. From wherever you cloned this repo, navigate to the `validation` folder.
1. Create and activate a Python virtual environment. If you've not done this before, you can refer to the [Python creation of virtual environments documentation](https://docs.python.org/3/library/venv.html).
1. Install the Pip packages in the `requirements.txt` file using the following command from inside the activated virtual environment:
```sh
pip install -r requirements.txt
```
1. Run the validation script using the following command:
```sh
python validate-iot-application.py
```
1. When prompted, enter the API token, and the URL of your application.
This validation script will connect to your IoT Central application as the registered device, invoke the *Detect Breed* command, and check the telemetry that is sent back. You will see output like the following:
```output
(.venv) ➜ validation git:(main) ✗ python3 validate-iot-application.py
IoT application validation
What is your IoT application API token?
SharedAccessSignature sr=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=Validation&se=1660961122741
What is your application URL?
https://mutt-matcher.azureiotcentral.com/
Connecting as device mutt-matcher
Connected
Command received: DetectBreed
Last detected breed: american-staffordshire-terrier
Validation passed!
```
The validation tool will listen for the command and send back a dog breed. After you run this, you will see a value for the last detected breed and data in the pie chart on the `mutt-matcher` device dashboard.
## Resources
Your team might find these resources helpful:
- [Use an application template guide in the IoT Central documentation](https://docs.microsoft.com/azure/iot-central/core/howto-create-iot-central-application?WT.mc_id=academic-36256-jabenn#use-an-application-template)
- [Add a device guide in the IoT Central documentation](https://docs.microsoft.com/azure/iot-central/core/howto-manage-devices-individually?WT.mc_id=academic-36256-jabenn#add-a-device)
- [Register a single device in advance guide in the IoT Central documentation](https://docs.microsoft.com/azure/iot-central/core/concepts-get-connected?WT.mc_id=academic-36256-jabenn#register-a-single-device-in-advance)
## Tips
- Make sure you select the `mutt-matcher` device template when registering your new device.
- You will only be using one device, so it doesn't matter which pricing plan you use, they will all be free for 1 device.
- The application URL needs to be globally unique, so don't use `mutt-matcher`!
## Final result
![A mutt matcher device in IoT Central with no data](./images/mutt-matcher-device-new.png)
## Next challenge
The next goal is to [deploy your code to your IoT device](./deploy-device-code.md).

15
set-up-pc-mac.md Normal file

@ -0,0 +1,15 @@
# Goal 0: Set up your PC or Mac
If you don't have a Raspberry Pi to use, you can simulate one using your PC or Mac, with a built in or external web cam.
## Configure your OS
The code will use [OpenCV](https://opencv.org) to capture images from your camera, so you will need to install this.
To install OpenCV, you can follow the instructions for your OS in the [Introduction to OpenCV documentation](https://docs.opencv.org/4.5.3/df/d65/tutorial_table_of_content_introduction.html).
The code for the IoT device is Python, so you will need to have a recent version of Python installed. You can install this from the [Python downloads page](https://www.python.org/downloads/) if you don't already have it installed.
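Once Python and OpenCV are installed, you can sanity-check that Python can see your camera with a minimal sketch like the following (camera index 0 assumes the default camera, as in the provided `camera.py`):

```python
# Minimal webcam check: grab one frame with OpenCV, as the provided camera.py
# does for USB web cams. Index 0 is the default camera; change it if needed.
import cv2

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
camera.release()

if ok:
    print('Camera OK - captured a frame of size', frame.shape)
else:
    print('Could not read from the camera - check your install and camera index')
```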
## Next challenge
Once your PC or Mac is ready, it's time to [train the ML model](./train-model.md).

89
set-up-pi.md Normal file

@ -0,0 +1,89 @@
# Goal 0: Set up your Pi
The goal here is to set up your Raspberry Pi, ready to use as an IoT device.
> ⏱ This will take a while, but most of it is hands-off time waiting for SD cards to be imaged or updates to be installed, so it can be done concurrently with the other goals in this workshop.
## Raspberry Pi
![The Raspberry Pi logo](./images/raspberry-pi-logo.png)
## Flash an SD card
Raspberry Pis use SD cards for their file system, so before you can use a Pi you will need to flash Raspberry Pi OS onto an SD card.
> 💁 It is recommended to use a clean OS to ensure you don't get errors caused by software issues
You will need to install either Raspberry Pi OS (which contains a full desktop environment), or Raspberry Pi OS Lite. The Lite version is preferred as you won't need the desktop, and it installs and boots faster.
> 🐧 Raspberry Pi OS is a variant of Debian Linux
To install Raspberry Pi OS on an SD Card, you will need to use the Raspberry Pi imager. You can download the imager from the [Raspberry Pi downloads page](https://www.raspberrypi.org/software/).
![The imager choosing Raspberry Pi OS Lite](./images/raspberry-pi-imager.png)
Run the imager, select the Raspberry Pi OS Lite OS, and select your SD card. Don't select the write button yet!
Before you write the image, you will need to configure a few things, such as the WiFi details that the Pi will use to connect. These need to be configured up front as you won't be connecting your Pi to a keyboard/mouse/monitor; instead you will be running it *headless*. The Pi will need to be on the same WiFi network as at least one team member's computer.
> 💁 If you have any issues with the hackathon WiFi, a popular fallback is to connect the Pi and one computer to a mobile hot-spot or tether to a phone.
Follow the instructions in the [Raspberry Pi Imager blog post](https://www.raspberrypi.org/blog/raspberry-pi-imager-update-to-v1-6/) to launch the advanced settings. This settings dialog is launched by pressing `Ctrl+Shift+x`.
Set up the WiFi details on the Pi, ensure *Enable SSH* is set, set a password, and set a unique hostname. You will connect to the Pi via this hostname, so it will need to be unique - if other hackers at this event have the same Pi name you won't be able to log into your Pi. You can leave the username as `pi`.
> 💁 If the WiFi you are using has a captive portal, or other restrictions beyond a simple SSID/password combination then you will need to install the full Raspberry Pi OS (not Lite) and connect your Pi to a keyboard/monitor/mouse to configure the WiFi.
Once you have configured these options, write the image to the SD card.
> ⏱ This will take a few minutes, so you can work on the next goal whilst this is happening.
## Hardware setup
When the write has completed, insert the SD card into your Pi.
![The camera cable connected to the Pi](./images/pi-camera-socket-ribbon-cable.png)
![The camera cable connected to the Pi Camera module](./images/pi-camera-ribbon-cable.png)
If you are using a Raspberry Pi Camera module, insert the camera into the camera socket using the ribbon cable. You can find instructions on doing this in the [Raspberry Pi Getting started with the Camera Module documentation](https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/2). If you are using a USB web cam, connect it to one of the USB ports on the Pi.
Connect your Pi to a power supply and turn the power on.
## Configure the Pi OS
Once the Pi has booted, you should be able to SSH into the Pi using the following command:
```sh
ssh pi@<hostname>.local
```
Replace `<hostname>` with the hostname you chose for your Pi. When prompted for a password, enter the password you set in the imager.
> 💁 Most modern computers running a recent version of Linux or Windows should be able to find devices on your local network using the `<hostname>.local` syntax. macOS supports this out of the box. If you are using Linux or Windows and have issues, there are steps to make this work in [this set of steps in IoT for beginners from Microsoft](https://github.com/microsoft/IoT-For-Beginners/blob/main/1-getting-started/lessons/1-introduction-to-iot/pi.md#task---connect-to-the-pi).
Once this is working, you will be able to use your Pi.
> 💁 If you have any issues connecting, you can install the full Raspberry Pi OS (not Lite) and connect your Pi to a keyboard/monitor/mouse to configure the WiFi, and get the IP address of the Pi, then connect using that.
You will need to update your Pi, and install some software that interacts with cameras. Run the following code on the Pi:
```sh
sudo apt update && sudo apt full-upgrade --yes
sudo apt install --yes libatlas-base-dev libhdf5-dev libhdf5-serial-dev libjasper-dev libqtgui4 libqt4-test
sudo raspi-config nonint do_camera 0
sudo reboot
```
These commands will update all the software on the Pi, install some libraries for interacting with images, turn the Pi's camera port on, then reboot the Pi. Once the install is finished, your SSH session will end as the Pi reboots.
> ⏱ This will take a few minutes, so you can work on the next goal whilst this is happening.
## Connect using VS Code
To make it easier to program the Pi, including copying source code onto the device, you can use VS Code. You can connect VS Code remotely to the Pi over SSH, and work as if you were running locally.
Follow the instructions in the [VS Code remote development using SSH documentation](https://code.visualstudio.com/docs/remote/ssh?WT.mc_id=academic-36256-jabenn) to connect VS Code to your Pi.
## Next challenge
Once your Pi is ready, it's time to [train the ML model](./train-model.md).

107
train-model.md Normal file

@ -0,0 +1,107 @@
# Goal 1: Train your ML model
The Mutt Matcher will need to use a machine learning (ML) model to be able to detect different dog breeds in images. The ML model used is an image classifier - a model that can classify an image into different tags depending on what is in the image. These models are trained by giving them tagged images - multiple images for each tag. For example, you can train a model with images tagged as `cat`, and images tagged as `dog`. The model can then be given a new image, and it will predict whether the image is a `cat` or a `dog`.
A typical image classifier needs many thousands, if not millions, of images to train, but you can take a shortcut using a technique called *transfer learning*. Transfer learning allows you to take an image classifier trained on a wide range of tags, then re-train it using a small number of images for your tags.
Your team will use a cloud service that uses transfer learning to train an ML model to identify different dog breeds, using images provided by the fictional animal shelter. The training images consist of a number of images across a range of breeds. You will use these to train the model, using the breed as the tag.
> ⏱ Training a model can take a while, so you can work on this concurrently with the other goals in the workshop.
## The Azure Service
[![The custom vision logo](./images/custom-vision-logo.png)](https://customvision.ai?WT.mc_id=academic-36256-jabenn)
The transfer learning service you will use is [Custom Vision](https://customvision.ai?WT.mc_id=academic-36256-jabenn). This is a service from Microsoft that can train image classifiers using only a small number of images for each tag. Once your model has been trained, it can be published to be run in the cloud, using one of the Custom Vision SDKs for programming languages such as Python, Java or C#, or via a REST API. You can also download your model and run it locally on an IoT device, a web page, or in an application.
![An american staffordshire terrier detected with an 89.5% probability](./images/custom-vision-detect-dog.png)
Image classifiers don't give a single fixed answer for the detected tag; instead they provide a list of all the tags that the model has been trained on, with the probability that the image matches each tag. In the image above, the results show values against each tag:
| Tag | Probability |
| ------------------------------ | ----------: |
| american-staffordshire-terrier | 69.8% |
| german-wirehaired-pointer | 28.2% |
| buggle | 1.6% |
| shorkie | 0.2% |
| australian-shepherd | 0.0% |
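The provided device code and the validation script pick the top prediction using the Custom Vision SDK. A minimal sketch - the key, endpoint, project ID, and iteration name are placeholders for your own published model:

```python
# Minimal sketch of classifying one image against a published iteration and
# taking the top prediction, as the provided code does. All IDs are placeholders.
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

credentials = ApiKeyCredentials(in_headers={'Prediction-key': '<your-prediction-key>'})
predictor = CustomVisionPredictionClient('https://<your-endpoint>.cognitiveservices.azure.com', credentials)

with open('model-images/testing-images/buggle-9.jpg', 'rb') as image:
    results = predictor.classify_image('<project-id>', '<iteration-name>', image)

# Predictions are returned ordered by probability, highest first
print('Best match:', results.predictions[0].tag_name)
```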
To use Custom Vision, you will need an [Azure for Students subscription](https://azure.microsoft.com/free/students/?WT.mc_id=academic-36256-jabenn). Custom Vision has a generous free tier, so this workshop will not use any of your Azure credit.
## Image files
The images you can use to train your model are in the [model-images](./model-images) folder. You will need to clone this GitHub repo (or download it as a zip file) to access these images. The images are in 2 different folders:
- [training-images](./model-images/training-images) - these are the images to use to train the model. The filename contains the tag for the image, along with a number to make the files uniquely named. For example, `american-staffordshire-terrier-1.jpg`, `american-staffordshire-terrier-2.jpg`, and `american-staffordshire-terrier-3.jpg` are all images for the tag `american-staffordshire-terrier`.
- [testing-images](./model-images/testing-images) - these are images you can use to test the image classifier once it is trained. These are different from the training images.
> ⚠️ It is important that when you tag these images, you use the tag from the file name.
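The tag for each file can be recovered programmatically by trimming everything from the last `-` in the file name; this is exactly what the validation script does. A minimal sketch:

```python
# How the expected tag is derived from an image file name
# (see validate-model.py in the validation folder).
name = 'american-staffordshire-terrier-3.jpg'
tag = name[:name.rfind('-')]
print(tag)  # american-staffordshire-terrier
```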
## Success criteria
Your team will work together to train the ML model using the training images, then test it with the testing images. Your team will have achieved this goal when the following success criteria are met:
- Your model has been trained on all the images in the `training-images` folder.
- You have verified that the `testing-images` have the correct tags detected as the highest probability tag using the *Quick test* button.
- Your model has a published iteration.
## Validation
You can validate your model using a Python script inside this repo.
1. From wherever you cloned this repo, navigate to the `validation` folder.
1. Create and activate a Python 3 virtual environment. If you've not done this before, you can refer to the [Python creation of virtual environments documentation](https://docs.python.org/3/library/venv.html).
1. Install the Pip packages in the `requirements.txt` file using the following command from inside the activated virtual environment:
```sh
pip install -r requirements.txt
```
1. Run the validation script using the following command:
```sh
python validate-model.py
```
1. When prompted, enter the prediction key and the image file URL for your published model iteration. You can get these from the prediction API dialog, opened with the **Prediction URL** button of the published iteration - use the *Image file* URL and prediction key.
![The prediction key and url dialog](./images/prediction-key-url.png)
This validation script will take the testing images, and test them against the model to ensure the correct tag is found as the most probable. You will see output like the following:
```output
(.venv) ➜ validation git:(main) ✗ python validate-model.py
ML model validation
What is your prediction key?
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
What is your prediction URL?
https://xxxxxxx.cognitiveservices.azure.com/customvision/v3.0/Prediction/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx/classify/iterations/Iteration1/image
..........
Validation passed!
```
## Resources
Your team might find these resources helpful:
- [Quickstart: Build a classifier with the Custom Vision website](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier?WT.mc_id=academic-36256-jabenn)
- [Use your model with the prediction API](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/use-prediction-api?WT.mc_id=academic-36256-jabenn)
- [Classify images learning module on Microsoft Learn](https://docs.microsoft.com/learn/modules/classify-images/?WT.mc_id=academic-36256-jabenn)
## Tips
- When you create the training and prediction resource, use the **Free F0** tier, as this is free to use!
- You **MUST** set the tags to be the breed name from the image file name. The validation script assumes this is the case, and the IoT application you will set up in the next goal relies on these names matching.
- When you train, use *Quick training*.
- The training can take a while - even with quick training, so whilst your model is training, work on the other goals.
## Final result
![A published iteration of the model](./images/published-iteration.png)
## Next challenge
The next goal is to [deploy an IoT application to the cloud and configure an IoT device](./set-up-iot-central.md).

3
validation/requirements.txt Normal file

@ -0,0 +1,3 @@
azure-cognitiveservices-vision-customvision
azure-iot-device
requests

113
validation/validate-iot-application.py Normal file

@ -0,0 +1,113 @@
import asyncio
import json

import requests
from azure.iot.device.aio import IoTHubDeviceClient, ProvisioningDeviceClient
from azure.iot.device import MethodRequest, MethodResponse

print('IoT application validation')

# Ask for the API token and URL
print('What is your IoT application API token?')
api_token = input().strip()
print('What is your application URL?')
url = input().strip()

# Get all the devices
headers = {
    'Authorization': api_token
}

url = url.lower().replace('https://', '')
url = url.split('/')[0]
if url.find('.azureiotcentral.com') == -1:
    url += '.azureiotcentral.com'

devices_url = f'https://{url}/api/devices?api-version=1.0'
response = requests.get(devices_url, headers=headers)

# Get the first device
device_id = response.json()['value'][0]['id']

# Get the device credentials
device_credentials_url = f'https://{url}/api/devices/{device_id}/credentials?api-version=1.0'
response = requests.get(device_credentials_url, headers=headers)

id_scope = response.json()['idScope']
primary_key = response.json()['symmetricKey']['primaryKey']

command_executed = False

async def main():
    global command_executed

    # Pretend to be the device and provision a connection
    provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
        provisioning_host='global.azure-devices-provisioning.net',
        registration_id=device_id,
        id_scope=id_scope,
        symmetric_key=primary_key)
    registration_result = await provisioning_device_client.register()

    # Build the connection string - this is used to connect to IoT Central
    conn_str = 'HostName=' + registration_result.registration_state.assigned_hub + \
               ';DeviceId=' + device_id + \
               ';SharedAccessKey=' + primary_key

    # The device client object is used to interact with Azure IoT Central.
    device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)

    # Connect the device client
    print(f'Connecting as device {device_id}')
    await device_client.connect()
    print('Connected')

    async def method_request_handler(method_request: MethodRequest) -> None:
        print('Command received:', method_request.name)

        # Send the breed to IoT Central
        telemetry = {'Breed': 'american-staffordshire-terrier'}
        await device_client.send_message(json.dumps(telemetry))

        # Send a response - all commands need a response
        method_response = MethodResponse.create_from_method_request(method_request, 200, {})
        await device_client.send_method_response(method_response)

        global command_executed
        command_executed = True

    device_client.on_method_request_received = method_request_handler

    # Execute the command
    command_url = f'https://{url}/api/devices/{device_id}/commands/DetectBreed?api-version=1.0'
    requests.post(command_url, headers=headers, json={})

    time = 0
    while not command_executed and time < 30:
        await asyncio.sleep(1)
        time += 1

    # Check that the command was executed
    if not command_executed:
        print('Validation failed - unable to execute Detect Breed command')
        exit(1)

    # Get the last breed
    telemetry_url = f'https://{url}/api/devices/{device_id}/telemetry/Breed?api-version=1.0'
    response = requests.get(telemetry_url, headers=headers)
    telemetry_value = response.json()['value']
    print(f'Last detected breed: {telemetry_value}')

    if telemetry_value != 'american-staffordshire-terrier':
        print(f'Validation failed - unable to read back the expected telemetry. Value received was {telemetry_value}')
        exit(1)

    # Disconnect the client
    await device_client.disconnect()

    print('Validation passed!')

# Start the app running
asyncio.run(main())

47
validation/validate-model.py Normal file

@ -0,0 +1,47 @@
import os

from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

print('ML model validation')
print()

# Get the user's prediction key and URL
print('What is your prediction key?')
prediction_key = input().strip()
print('What is your prediction URL?')
prediction_url = input().strip()

# Decompose the prediction URL
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]

# Create the image classifier predictor
prediction_credentials = ApiKeyCredentials(in_headers={'Prediction-key': prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)

directory = '../model-images/testing-images'

pass_count = 0
fail_count = 0

# Classify every testing image and check the top tag matches the file name
for entry in os.scandir(directory):
    # The tag is the file name up to the last '-', e.g. buggle-9.jpg -> buggle
    tag = entry.name[:entry.name.rfind('-')]

    with open(entry.path, 'rb') as image:
        results = predictor.classify_image(project_id, iteration_name, image)

    best_prediction = results.predictions[0]

    if best_prediction.tag_name == tag:
        pass_count += 1
    else:
        fail_count += 1

    # Show progress as each image is tested
    print('.', end='', flush=True)

print()

if fail_count > 0:
    print('Validation failed - please check your model')
else:
    print('Validation passed!')