* Adding content

* Update en.json

* Update README.md

* Update TRANSLATIONS.md

* Adding lesson templates

* Fixing code files that had each other's code in them

* Update README.md

* Adding lesson 16

* Adding virtual camera

* Adding Wio Terminal camera capture

* Adding Wio Terminal code

* Adding SBC classification to lesson 16

* Adding challenge, review and assignment
This commit is contained in:
Jim Bennett 2021-06-07 17:46:13 -07:00 committed by GitHub
Parent d98f3cbc58
Commit b7a989648a
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
52 changed files with 1506 additions and 45 deletions

.vscode/settings.json vendored
View file

@ -15,6 +15,8 @@
"geofencing",
"microcontrollers",
"mosquitto",
"photodiode",
"photodiodes",
"sketchnote"
]
}

View file

@ -14,7 +14,7 @@ Libraries can be installed globally and compiled in if needed, or into a specifi
✅ You can learn more about library management and how to find and install libraries in the [PlatformIO library documentation](https://docs.platformio.org/en/latest/librarymanager/index.html).
### Task
### Task - install the WiFi and MQTT Arduino libraries
Install the Arduino libraries.
@ -49,7 +49,7 @@ Install the Arduino libraries.
The Wio Terminal can now be connected to WiFi.
### Task
### Task - connect to WiFi
Connect the Wio Terminal to WiFi.
@ -85,7 +85,7 @@ Connect the Wio Terminal to WiFi.
#include "config.h"
```
This includes header files for the libraries you added earlier, as well as the config header file.
This includes header files for the libraries you added earlier, as well as the config header file. These header files are needed to tell PlatformIO to bring in the code from the libraries. Without explicitly including these header files, some code won't be compiled in and you will get compiler errors.
1. Add the following code above the `setup` function:
@ -128,7 +128,7 @@ Connect the Wio Terminal to WiFi.
Once the Wio Terminal is connected to WiFi, it can connect to the MQTT broker.
### Task
### Task - connect to MQTT
Connect to the MQTT broker.

View file

@ -16,7 +16,7 @@ print("Connecting")
device_client.connect()
print("Connected")
def print_gps_data(line):
def printGPSData(line):
msg = pynmea2.parse(line)
if msg.sentence_type == 'GGA':
lat = pynmea2.dm_to_sd(msg.lat)
@ -37,7 +37,7 @@ while True:
line = serial.readline().decode('utf-8')
while len(line) > 0:
print_gps_data(line)
printGPSData(line)
line = serial.readline().decode('utf-8')
time.sleep(1)

View file

@ -24,7 +24,7 @@ void setup()
pinPeripheral(PIN_WIRE_SCL, PIO_SERCOM_ALT);
}
void print_gps_data()
void printGPSData()
{
if (gps.encode(Serial3.read()))
{
@ -44,7 +44,7 @@ void loop()
{
while (Serial3.available() > 0)
{
print_gps_data();
printGPSData();
}
delay(1000);

View file

@ -5,14 +5,14 @@ serial = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
serial.reset_input_buffer()
serial.flush()
def print_gps_data():
def printGPSData():
print(line.rstrip())
while True:
line = serial.readline().decode('utf-8')
while len(line) > 0:
print_gps_data()
printGPSData()
line = serial.readline().decode('utf-8')
time.sleep(1)

View file

@ -6,14 +6,14 @@ import counterfit_shims_serial
serial = counterfit_shims_serial.Serial('/dev/ttyAMA0')
def print_gps_data(line):
def printGPSData(line):
print(line.rstrip())
while True:
line = serial.readline().decode('utf-8')
while len(line) > 0:
print_gps_data(line)
printGPSData(line)
line = serial.readline().decode('utf-8')
time.sleep(1)

View file

@ -22,7 +22,7 @@ void setup()
pinPeripheral(PIN_WIRE_SCL, PIO_SERCOM_ALT);
}
void print_gps_data()
void printGPSData()
{
Serial.println(Serial3.readStringUntil('\n'));
}
@ -31,7 +31,7 @@ void loop()
{
while (Serial3.available() > 0)
{
print_gps_data();
printGPSData();
}
delay(1000);

View file

@ -10,11 +10,11 @@ The sensor you'll use is a [Grove GPS Air530 sensor](https://www.seeedstudio.com
This is a UART sensor, so it sends GPS data over UART.
### Connect the GPS sensor
## Connect the GPS sensor
The Grove GPS sensor can be connected to the Raspberry Pi.
#### Task - connect the GPS sensor
### Task - connect the GPS sensor
Connect the GPS sensor.
@ -98,14 +98,14 @@ Program the device.
1. Reboot your Pi, then reconnect in VS Code once the Pi has rebooted.
1. From the terminal, create a new folder in the `pi` user's home directory called `gps-sensor`. Create a file in this folder called `app.py`:
1. From the terminal, create a new folder in the `pi` user's home directory called `gps-sensor`. Create a file in this folder called `app.py`.
1. Open this folder in VS Code
1. The GPS module sends UART data over a serial port. Install the `pyserial` Pip package to communicate with the serial port from your Python code:
```sh
pip3 install pip install pyserial
pip3 install pyserial
```
1. Add the following code to your `app.py` file:
@ -118,14 +118,14 @@ Program the device.
serial.reset_input_buffer()
serial.flush()
def print_gps_data(line):
def printGPSData(line):
print(line.rstrip())
while True:
line = serial.readline().decode('utf-8')
while len(line) > 0:
print_gps_data(line)
printGPSData(line)
line = serial.readline().decode('utf-8')
time.sleep(1)
@ -133,9 +133,9 @@ Program the device.
This code imports the `serial` module from the `pyserial` Pip package. It then connects to the `/dev/ttyAMA0` serial port - this is the address of the serial port that the Grove Pi Base Hat uses for its UART port. It then clears any existing data from this serial connection.
Next a function called `print_gps_data` is defined that prints out the line passed to it to the console.
Next a function called `printGPSData` is defined that prints out the line passed to it to the console.
Next the code loops forever, reading as many lines of text as it can from the serial port in each loop. It calls the `print_gps_data` function for each line.
Next the code loops forever, reading as many lines of text as it can from the serial port in each loop. It calls the `printGPSData` function for each line.
After all the data has been read, the loop sleeps for 1 second, then tries again.

View file

@ -24,7 +24,7 @@ Program the device to decode the GPS data.
import pynmea2
```
1. Replace the contents of the `print_gps_data` function with the following:
1. Replace the contents of the `printGPSData` function with the following:
```python
msg = pynmea2.parse(line)

View file

@ -14,7 +14,7 @@ A physical GPS sensor will have an antenna to pick up radio waves from GPS satel
To use a virtual GPS sensor, you need to add one to the CounterFit app.
#### Task
#### Task - add the sensor to CounterFit
Add the GPS sensor to the CounterFit app.
@ -36,7 +36,7 @@ Add the GPS sensor to the CounterFit app.
1. Leave the *Port* set to */dev/ttyAMA0*
1. Select the **Add** button to create the humidity sensor on port `/dev/ttyAMA0`
1. Select the **Add** button to create the GPS sensor on port `/dev/ttyAMA0`
![The GPS sensor settings](../../../images/counterfit-create-gps-sensor.png)
@ -77,22 +77,22 @@ Program the GPS sensor app.
1. Add the following code below this to read from the serial port and print the values to the console:
```python
def print_gps_data(line):
def printGPSData(line):
print(line.rstrip())
while True:
line = serial.readline().decode('utf-8')
while len(line) > 0:
print_gps_data(line)
printGPSData(line)
line = serial.readline().decode('utf-8')
time.sleep(1)
```
A function called `print_gps_data` is defined that prints out the line passed to it to the console.
A function called `printGPSData` is defined that prints out the line passed to it to the console.
Next the code loops forever, reading as many lines of text as it can from the serial port in each loop. It calls the `print_gps_data` function for each line.
Next the code loops forever, reading as many lines of text as it can from the serial port in each loop. It calls the `printGPSData` function for each line.
After all the data has been read, the loop sleeps for 1 second, then tries again.

View file

@ -31,7 +31,7 @@ Program the device to decode the GPS data.
TinyGPSPlus gps;
```
1. Change the contents of the `print_gps_data` function to be the following:
1. Change the contents of the `printGPSData` function to be the following:
```cpp
if (gps.encode(Serial3.read()))

View file

@ -98,7 +98,7 @@ Program the device.
1. Add the following function before the `loop` function to send the GPS data to the serial monitor:
```cpp
void print_gps_data()
void printGPSData()
{
Serial.println(Serial3.readStringUntil('\n'));
}
@ -109,13 +109,13 @@ Program the device.
```cpp
while (Serial3.available() > 0)
{
print_gps_data();
printGPSData();
}
delay(1000);
```
This code reads from the UART serial port. The `readStringUntil` function reads up until a terminator character, in this case a new line. This will read a whole NMEA sentence (NMEA sentences are terminated with a new line character). All the while data can be read from the UART serial port, it is read and sent to the serial monitor via the `print_gps_data` function. Once no more data can be read, the `loop` delays for 1 second (1,000ms).
This code reads from the UART serial port. The `readStringUntil` function reads up until a terminator character, in this case a new line. This will read a whole NMEA sentence (NMEA sentences are terminated with a new line character). All the while data can be read from the UART serial port, it is read and sent to the serial monitor via the `printGPSData` function. Once no more data can be read, the `loop` delays for 1 second (1,000ms).
1. Build and upload the code to the Wio Terminal.

View file

@ -22,6 +22,7 @@ In this lesson we'll cover:
* [Image classification via Machine Learning](#image-classification-via-machine-learning)
* [Train an image classifier](#train-an-image-classifier)
* [Test your image classifier](#test-your-image-classifier)
* [Retrain your image classifier](#retrain-your-image-classifier)
## Using AI and ML to sort food
@ -133,6 +134,8 @@ To use Custom Vision, you first need to create two cognitive services resources
### Task - create an image classifier project
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai), and sign in with the Microsoft account you used for your Azure account.
1. Follow the [Create a new Project section of the Build a classifier quickstart on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier?WT.mc_id=academic-17441-jabenn#create-a-new-project) to create a new Custom Vision project. The UI may change and these docs are always the most up to date reference.
Call your project `fruit-quality-detector`.
@ -151,6 +154,8 @@ Ideally each picture should be just the fruit, with either a consistent backgrou
> 💁 It's important not to have specific backgrounds, or specific items that are not related to the thing being classified for each tag, otherwise the classifier may just classify based on the background. There was a classifier for skin cancer that was trained on moles both normal and cancerous, and the cancerous ones all had rulers against them to measure the size. It turned out the classifier was almost 100% accurate at identifying rulers in pictures, not cancerous moles.
Image classifiers run at very low resolution. For example, Custom Vision can take training and prediction images up to 10240x10240, but trains and runs the model on images at 227x227. Larger images are shrunk to this size, so ensure the thing you are classifying takes up a large part of the image, otherwise it may be too small in the smaller image used by the classifier.
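To get a feel for how much detail survives this shrinking, here is a minimal sketch (not part of the lesson code, and assuming the Pillow Python package is installed) that downscales an image to the 227x227 size mentioned above:

```python
from PIL import Image  # Pillow - install with `pip3 install pillow`

# The file name here is just an example - use one of your own training images
image = Image.open('banana.jpg')

# Shrink to roughly the size the classifier trains at, then save for inspection
preview = image.resize((227, 227))
preview.save('banana-227.jpg')
```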
1. Gather pictures for your classifier. You will need at least 5 pictures for each label to train the classifier, but the more the better. You will also need a few additional images to test the classifier. These images should all be different images of the same thing. For example:
* Using 2 ripe bananas, take some pictures of each one from a few different angles, taking at least 7 pictures (5 to train, 2 to test), but ideally more.
@ -181,18 +186,28 @@ Once your classifier is trained, you can test it by giving it a new image to cla
### Task - test your image classifier
1. Follow the [Test and retrain a model with Custom Vision Service documentation on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model?WT.mc_id=academic-17441-jabenn#test-your-model) to test your image classifier. Use the testing images you created earlier, not any of the images you used for training.
1. Follow the [Test your model documentation on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model?WT.mc_id=academic-17441-jabenn#test-your-model) to test your image classifier. Use the testing images you created earlier, not any of the images you used for training.
![A unripe banana predicted as unripe with a 98.9% probability, ripe with a 1.1% probability](../../../images/banana-unripe-quick-test-prediction.png)
1. Try all the testing images you have access to and observe the probabilities.
## Retrain your image classifier
When you test your classifier, it may not give the results you expect. Image classifiers use machine learning to make predictions about what is in an image, based on probabilities that particular features of an image mean that it matches a particular label. It doesn't understand what is in the image - it doesn't know what a banana is or understand what makes a banana a banana instead of a boat. You can improve your classifier by retraining it with images it gets wrong.
Every time you make a prediction using the quick test option, the image and results are stored. You can use these images to retrain your model.
### Task - retrain your image classifier
1. Follow the [Use the predicted image for training documentation on the Microsoft docs](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/test-your-model?WT.mc_id=academic-17441-jabenn#use-the-predicted-image-for-training) to retrain your model, using the correct tag for each image.
1. Once your model has been retrained, test it on new images.
---
## 🚀 Challenge
Image classifiers use machine learning to make predictions about what is in an image, based on probabilities that particular features of an image mean that it matches a particular label. It doesn't understand what is in the image - it doesn't know what a banana is or understand what makes a banana a banana instead of a boat.
What do you think would happen if you used a picture of a strawberry with a model trained on bananas, or a picture of an inflatable banana, or a person in a banana suit, or even a yellow cartoon character like someone from the Simpsons?
Try it out and see what the predictions are. You can find images to try with using [Bing Image search](https://www.bing.com/images/trending).

View file

@ -10,24 +10,149 @@ Add a sketchnote if possible/appropriate
## Introduction
In this lesson you will learn about
In the last lesson you learned about image classifiers, and how to train them to detect good and bad fruit. To use this image classifier in an IoT application, you need to be able to capture an image using some kind of camera, and send this image to the cloud to be classified.
In this lesson you will learn about camera sensors, and how to use them with an IoT device to capture an image. You will also learn how to call the image classifier from your IoT device.
In this lesson we'll cover:
* [Thing 1](#thing-1)
* [Camera sensors](#camera-sensors)
* [Capture an image using an IoT device](#capture-an-image-using-an-iot-device)
* [Publish your image classifier](#publish-your-image-classifier)
* [Classify images from your IoT device](#classify-images-from-your-iot-device)
* [Improve the model](#improve-the-model)
## Thing 1
## Camera sensors
Camera sensors, as the name suggests, are cameras that you can connect to your IoT device. They can take still images, or capture streaming video. Some will return raw image data, others will compress the image data into an image file such as a JPEG or PNG. Usually the cameras that work with IoT devices are much smaller and lower resolution than what you might be used to, but you can get high resolution cameras that will rival top-end phones. You can get all manner of interchangeable lenses, multiple camera setups, infra-red thermal cameras, or UV cameras.
![The light from a scene passes through a lens and is focused on a CMOS sensor](../../../images/cmos-sensor.png)
Most camera sensors use image sensors where each pixel is a photodiode. A lens focuses the image onto the image sensor, and thousands or millions of photodiodes detect the light falling on them and record it as pixel data.
> 💁 Lenses invert images, the camera sensor then flips the image back the right way round. This is the same in your eyes - what you see is detected upside down on the back of your eye and your brain corrects it.
> 🎓 The image sensor is known as an Active-Pixel Sensor (APS), and the most popular type of APS is a complementary metal-oxide semiconductor sensor, or CMOS. You may have heard the term CMOS sensor used for camera sensors.
Camera sensors are digital sensors, sending image data as digital data, usually with the help of a library that provides the communication. Cameras connect using protocols like SPI to allow them to send large quantities of data - images are substantially larger than single numbers from a sensor such as a temperature sensor.
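As a rough illustration of why a fast protocol is needed, the short sketch below (with illustrative numbers, not taken from any particular device) compares the raw size of a single uncompressed camera frame to a typical single sensor reading:

```python
# One raw (uncompressed) RGB frame at 640x480 vs a single sensor reading
width, height, bytes_per_pixel = 640, 480, 3
frame_bytes = width * height * bytes_per_pixel
print(f'One raw 640x480 RGB frame: {frame_bytes:,} bytes')  # 921,600 bytes
print('One temperature reading: typically 2-4 bytes')
```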
✅ What are the limitations around image size with IoT devices? Think about the constraints especially on microcontroller hardware.
## Capture an image using an IoT device
You can use your IoT device to capture an image to be classified.
### Task - capture an image using an IoT device
Work through the relevant guide to capture an image using your IoT device:
* [Arduino - Wio Terminal](wio-terminal-camera.md)
* [Single-board computer - Raspberry Pi](pi-camera.md)
* [Single-board computer - Virtual device](virtual-device-camera.md)
## Publish your image classifier
You trained your image classifier in the last lesson. Before you can use it from your IoT device, you need to publish the model.
### Model iterations
When your model was training in the last lesson, you may have noticed that the **Performance** tab shows iterations on the side. When you first trained the model you would have seen *Iteration 1* in training. When you improved the model using the prediction images, you would have seen *Iteration 2* in training.
Every time you train the model, you get a new iteration. This is a way to keep track of the different versions of your model trained on different data sets. When you do a **Quick Test**, there is a drop-down you can use to select the iteration, so you can compare the results across multiple iterations.
When you are happy with an iteration, you can publish it to make it available to be used from external applications. This way you can have a published version that is used by your devices, then work on a new version over multiple iterations, then publish that once you are happy with it.
### Task - publish an iteration
Iterations are published from the Custom Vision portal.
1. Launch the Custom Vision portal at [CustomVision.ai](https://customvision.ai) and sign in if you don't have it open already.
1. Select the **Performance** tab from the options at the top
1. Select the latest iteration from the *Iterations* list on the side
1. Select the **Publish** button for the iteration
![The publish button](../../../images/custom-vision-publish-button.png)
1. In the *Publish Model* dialog, set the *Prediction resource* to the `fruit-quality-detector-prediction` resource you created in the last lesson. Leave the name as `Iteration2`, and select the **Publish** button.
1. Once published, select the **Prediction URL** button. This will show details of the prediction API, and you will need these to call the model from your IoT device. The lower section is labelled *If you have an image file*, and these are the details you want. Take a copy of the URL that is shown, which will be something like:
```output
https://<location>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<id>/classify/iterations/Iteration2/image
```
Where `<location>` will be the location you used when creating your custom vision resource, and `<id>` will be a long ID made up of letters and numbers.
Also take a copy of the *Prediction-Key* value. This is a secure key that you have to pass when you call the model. Only applications that pass this key are allowed to use the model; any other applications are rejected.
![The prediction API dialog showing the URL and key](../../../images/custom-vision-prediction-key-endpoint.png)
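To see how the URL and key fit together, here is a hedged sketch of calling the prediction endpoint directly over REST - it assumes the Python `requests` package, which is not part of the lesson code, and an `image.jpg` file to send:

```python
import requests

# Placeholders - use the values copied from the Prediction URL dialog
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'

# POST the raw image bytes, passing the key in the Prediction-Key header
with open('image.jpg', 'rb') as image_file:
    response = requests.post(prediction_url,
                             headers={'Prediction-Key': prediction_key,
                                      'Content-Type': 'application/octet-stream'},
                             data=image_file.read())

print(response.json())
```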
✅ When a new iteration is published, it will have a different name. How do you think you would change the iteration an IoT device is using?
## Classify images from your IoT device
You can now use these connection details to call the image classifier from your IoT device.
### Task - classify images from your IoT device
Work through the relevant guide to classify images using your IoT device:
* [Arduino - Wio Terminal](wio-terminal-classify-image.md)
* [Single-board computer - Raspberry Pi/Virtual IoT device](single-board-computer-classify-image.md)
## Improve the model
You may find that the results you get when using the camera connected to your IoT device don't match what you would expect. The predictions are not always as accurate as using images uploaded from your computer. This is because the model was trained on different data to what is being used for predictions.
To get the best results for an image classifier, you want to train the model with images that are as similar as possible to the images used for predictions. If you used your phone camera to capture images for training, for example, the image quality, sharpness, and color will be different from those of a camera connected to an IoT device.
![2 banana pictures, a low resolution one with poor lighting from an IoT device, and a high resolution one with good lighting from a phone](../../../images/banana-picture-compare.png)
In the image above, the banana picture on the left was taken using a Raspberry Pi Camera, the one on the right was taken of the same banana in the same location using an iPhone. There is a noticeable difference in quality - the iPhone picture is sharper, with brighter colors and more contrast.
✅ What else might cause the images captured by your IoT device to have incorrect predictions? Think about the environment an IoT device might be used in, what factors can affect the image being captured?
To improve the model, you can retrain it using the images captured from the IoT device.
### Task - improve the model
1. Classify multiple images of both ripe and unripe fruit using your IoT device.
1. In the Custom Vision portal, retrain the model using the images on the *Predictions* tab.
> ⚠️ You can refer to [the instructions for retraining your classifier in lesson 1 if needed](../1-train-fruit-detector/README.md#retrain-your-image-classifier).
1. If your images look very different to the original ones used for training, you can delete all the original images by selecting them in the *Training Images* tab and selecting the **Delete** button. To select an image, move your cursor over it and a tick will appear; select that tick to select or deselect the image.
1. Train a new iteration of the model and publish it using the steps above.
1. Update the endpoint URL in your code, and re-run the app.
1. Repeat these steps until you are happy with the results of the predictions.
---
## 🚀 Challenge
How much does image resolution or lighting affect the prediction?
Try changing the resolution of the images in your device code and see if it makes a difference to the quality of the images. Also try changing the lighting.
If you were to create a production device to sell to farms or factories, how would you ensure it gives consistent results all the time?
## Post-lecture quiz
[Post-lecture quiz](https://brave-island-0b7c7f50f.azurestaticapps.net/quiz/32)
## Review & Self Study
You trained your custom vision model using the portal. This relies on having images available - and in the real world you may not be able to get training data that matches what the camera on your device captures. You can work around this by training directly from your device using the training API, building a model from images captured by your IoT device.
* Read up on the training API in the [Using the Custom Vision SDK quickstart](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/quickstarts/image-classification?tabs=visual-studio&pivots=programming-language-python&WT.mc_id=academic-17441-jabenn)
## Assignment
[](assignment.md)
[Respond to classification results](assignment.md)

View file

@ -1,9 +1,13 @@
#
# Respond to classification results
## Instructions
Your device has classified images, and has the values for the predictions. Your device could use this information to do something - it could send it to an IoT Hub for processing by other systems, or it could control an actuator such as an LED to light up when the fruit is unripe.
Add code to your device to respond in a way of your choosing - either send data to an IoT Hub, control an actuator, or combine the two and send data to an IoT Hub with some serverless code that determines if the fruit is ripe or not and sends back a command to control an actuator.
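For example, here is a minimal sketch of the decision logic, assuming the `results` object returned by `predictor.classify_image` in the lesson code - the threshold value and the `unripe` tag name are assumptions to adapt to your own project:

```python
def respond_to_predictions(predictions, threshold=0.5):
    # Find the tag the model is most confident about
    best = max(predictions, key=lambda p: p.probability)
    if best.probability < threshold:
        print('Not confident enough to act')
    elif best.tag_name == 'unripe':
        # Replace this with your action - control an actuator or send to an IoT Hub
        print('Unripe fruit detected')
    else:
        print('Ripe fruit detected')

respond_to_predictions(results.predictions)
```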
## Rubric
| Criteria | Exemplary | Adequate | Needs Improvement |
| -------- | --------- | -------- | ----------------- |
| | | | |
| Respond to predictions | Was able to implement a response to predictions that works consistently with predictions of the same value. | Was able to implement a response that is not dependent on the predictions, such as just sending raw data to an IoT Hub | Was unable to program the device to respond to the predictions |

View file

@ -0,0 +1,16 @@
import io
import time
from picamera import PiCamera
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())

View file

@ -0,0 +1,16 @@
from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)
import io
from counterfit_shims_picamera import PiCamera
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())

View file

@ -0,0 +1,5 @@
.pio
.vscode/.browse.c_cpp.db*
.vscode/c_cpp_properties.json
.vscode/launch.json
.vscode/ipch

View file

@ -0,0 +1,7 @@
{
// See http://go.microsoft.com/fwlink/?LinkId=827846
// for the documentation about the extensions.json format
"recommendations": [
"platformio.platformio-ide"
]
}

View file

@ -0,0 +1,39 @@
This directory is intended for project header files.
A header file is a file containing C declarations and macro definitions
to be shared between several project source files. You request the use of a
header file in your project source file (C, C++, etc) located in `src` folder
by including it, with the C preprocessing directive `#include'.
```src/main.c
#include "header.h"
int main (void)
{
...
}
```
Including a header file produces the same results as copying the header file
into each source file that needs it. Such copying would be time-consuming
and error-prone. With a header file, the related declarations appear
in only one place. If they need to be changed, they can be changed in one
place, and programs that include the header file will automatically use the
new version when next recompiled. The header file eliminates the labor of
finding and changing all the copies as well as the risk that a failure to
find one copy will result in inconsistencies within a program.
In C, the usual convention is to give header files names that end with `.h'.
It is most portable to use only letters, digits, dashes, and underscores in
header file names, and at most one dot.
Read more about using header files in official GCC documentation:
* Include Syntax
* Include Operation
* Once-Only Headers
* Computed Includes
https://gcc.gnu.org/onlinedocs/cpp/Header-Files.html

View file

@ -0,0 +1,46 @@
This directory is intended for project specific (private) libraries.
PlatformIO will compile them to static libraries and link into executable file.
The source code of each library should be placed in its own separate directory
("lib/your_library_name/[here are source files]").
For example, see a structure of the following two libraries `Foo` and `Bar`:
|--lib
| |
| |--Bar
| | |--docs
| | |--examples
| | |--src
| | |- Bar.c
| | |- Bar.h
| | |- library.json (optional, custom build options, etc) https://docs.platformio.org/page/librarymanager/config.html
| |
| |--Foo
| | |- Foo.c
| | |- Foo.h
| |
| |- README --> THIS FILE
|
|- platformio.ini
|--src
|- main.c
and the contents of `src/main.c`:
```
#include <Foo.h>
#include <Bar.h>
int main (void)
{
...
}
```
PlatformIO Library Dependency Finder will automatically find dependent
libraries by scanning project source files.
More information about PlatformIO Library Dependency Finder
- https://docs.platformio.org/page/librarymanager/ldf.html

View file

@ -0,0 +1,24 @@
; PlatformIO Project Configuration File
;
; Build options: build flags, source filter
; Upload options: custom upload port, speed and extra flags
; Library options: dependencies, extra library storages
; Advanced options: extra scripting
;
; Please visit documentation for the other options and examples
; https://docs.platformio.org/page/projectconf.html
[env:seeed_wio_terminal]
platform = atmelsam
board = seeed_wio_terminal
framework = arduino
lib_deps =
seeed-studio/Seeed Arduino rpcWiFi @ 1.0.3
seeed-studio/Seeed Arduino FS @ 2.0.2
seeed-studio/Seeed Arduino SFUD @ 2.0.1
seeed-studio/Seeed Arduino rpcUnified @ 2.1.3
seeed-studio/Seeed_Arduino_mbedtls @ 3.0.1
seeed-studio/Seeed Arduino RTC @ 2.0.0
build_flags =
-DARDUCAM_SHIELD_V2
-DOV2640_CAM

View file

@ -0,0 +1,160 @@
#pragma once
#include <ArduCAM.h>
#include <Wire.h>
class Camera
{
public:
Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
{
_format = format;
_image_size = image_size;
}
bool init()
{
// Reset the CPLD
_arducam.write_reg(0x07, 0x80);
delay(100);
_arducam.write_reg(0x07, 0x00);
delay(100);
// Check if the ArduCAM SPI bus is OK
_arducam.write_reg(ARDUCHIP_TEST1, 0x55);
if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
{
return false;
}
// Change MCU mode
_arducam.set_mode(MCU2LCD_MODE);
uint8_t vid, pid;
// Check if the camera module type is OV2640
_arducam.wrSensorReg8_8(0xff, 0x01);
_arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
_arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
{
return false;
}
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
_arducam.OV2640_set_Light_Mode(Auto);
_arducam.OV2640_set_Special_effects(Normal);
delay(1000);
return true;
}
void startCapture()
{
_arducam.flush_fifo();
_arducam.clear_fifo_flag();
_arducam.start_capture();
}
bool captureReady()
{
return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
}
bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
{
if (!captureReady()) return false;
// Get the image file length
uint32_t length = _arducam.read_fifo_length();
buffer_length = length;
if (length >= MAX_FIFO_SIZE)
{
return false;
}
if (length == 0)
{
return false;
}
// create the buffer
byte *buf = new byte[length];
uint8_t temp = 0, temp_last = 0;
int i = 0;
uint32_t buffer_pos = 0;
bool is_header = false;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
while (length--)
{
temp_last = temp;
temp = SPI.transfer(0x00);
//Read JPEG data from FIFO
if ((temp == 0xD9) && (temp_last == 0xFF)) // if the JPEG end marker (FF D9) is found, write the final byte and end the SPI transfer
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_HIGH();
}
if (is_header == true)
{
//Write image data to buffer if not full
if (i < 256)
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
else
{
_arducam.CS_HIGH();
i = 0;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
}
}
else if ((temp == 0xD8) && (temp_last == 0xFF)) // the JPEG start marker (FF D8)
{
is_header = true;
buf[buffer_pos] = temp_last;
buffer_pos++;
i++;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
}
_arducam.clear_fifo_flag();
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
// return the buffer
*buffer = buf;

return true;
}
private:
ArduCAM _arducam;
int _format;
int _image_size;
};

View file

@ -0,0 +1,9 @@
#pragma once
#include <string>
using namespace std;
// WiFi credentials
const char *SSID = "<SSID>";
const char *PASSWORD = "<PASSWORD>";

View file

@ -0,0 +1,112 @@
#include <Arduino.h>
#include <rpcWiFi.h>
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
#include <SPI.h>
#include "config.h"
#include "camera.h"
Camera camera = Camera(JPEG, OV2640_640x480);
void setupCamera()
{
pinMode(PIN_SPI_SS, OUTPUT);
digitalWrite(PIN_SPI_SS, HIGH);
Wire.begin();
SPI.begin();
if (!camera.init())
{
Serial.println("Error setting up the camera!");
}
}
void connectWiFi()
{
while (WiFi.status() != WL_CONNECTED)
{
Serial.println("Connecting to WiFi..");
WiFi.begin(SSID, PASSWORD);
delay(500);
}
Serial.println("Connected!");
}
void setupSDCard()
{
while (!SD.begin(SDCARD_SS_PIN, SDCARD_SPI))
{
Serial.println("SD Card Error");
}
}
void setup()
{
Serial.begin(9600);
while (!Serial)
; // Wait for Serial to be ready
delay(1000);
connectWiFi();
setupCamera();
pinMode(WIO_KEY_C, INPUT_PULLUP);
setupSDCard();
}
int fileNum = 1;
void saveToSDCard(byte *buffer, uint32_t length)
{
char buff[16];
sprintf(buff, "%d.jpg", fileNum);
fileNum++;
File outFile = SD.open(buff, FILE_WRITE );
outFile.write(buffer, length);
outFile.close();
Serial.print("Image written to file ");
Serial.println(buff);
}
void buttonPressed()
{
camera.startCapture();
while (!camera.captureReady())
delay(100);
Serial.println("Image captured");
byte *buffer;
uint32_t length;
if (camera.readImageToBuffer(&buffer, length))
{
Serial.print("Image read to buffer with length ");
Serial.println(length);
saveToSDCard(buffer, length);
delete[] buffer; // the buffer was allocated with new[], so release it with delete[]
}
}
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW)
{
buttonPressed();
delay(2000);
}
delay(200);
}

View file

@ -0,0 +1,11 @@
This directory is intended for PlatformIO Unit Testing and project tests.
Unit Testing is a software testing method by which individual units of
source code, sets of one or more MCU program modules together with associated
control data, usage procedures, and operating procedures, are tested to
determine whether they are fit for use. Unit testing finds problems early
in the development cycle.
More information about PlatformIO Unit Testing:
- https://docs.platformio.org/page/plus/unit-testing.html

View file

@ -0,0 +1,36 @@
import io
import time
from picamera import PiCamera
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
image.seek(0)
results = predictor.classify_image(project_id, iteration_name, image)
for prediction in results.predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')

View file

@ -0,0 +1,36 @@
from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)
import io
from counterfit_shims_picamera import PiCamera
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
image.seek(0)
results = predictor.classify_image(project_id, iteration_name, image)
for prediction in results.predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')

View file

@ -0,0 +1,133 @@
# Capture an image - Raspberry Pi
In this part of the lesson, you will add a camera sensor to your Raspberry Pi, and read images from it.
## Hardware
The Raspberry Pi needs a camera.
The camera you'll use is a [Raspberry Pi Camera Module](https://www.raspberrypi.org/products/camera-module-v2/). This camera is designed to work with the Raspberry Pi and connects via a dedicated connector on the Pi.
> 💁 This camera uses the [Camera Serial Interface, a protocol from the Mobile Industry Processor Interface Alliance](https://wikipedia.org/wiki/Camera_Serial_Interface), known as MIPI-CSI. This is a dedicated protocol for sending images.
## Connect the camera
The camera can be connected to the Raspberry Pi using a ribbon cable.
### Task - connect the camera
![A Raspberry Pi Camera](../../../images/pi-camera-module.png)
1. Power off the Pi.
1. Connect the ribbon cable that comes with the camera to the camera. To do this, pull gently on the black plastic clip in the holder so that it comes out a little bit, then slide the cable into the socket with the blue side facing away from the lens and the metal pin strips facing towards the lens. Once it is all the way in, push the black plastic clip back into place.
You can find an animation showing how to open the clip and insert the cable on the [Raspberry Pi Getting Started with the Camera module documentation](https://projects.raspberrypi.org/en/projects/getting-started-with-picamera/2).
![The ribbon cable inserted into the camera module](../../../images/pi-camera-ribbon-cable.png)
1. Remove the Grove Base Hat from the Pi.
1. Pass the ribbon cable through the camera slot in the Grove Base Hat. Make sure the blue side of the cable faces towards the analog ports labelled **A0**, **A1** etc.
![The ribbon cable passing through the grove base hat](../../../images/grove-base-hat-ribbon-cable.png)
1. Insert the ribbon cable into the camera port on the Pi. Once again, pull the black plastic clip up, insert the cable, then push the clip back in. The blue side of the cable should face the USB and ethernet ports.
![The ribbon cable connected to the camera socket on the Pi](../../../images/pi-camera-socket-ribbon-cable.png)
1. Refit the Grove Base Hat
## Program the camera
The Raspberry Pi can now be programmed to use the camera using the [PiCamera](https://pypi.org/project/picamera/) Python library.
### Task - program the camera
Program the device.
1. Power up the Pi and wait for it to boot
1. Launch VS Code, either directly on the Pi, or connect via the Remote SSH extension.
1. By default the camera socket on the Pi is turned off. You can turn it on by running the following commands from your terminal:
```sh
sudo raspi-config nonint do_camera 0
sudo reboot
```
This will toggle a setting to enable the camera, then reboot the Pi to make that setting take effect. Wait for the Pi to reboot, then re-launch VS Code.
1. From the terminal, create a new folder in the `pi` user's home directory called `fruit-quality-detector`. Create a file in this folder called `app.py`.
1. Open this folder in VS Code
1. To interact with the camera, you can use the PiCamera Python library. Install the Pip package for this with the following command:
```sh
pip3 install picamera
```
1. Add the following code to your `app.py` file:
```python
import io
import time
from picamera import PiCamera
```
This code imports some libraries needed, including the `PiCamera` library.
1. Add the following code below this to initialize the camera:
```python
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
time.sleep(2)
```
This code creates a PiCamera object and sets the resolution to 640x480. Although higher resolutions are supported (up to 3280x2464), the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.
The `camera.rotation = 0` line sets the rotation of the image. The ribbon cable comes into the bottom of the camera, but if your camera is rotated to point more easily at the item you want to classify, you can change this line to the number of degrees of rotation.
![The camera hanging down over a drink can](../../../images/pi-camera-upside-down.png)
For example, if you suspend the ribbon cable over something so that it is at the top of the camera, then set the rotation to be 180:
```python
camera.rotation = 180
```
The camera takes a few seconds to start up, hence the `time.sleep(2)`.
1. Add the following code below this to capture the image as binary data:
```python
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
```
This code creates a `BytesIO` object to store binary data. The image is read from the camera as a JPEG file and stored in this object. This object has a position indicator that tracks where it is in the data, so that more data can be written to the end if needed. The `image.seek(0)` line moves this position back to the start so that all the data can be read later.
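To see this position indicator in action, here is a tiny standalone sketch (separate from the lesson code) that you can run on its own:

```python
import io

data = io.BytesIO()
data.write(b'hello')   # the position is now 5, at the end of the data
print(data.read())     # b'' - reading from the end returns nothing
data.seek(0)           # move the position back to the start
print(data.read())     # b'hello'
```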
1. Below this, add the following to save the image to a file:
```python
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
```
This code opens a file called `image.jpg` for writing, then reads all the data from the `BytesIO` object and writes that to the file.
> 💁 You can capture the image directly to a file instead of a `BytesIO` object by passing the file name to the `camera.capture` call. The reason for using the `BytesIO` object is so that later in this lesson you can send the image to your image classifier.
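For example, capturing straight to a file, as described in the note above, would be a one-liner:

```python
# Capture directly to a file instead of a BytesIO object
camera.capture('image.jpg', 'jpeg')
```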
1. Point the camera at something and run this code.
1. An image will be captured and saved as `image.jpg` in the current folder. You will see this file in the VS Code explorer. Select the file to view the image. If it needs rotation, update the `camera.rotation = 0` line as necessary and take another picture.
> 💁 You can find this code in the [code-camera/pi](code-camera/pi) folder.
😀 Your camera program was a success!

View file

@ -0,0 +1,91 @@
# Classify an image - Virtual IoT Hardware and Raspberry Pi
In this part of the lesson, you will send the image captured by the camera to the Custom Vision service to classify it.
## Send images to Custom Vision
The Custom Vision service has a Python SDK you can use to classify images.
### Task - send images to Custom Vision
1. Open the `fruit-quality-detector` folder in VS Code. If you are using a virtual IoT device, make sure the virtual environment is running in the terminal.
1. The Python SDK to send images to Custom Vision is available as a Pip package. Install it with the following command:
```sh
pip3 install azure-cognitiveservices-vision-customvision
```
1. Add the following import statements at the top of the `app.py` file:
```python
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
```
This brings in some modules from the Custom Vision libraries, one to authenticate with the prediction key, and one to provide a prediction client class that can call Custom Vision.
1. Add the following code to the end of the file:
```python
prediction_url = '<prediction_url>'
prediction_key = '<prediction key>'
```
Replace `<prediction_url>` with the URL you copied from the *Prediction URL* dialog earlier in this lesson. Replace `<prediction key>` with the prediction key you copied from the same dialog.
1. The prediction URL that was provided by the *Prediction URL* dialog is designed to be used when calling the REST endpoint directly. The Python SDK uses parts of the URL in different places. Add the following code to break apart this URL into the parts needed:
```python
parts = prediction_url.split('/')
endpoint = 'https://' + parts[2]
project_id = parts[6]
iteration_name = parts[9]
```
This splits the URL, extracting the endpoint of `https://<location>.api.cognitive.microsoft.com`, the project ID, and the name of the published iteration.
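For example, with a hypothetical prediction URL the parts line up like this:

```python
# A worked example with a made-up URL - your location, ID and iteration will differ
url = 'https://westus2.api.cognitive.microsoft.com/customvision/v3.0/Prediction/abc123/classify/iterations/Iteration2/image'
parts = url.split('/')
print(parts[2])   # westus2.api.cognitive.microsoft.com - the endpoint host
print(parts[6])   # abc123 - the project ID
print(parts[9])   # Iteration2 - the published iteration name
```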
1. Create a predictor object to perform the prediction with the following code:
```python
prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": prediction_key})
predictor = CustomVisionPredictionClient(endpoint, prediction_credentials)
```
The `prediction_credentials` object wraps the prediction key. This is then used to create a prediction client object pointing at the endpoint.
1. Send the image to custom vision using the following code:
```python
image.seek(0)
results = predictor.classify_image(project_id, iteration_name, image)
```
This rewinds the image back to the start, then sends it to the prediction client.
1. Finally, show the results with the following code:
```python
for prediction in results.predictions:
print(f'{prediction.tag_name}:\t{prediction.probability * 100:.2f}%')
```
This will loop through all the predictions that have been returned and show them on the terminal. The probabilities returned are floating point numbers between 0 and 1, with 0 being a 0% chance of matching the tag, and 1 being a 100% chance.
> 💁 Image classifiers will return the percentages for all tags that have been used. Each tag will have a probability that the image matches that tag.
1. Run your code, with your camera pointing at some fruit, or an appropriate image set, or fruit visible on your webcam if using virtual IoT hardware. You will see the output in the console:
```output
(.venv) ➜ fruit-quality-detector python app.py
ripe: 56.84%
unripe: 43.16%
```
You will be able to see the image that was taken, and these values in the **Predictions** tab in Custom Vision.
![A banana in custom vision predicted ripe at 56.8% and unripe at 43.1%](../../../images/custom-vision-banana-prediction.png)
> 💁 You can find this code in the [code-classify/pi](code-classify/pi) or [code-classify/virtual-device](code-classify/virtual-device) folder.
😀 Your camera program was a success!

View file

@ -0,0 +1,112 @@
# Capture an image - Virtual IoT Hardware
In this part of the lesson, you will add a camera sensor to your virtual IoT device, and read images from it.
## Hardware
The virtual IoT device will use a simulated camera that sends either images from files, or from your webcam.
### Add the camera to CounterFit
To use a virtual camera, you need to add one to the CounterFit app.
#### Task - add the camera to CounterFit
Add the Camera to the CounterFit app.
1. Create a new Python app on your computer in a folder called `fruit-quality-detector` with a single file called `app.py` and a Python virtual environment, and add the CounterFit pip packages.
> ⚠️ You can refer to [the instructions for creating and setting up a CounterFit Python project in lesson 1 if needed](../../../1-getting-started/lessons/1-introduction-to-iot/virtual-device.md).
1. Install an additional Pip package for a CounterFit shim that can talk to camera sensors by simulating some of the [Picamera Pip package](https://pypi.org/project/picamera/). Make sure you are installing this from a terminal with the virtual environment activated.
```sh
pip install counterfit-shims-picamera
```
1. Make sure the CounterFit web app is running
1. Create a camera:
1. In the *Create sensor* box in the *Sensors* pane, drop down the *Sensor type* box and select *Camera*.
1. Set the *Name* to `Picamera`
1. Select the **Add** button to create the camera
![The camera settings](../../../images/counterfit-create-camera.png)
The camera will be created and appear in the sensors list.
![The camera created](../../../images/counterfit-camera.png)
## Program the camera
The virtual IoT device can now be programmed to use the virtual camera.
### Task - program the camera
Program the device.
1. Make sure the `fruit-quality-detector` app is open in VS Code
1. Open the `app.py` file
1. Add the following code to the top of `app.py` to connect the app to CounterFit:
```python
from counterfit_connection import CounterFitConnection
CounterFitConnection.init('127.0.0.1', 5000)
```
1. Add the following code to your `app.py` file:
```python
import io
from counterfit_shims_picamera import PiCamera
```
This code imports some libraries needed, including the `PiCamera` class from the counterfit_shims_picamera library.
1. Add the following code below this to initialize the camera:
```python
camera = PiCamera()
camera.resolution = (640, 480)
camera.rotation = 0
```
This code creates a PiCamera object and sets the resolution to 640x480. Although higher resolutions are supported, the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.
The `camera.rotation = 0` line sets the rotation of the image in degrees. If you need to rotate the image from the webcam or the file, set this as appropriate. For example, if you want to change the image of a banana on a webcam in landscape mode to be portrait, set `camera.rotation = 90`.
1. Add the following code below this to capture the image as binary data:
```python
image = io.BytesIO()
camera.capture(image, 'jpeg')
image.seek(0)
```
This code creates a `BytesIO` object to store binary data. The image is read from the camera as a JPEG file and stored in this object. This object has a position indicator that tracks where it is in the data, so that more data can be written to the end if needed. The `image.seek(0)` line moves this position back to the start so that all the data can be read later.
1. Below this, add the following to save the image to a file:
```python
with open('image.jpg', 'wb') as image_file:
image_file.write(image.read())
```
This code opens a file called `image.jpg` for writing, then reads all the data from the `BytesIO` object and writes that to the file.
> 💁 You can capture the image directly to a file instead of a `BytesIO` object by passing the file name to the `camera.capture` call. The reason for using the `BytesIO` object is so that later in this lesson you can send the image to your image classifier.
1. Configure the image that the camera in CounterFit will capture. You can either set the *Source* to *File*, then upload an image file, or set the *Source* to *WebCam*, and images will be captured from your webcam. Make sure you select the **Set** button after selecting a picture or selecting your webcam.
![CounterFit with a file set as the image source, and a web cam set showing a person holding a banana in a preview of the webcam](../../../images/counterfit-camera-options.png)
1. An image will be captured and saved as `image.jpg` in the current folder. You will see this file in the VS Code explorer. Select the file to view the image. If it needs rotation, update the `camera.rotation = 0` line as necessary and take another picture.
> 💁 You can find this code in the [code-camera/virtual-iot-device](code-camera/virtual-iot-device) folder.
😀 Your camera program was a success!

View file

@ -0,0 +1,458 @@
# Capture an image - Wio Terminal
In this part of the lesson, you will add a camera to your Wio Terminal, and capture images from it.
## Hardware
The Wio Terminal needs a camera.
The camera you'll use is an [ArduCam Mini 2MP Plus](https://www.arducam.com/product/arducam-2mp-spi-camera-b0067-arduino/). This is a 2 megapixel camera based on the OV2640 image sensor. It communicates over an SPI interface to capture images, and uses I<sup>2</sup>C to configure the sensor.
## Connect the camera
The ArduCam doesn't have a Grove socket; instead it connects to both the SPI and I<sup>2</sup>C buses via the GPIO pins on the Wio Terminal.
### Task - connect the camera
Connect the camera.
![An ArduCam sensor](../../../images/arducam.png)
1. The pins on the base of the ArduCam need to be connected to the GPIO pins on the Wio Terminal. To make it easier to find the right pins, attach the GPIO pin sticker that comes with the Wio Terminal around the pins:
![The wio terminal with the GPIO pin sticker on](../../../images/wio-terminal-pin-sticker.png)
1. Using jumper wires, make the following connections:
| ArduCAM pin | Wio Terminal pin | Description |
| ----------- | ---------------- | --------------------------------------- |
| CS | 24 (SPI_CS) | SPI Chip Select |
| MOSI | 19 (SPI_MOSI) | SPI Controller Output, Peripheral Input |
| MISO | 21 (SPI_MISO) | SPI Controller Input, Peripheral Output |
| SCK | 23 (SPI_SCLK) | SPI Serial Clock |
| GND | 6 (GND) | Ground - 0V |
| VCC | 4 (5V) | 5V power supply |
| SDA | 3 (I2C1_SDA) | I<sup>2</sup>C Serial Data |
| SCL | 5 (I2C1_SCL) | I<sup>2</sup>C Serial Clock |
![The wio terminal connected to the ArduCam with jumper wires](../../../images/arducam-wio-terminal-connections.png)
The GND and VCC connections provide a 5V power supply to the ArduCam. It runs at 5V, unlike Grove sensors that run at 3V. This power comes directly from the USB-C connection that powers the device.
> 💁 For the SPI connection the pin labels on the ArduCam and the Wio Terminal pin names used in code still use the old naming convention. The instructions in this lesson will use the new naming convention, except when the pin names are used in code.
1. You can now connect the Wio Terminal to your computer.
## Program the device to connect to the camera
The Wio Terminal can now be programmed to use the attached ArduCAM camera.
### Task - program the device to connect to the camera
1. Create a brand new Wio Terminal project using PlatformIO. Call this project `fruit-quality-detector`. Add code in the `setup` function to configure the serial port.
1. Add code to connect to WiFi, with your WiFi credentials in a file called `config.h`. Don't forget to add the required libraries to the `platformio.ini` file.
1. The ArduCam library isn't available as an Arduino library that can be installed from the `platformio.ini` file. Instead it will need to be installed from source from their GitHub page. You can get this by either:
* Cloning the repo from [https://github.com/ArduCAM/Arduino.git](https://github.com/ArduCAM/Arduino.git)
* Heading to the repo on GitHub at [github.com/ArduCAM/Arduino](https://github.com/ArduCAM/Arduino) and downloading the code as a zip from the **Code** button
1. You only need the `ArduCAM` folder from this code. Copy the entire folder into the `lib` folder in your project.
> ⚠️ The entire folder must be copied, so the code is in `lib/ArduCAM`. Do not just copy the contents of the `ArduCAM` folder into the `lib` folder; copy the entire folder over.
1. The ArduCam library code works for multiple types of camera. The type of camera you want to use is configured using compiler flags - this keeps the built library as small as possible by removing code for cameras you are not using. To configure the library for the OV2640 camera, add the following to the end of the `platformio.ini` file:
```ini
build_flags =
-DARDUCAM_SHIELD_V2
-DOV2640_CAM
```
This sets 2 compiler flags:
* `ARDUCAM_SHIELD_V2` to tell the library the camera is on an Arduino board, known as a shield.
* `OV2640_CAM` to tell the library to only include code for the OV2640 camera
1. Add a header file into the `src` folder called `camera.h`. This will contain code to communicate with the camera. Add the following code to this file:
```cpp
#pragma once
#include <ArduCAM.h>
#include <Wire.h>
class Camera
{
public:
Camera(int format, int image_size) : _arducam(OV2640, PIN_SPI_SS)
{
_format = format;
_image_size = image_size;
}
bool init()
{
// Reset the CPLD
_arducam.write_reg(0x07, 0x80);
delay(100);
_arducam.write_reg(0x07, 0x00);
delay(100);
// Check if the ArduCAM SPI bus is OK
_arducam.write_reg(ARDUCHIP_TEST1, 0x55);
if (_arducam.read_reg(ARDUCHIP_TEST1) != 0x55)
{
return false;
}
// Change MCU mode
_arducam.set_mode(MCU2LCD_MODE);
uint8_t vid, pid;
// Check if the camera module type is OV2640
_arducam.wrSensorReg8_8(0xff, 0x01);
_arducam.rdSensorReg8_8(OV2640_CHIPID_HIGH, &vid);
_arducam.rdSensorReg8_8(OV2640_CHIPID_LOW, &pid);
// Fail unless the vid is 0x26 and the pid is 0x41 or 0x42
if ((vid != 0x26) || ((pid != 0x41) && (pid != 0x42)))
{
return false;
}
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
_arducam.OV2640_set_Light_Mode(Auto);
_arducam.OV2640_set_Special_effects(Normal);
delay(1000);
return true;
}
void startCapture()
{
_arducam.flush_fifo();
_arducam.clear_fifo_flag();
_arducam.start_capture();
}
bool captureReady()
{
return _arducam.get_bit(ARDUCHIP_TRIG, CAP_DONE_MASK);
}
bool readImageToBuffer(byte **buffer, uint32_t &buffer_length)
{
if (!captureReady()) return false;
// Get the image file length
uint32_t length = _arducam.read_fifo_length();
buffer_length = length;
if (length >= MAX_FIFO_SIZE)
{
return false;
}
if (length == 0)
{
return false;
}
// create the buffer
byte *buf = new byte[length];
uint8_t temp = 0, temp_last = 0;
int i = 0;
uint32_t buffer_pos = 0;
bool is_header = false;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
while (length--)
{
temp_last = temp;
temp = SPI.transfer(0x00);
// Read JPEG data from the FIFO. If the end of the JPEG (0xFF 0xD9) is
// found, write the final byte, deselect the camera and stop reading
if ((temp == 0xD9) && (temp_last == 0xFF))
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_HIGH();
break;
}
if (is_header == true)
{
//Write image data to buffer if not full
if (i < 256)
{
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
else
{
_arducam.CS_HIGH();
i = 0;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
_arducam.CS_LOW();
_arducam.set_fifo_burst();
}
}
else if ((temp == 0xD8) && (temp_last == 0xFF))
{
is_header = true;
buf[buffer_pos] = temp_last;
buffer_pos++;
i++;
buf[buffer_pos] = temp;
buffer_pos++;
i++;
}
}
_arducam.clear_fifo_flag();
_arducam.set_format(_format);
_arducam.InitCAM();
_arducam.OV2640_set_JPEG_size(_image_size);
// return the buffer
*buffer = buf;
return true;
}
private:
ArduCAM _arducam;
int _format;
int _image_size;
};
```
This is low-level code that configures the camera using the ArduCam libraries, and extracts the images when required using the SPI bus. This code is very specific to the ArduCam, so you don't need to worry about how it works at this point.
1. In `main.cpp`, add the following code beneath the other `include` statements to include this new file and create an instance of the camera class:
```cpp
#include "camera.h"
Camera camera = Camera(JPEG, OV2640_640x480);
```
This creates a `Camera` instance that saves images as JPEGs at a resolution of 640 by 480. Although higher resolutions are supported (up to 3280x2464), the image classifier works on much smaller images (227x227), so there is no need to capture and send larger images.
1. Add the following code below this to define a function to set up the camera:
```cpp
void setupCamera()
{
pinMode(PIN_SPI_SS, OUTPUT);
digitalWrite(PIN_SPI_SS, HIGH);
Wire.begin();
SPI.begin();
if (!camera.init())
{
Serial.println("Error setting up the camera!");
}
}
```
This `setupCamera` function starts by configuring the SPI chip select pin (`PIN_SPI_SS`) as an output and driving it high, deselecting the camera until data is requested from it. It then starts the I<sup>2</sup>C and SPI buses. Finally it initializes the camera class, which configures the camera sensor settings and ensures everything is wired up correctly.
1. Call this function at the end of the `setup` function:
```cpp
setupCamera();
```
1. Build and upload this code, and check the output from the serial monitor. If you see `Error setting up the camera!`, check the wiring to ensure the jumper wires connect the correct pins on the ArduCam to the correct GPIO pins on the Wio Terminal, and that all jumper cables are seated correctly.
## Capture an image
The Wio Terminal can now be programmed to capture an image when a button is pressed.
### Task - capture an image
1. Microcontrollers run your code continuously, so it's not easy to trigger something like taking a photo without reacting to a sensor. The Wio Terminal has buttons, so the camera can be set up to be triggered by one of the buttons. Add the following code to the end of the `setup` function to configure the C button (one of the three buttons on the top, the one closest to the power switch):
```cpp
pinMode(WIO_KEY_C, INPUT_PULLUP);
```
The `INPUT_PULLUP` mode enables an internal pull-up resistor on the pin, which essentially inverts the input. Normally a button would send a low signal when not pressed, and a high signal when pressed. When set to `INPUT_PULLUP`, the pin reads high when the button is not pressed, and low when it is pressed.
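As a minimal standalone sketch of this behavior (nothing here is specific to the camera project), reading a pull-up input looks like this:

```cpp
void setup()
{
    Serial.begin(9600);
    // Enable the internal pull-up resistor on the button pin
    pinMode(WIO_KEY_C, INPUT_PULLUP);
}

void loop()
{
    // The pin reads HIGH while the button is released,
    // and LOW while it is held down
    if (digitalRead(WIO_KEY_C) == LOW)
    {
        Serial.println("Button is pressed");
    }

    delay(100);
}
```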
1. Add an empty function to respond to the button press before the `loop` function:
```cpp
void buttonPressed()
{
}
```
1. Call this function in the `loop` method when the button is pressed:
```cpp
void loop()
{
if (digitalRead(WIO_KEY_C) == LOW)
{
buttonPressed();
delay(2000);
}
delay(200);
}
```
This code checks to see if the button is pressed. If it is pressed, the `buttonPressed` function is called, and the loop delays for 2 seconds. This allows time for the button to be released so that a long press isn't registered twice.
> 💁 The button on the Wio Terminal is set to `INPUT_PULLUP`, so it sends a high signal when not pressed, and a low signal when pressed.
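The fixed delay is the simplest approach. One common alternative, sketched here under the same pull-up assumptions, is edge detection - remembering the previous reading and reacting only when the pin transitions from high to low, so holding the button down doesn't retrigger:

```cpp
int lastButtonState = HIGH;  // a pull-up input reads HIGH when released

void loop()
{
    int buttonState = digitalRead(WIO_KEY_C);

    // Only react to the HIGH to LOW transition - the moment of the press
    if (buttonState == LOW && lastButtonState == HIGH)
    {
        buttonPressed();
    }

    lastButtonState = buttonState;
    delay(50);  // short delay to smooth out switch bounce
}
```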
1. Add the following code to the `buttonPressed` function:
```cpp
camera.startCapture();
while (!camera.captureReady())
delay(100);
Serial.println("Image captured");
byte *buffer;
uint32_t length;
if (camera.readImageToBuffer(&buffer, length))
{
Serial.print("Image read to buffer with length ");
Serial.println(length);
delete[] buffer;
}
```
This code begins the camera capture by calling `startCapture`. The camera hardware doesn't return the data the moment you request it. Instead, you send an instruction to start capturing, and the camera works in the background to capture the image, convert it to a JPEG, and store it in a local buffer on the camera itself. The `captureReady` call then checks whether the image capture has finished.
Once the capture has finished, the image data is copied from the buffer on the camera into a local buffer (an array of bytes) with the `readImageToBuffer` call. The length of the buffer is then sent to the serial monitor.
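The busy-wait loop in `buttonPressed` is fine for this lesson. For reference, the same start/poll/read protocol can also be driven from `loop` without blocking - a sketch of that pattern, reusing the `Camera` instance from this project:

```cpp
bool capturing = false;

void loop()
{
    // Start a capture on a button press if one isn't already in progress
    if (digitalRead(WIO_KEY_C) == LOW && !capturing)
    {
        camera.startCapture();
        capturing = true;
    }

    // Poll for completion on each pass through the loop
    if (capturing && camera.captureReady())
    {
        byte *buffer;
        uint32_t length;

        if (camera.readImageToBuffer(&buffer, length))
        {
            Serial.print("Image read to buffer with length ");
            Serial.println(length);
            delete[] buffer;
        }

        capturing = false;
    }

    delay(100);
}
```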
1. Build and upload this code, and check the output on the serial monitor. Every time you press the C button, an image will be captured and you will see the image size sent to the serial monitor.
```output
Connecting to WiFi..
Connected!
Image captured
Image read to buffer with length 9224
Image captured
Image read to buffer with length 11272
```
Different images will have different sizes. They are compressed as JPEGs, and the size of a JPEG file for a given resolution depends on what is in the image. For comparison, an uncompressed 640x480 image at 3 bytes per pixel would be 921,600 bytes, so the roughly 10KB captures above show how much the JPEG compression saves.
> 💁 You can find this code in the [code-camera/wio-terminal](code-camera/wio-terminal) folder.
😀 You have successfully captured images with your Wio Terminal.
## Optional - verify the camera images using an SD card
The easiest way to see the images that were captured by the camera is to write them to an SD card in the Wio Terminal and then view them on your computer. Do this step if you have a spare microSD card and a microSD card socket in your computer, or an adapter.
The Wio Terminal only supports microSD cards of up to 16GB in size. If you have a larger SD card then it won't work.
### Task - verify the camera images using an SD card
1. Format a microSD card as FAT32 or exFAT using the relevant applications on your computer (Disk Utility on macOS, File Explorer on Windows, or using command line tools in Linux)
1. Insert the microSD card into the socket just below the power switch. Make sure it is all the way in until it clicks and stays in place; you may need to push it in using a fingernail or a thin tool.
1. Add the following include statements at the top of the `main.cpp` file:
```cpp
#include "SD/Seeed_SD.h"
#include <Seeed_FS.h>
```
1. Add the following function before the `setup` function:
```cpp
void setupSDCard()
{
while (!SD.begin(SDCARD_SS_PIN, SDCARD_SPI))
{
Serial.println("SD Card Error");
}
}
```
This configures the SD card using the SPI bus, retrying until the card initializes and printing an error on each failed attempt.
1. Call this from the `setup` function:
```cpp
setupSDCard();
```
1. Add the following code above the `buttonPressed` function:
```cpp
int fileNum = 1;
void saveToSDCard(byte *buffer, uint32_t length)
{
char buff[16];
sprintf(buff, "%d.jpg", fileNum);
fileNum++;
File outFile = SD.open(buff, FILE_WRITE);
outFile.write(buffer, length);
outFile.close();
Serial.print("Image written to file ");
Serial.println(buff);
}
```
This defines a global variable for a file count. This is used for the image file names, so multiple images can be captured with incrementing file names - `1.jpg`, `2.jpg` and so on.
It then defines the `saveToSDCard` function, which takes a buffer of byte data and the length of the buffer. A file name is created using the file count, and the file count is incremented ready for the next file. The binary data from the buffer is then written to the file.
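Note that `fileNum` resets to 1 every time the device restarts, so images from a previous run will be overwritten. A sketch of one way around this - assuming the `SD.exists` call from the Arduino file system API that Seeed_FS follows - is to skip past names that already exist:

```cpp
void saveToSDCard(byte *buffer, uint32_t length)
{
    char buff[16];
    sprintf(buff, "%d.jpg", fileNum);

    // Skip over file names left behind by a previous run
    while (SD.exists(buff))
    {
        fileNum++;
        sprintf(buff, "%d.jpg", fileNum);
    }

    fileNum++;

    File outFile = SD.open(buff, FILE_WRITE);
    outFile.write(buffer, length);
    outFile.close();

    Serial.print("Image written to file ");
    Serial.println(buff);
}
```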
1. Call the `saveToSDCard` function from the `buttonPressed` function. The call should be **before** the buffer is deleted:
```cpp
Serial.print("Image read to buffer with length ");
Serial.println(length);
saveToSDCard(buffer, length);
delete[] buffer;
```
1. Build and upload this code, and check the output on the serial monitor. Every time you press the C button, an image will be captured and saved to the SD card.
```output
Connecting to WiFi..
Connected!
Image captured
Image read to buffer with length 16392
Image written to file 1.jpg
Image captured
Image read to buffer with length 14344
Image written to file 2.jpg
```
1. Power off the Wio Terminal, then eject the microSD card by pushing it in slightly and releasing; it will pop out. You may need to use a thin tool to do this. Plug the microSD card into your computer to view the images.
![A picture of a banana captured using the ArduCam](../../../images/banana-arducam.jpg)
> 💁 It may take a few images for the white balance of the camera to adjust itself. You will notice this based on the color of the images captured - the first few may look off-color. You can work around this by changing the code to capture a few images during setup that are then ignored, as sketched below.
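For example, a minimal sketch of that workaround, added to the end of the `setup` function - the number of throwaway frames here is a guess to tune for your camera:

```cpp
// Capture and discard a few frames so the auto white balance
// and exposure settle before the first real image
for (int i = 0; i < 3; i++)
{
    camera.startCapture();

    while (!camera.captureReady())
        delay(100);

    byte *buffer;
    uint32_t length;

    if (camera.readImageToBuffer(&buffer, length))
    {
        delete[] buffer;  // throw the warm-up image away
    }
}
```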
@@ -0,0 +1,3 @@
# Classify an image - Wio Terminal
Coming soon!
@@ -17,7 +17,7 @@ All the device code for Arduino is in C++. To complete all the assignments you w
 ### Arduino hardware
 * [Wio Terminal](https://www.seeedstudio.com/Wio-Terminal-p-4509.html)
-* Optional - USB-C cable or USB-A to USB-C adapter. The Wio terminal has a USB-C port and comes with a USB-C to USB-A cable. If your PC or Mac only has USB-C ports you will need a USB-C cable, or a USB-A to USB-C adapter.
+* *Optional* - USB-C cable or USB-A to USB-C adapter. The Wio terminal has a USB-C port and comes with a USB-C to USB-A cable. If your PC or Mac only has USB-C ports you will need a USB-C cable, or a USB-A to USB-C adapter.
 ### Arduino specific sensors and actuators
@@ -25,6 +25,7 @@ These are specific to using the Wio terminal Arduino device, and are not relevan
 * [ArduCam Mini 2MP Plus - OV2640](https://www.arducam.com/product/arducam-2mp-spi-camera-b0067-arduino/)
 * [Grove speaker plus](https://www.seeedstudio.com/Grove-Speaker-Plus-p-4592.html)
+* *Optional* - microSD Card 16GB or less for testing image capture, along with a connector to use the SD card with your computer if you don't have one built-in. **NOTE** - the Wio Terminal only supports SD cards up to 16GB, it does not support higher capacities.
 ## Raspberry Pi
@@ -34,8 +35,8 @@ All the device code for Raspberry Pi is in Python. To complete all the assignmen
 * [Raspberry Pi](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/)
 > 💁 Versions from the Pi 2B and above should work with the assignments in these lessons.
-* SD Card (You can get Raspberry Pi kits that come with an SD Card)
-* USB power supply (You can get Raspberry Pi 4 kits that come with a power supply). If you are using a Raspberry Pi 4 you need a USB-C power supply, earlier devices need a micro-USB power supply
+* microSD Card (You can get Raspberry Pi kits that come with a microSD Card), along with a connector to use the SD card with your computer if you don't have one built-in.
+* USB power supply (You can get Raspberry Pi 4 kits that come with a power supply). If you are using a Raspberry Pi 4 you need a USB-C power supply, earlier devices need a micro-USB power supply.
 ### Raspberry Pi specific sensors and actuators
Binary files changed (not shown in the diff):

* images/Diagrams.sketch (modified)
* images/arducam-wio-terminal-connections.png (new file, 449 KiB)
* images/arducam.png (new file, 283 KiB)
* images/banana-arducam.jpg (new executable file, 14 KiB)
* images/banana-picture-compare.png (new file, 566 KiB)
* images/cmos-sensor.png (new file, 32 KiB)
* images/counterfit-camera-options.png (new file, 143 KiB)
* images/counterfit-camera.png (new file, 45 KiB)
* images/counterfit-create-camera.png (new file, 20 KiB)
* images/custom-vision-banana-prediction.png (new file, 531 KiB)
* images/custom-vision-prediction-key-endpoint.png (new file, 117 KiB)
* images/custom-vision-publish-button.png (new file, 152 KiB)
* images/grove-base-hat-ribbon-cable.png (new file, 424 KiB)
* images/pi-camera-module.png (new file, 353 KiB)
* images/pi-camera-ribbon-cable.png (new file, 297 KiB)
* images/pi-camera-socket-ribbon-cable.png (new file, 402 KiB)
* images/pi-camera-upside-down.png (new file, 264 KiB)
* images/wio-terminal-pin-sticker.png (new file, 433 KiB)