From 9fd1ae31b353028547eb494a8ef3f1146b61388e Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Tue, 3 Sep 2019 12:55:36 -0500 Subject: [PATCH 01/12] Add Time Series Insights --- README.md | 47 +++++++++++++++++++++++++++++------------------ 1 file changed, 29 insertions(+), 18 deletions(-) diff --git a/README.md b/README.md index 72a53f0..0cc8c87 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ![](https://pbs.twimg.com/media/D_ANZnbWsAA4EVK.jpg) -The IntelligentEdgeHOL walks through the process of deploying an IoT Edge module to an Nvidia Jetson Nano device to allow for detection of objects in YouTube videos, RTSP streams, or an attached web cam. It achieves performance of around 10 frames per second for most video data. +The IntelligentEdgeHOL walks through the process of deploying an [IoT Edge](https://docs.microsoft.com/en-us/azure/iot-edge/about-iot-edge?WT.mc_id=github-IntelligentEdgeHOL-pdecarlo) module to an Nvidia Jetson Nano device to allow for detection of objects in YouTube videos, RTSP streams, or an attached web cam. It achieves performance of around 10 frames per second for most video data. The module ships as a fully self-contained docker image totalling around 5.5GB. This image contains all necessary dependencies including the [Nvidia Linux for Tegra Drivers](https://developer.nvidia.com/embedded/linux-tegra) for Jetson Nano, [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit), [NVIDIA CUDA Deep Neural Network library (CUDNN)](https://developer.nvidia.com/cudnn), [OpenCV](https://github.com/opencv/opencv), and [Darknet](https://github.com/AlexeyAB/darknet). For details on how the base images are built, see the included `docker` folder. 
@@ -114,7 +114,7 @@ Development Environment: Before we install IoT Edge, we need to install a few utitilies onto the Nvidia Jetson Nano device with: ``` -apt-get install -y curl nano python3-pip +sudo apt-get install -y curl nano python3-pip ``` ARM64 builds of IoT Edge are currently being offered in preview and will eventually go into General Availability. We will make use of the ARM64 builds to ensure that we get the best performance out of our IoT Edge solution. @@ -153,7 +153,7 @@ Once you have obtained a connection string, open the configuration file: sudo nano /etc/iotedge/config.yaml ``` -Find the provisioning section of the file and uncomment the manual provisioning mode. Update the value of device_connection_string with the connection string from your IoT Edge device. +Find the provisioning section of the file and uncomment the manual provisioning mode. Update the value of `device_connection_string` with the connection string from your IoT Edge device. ``` provisioning: @@ -168,6 +168,12 @@ provisioning: ``` +After you have updated the value of `device_connection_string`, restart the iotedge service with: + +``` +sudo service iotedge restart +``` + You can check the status of the IoT Edge Daemon using: ``` @@ -251,7 +257,7 @@ You can Open this Web Server using the IP Address or Host Name of the Nvidia Jet Example : - http://JetsonNano + http://jetson-nano-00 or @@ -309,20 +315,6 @@ WARNING: Assuming --restrict-filenames since file system encoding cannot encode Download Complete ``` -# Enable Object Detection by modifying the Module Twin - -While in VSCode, select the Azure IoT Hub Devices window, find your IoT Edge device and expand the modules sections until you see the `YoloModule` entry. - -Right click on `YoloModule` and select `Edit Module Twin` - -A new window name `azure-iot-module-twin.json` should open. 
- -Set the value of `properties -> desired -> Inference` to 1 - -Right click anywhere in the Editor window, then select `Update Module Twin` - - After a few moments the object detection feature will become enabled in the module. Now, if you reconnect to the video stream connected to in the previous step, you should see a bounding box and tags appearing around any detected objects in the video stream. - # Monitor the GPU utilization stats On the Jetson device, you can monitor the GPU utilization by installing `jetson-stats` with: @@ -363,3 +355,22 @@ Confidence Level threshold. The module ignores any inference results below this `VideoSource` : (string) Source of video stream/capture source + +# Pushing Detected Object Data into Azure Time Series Insights + +[Azure Time Series Insights](https://docs.microsoft.com/en-us/azure/time-series-insights/time-series-insights-overview?WT.mc_id=github-IntelligentEdgeHOL-pdecarlo) is built to store, visualize, and query large amounts of time series data, such as that generated by IoT devices. This service allows us to extract insights and build something very interesting. For example, imagine getting an alert when the mail truck is actually in the driveway, counting wildlife species using camera feeds from the National Park Service, or detecting that people are in a place they should not be and counting them over time! + +To begin, navigate to the resource group that contains the IoT Hub that was created in the previous steps. Add a new Time Series Insights environment into the Resource Group and select the `S1` tier for deployment. Be sure to place the Time Series Insights instance into the same geographical region that contains your IoT Hub to minimize latency and egress charges.
+ +![](https://hackster.imgix.net/uploads/attachments/939871/image_11Mggcf7p3.png?auto=compress) + +Next, choose a unique name for your Event Source and configure the Event Source to point to the IoT Hub you created in the previous steps. Set the `IoT Hub Access Policy Name` to "iothubowner", be sure to create a new IoT Hub Consumer Group named "tsi", and leave the `TimeStamp Property Name` empty as shown below: + +![](https://hackster.imgix.net/uploads/attachments/939872/image_4DsJXUVxvt.png?auto=compress) + +Complete the steps to "Review and Create" your deployment of Time Series Insights. Once the instance has finished deploying, you can navigate to the Time Series Insights explorer by viewing the newly deployed Time Series Insights Environment resource, selecting "Overview" and clicking the "Time Series Insights explorer URL". Once you have clicked the link, you may begin working with your detected object data. + +For details on how to explore and query your data in the Azure Time Series Insights explorer, you may consult the [Time Series Insights documentation](https://docs.microsoft.com/en-us/azure/time-series-insights/time-series-insights-explorer?WT.mc_id=github-IntelligentEdgeHOL-pdecarlo).
+ +![](https://hackster.imgix.net/uploads/attachments/939873/image_JWWcQszXsh.png?auto=compress) + From 6e6e3653133b5b70024495ec9faddc7bfa68622a Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Fri, 6 Sep 2019 11:17:17 -0500 Subject: [PATCH 02/12] version lock azure-iot-sdk-python --- modules/YoloModule/Dockerfile.arm64v8 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/modules/YoloModule/Dockerfile.arm64v8 b/modules/YoloModule/Dockerfile.arm64v8 index 7cc4d7e..7631300 100644 --- a/modules/YoloModule/Dockerfile.arm64v8 +++ b/modules/YoloModule/Dockerfile.arm64v8 @@ -13,7 +13,7 @@ WORKDIR /usr/sdk RUN python -m virtualenv --python=python3 env3 RUN source env3/bin/activate && pip install --upgrade pip && pip install -U setuptools wheel -RUN git clone --recursive --depth=1 https://github.com/Azure/azure-iot-sdk-python.git src +RUN git clone --recursive --branch release_2019_01_03 --depth=1 https://github.com/Azure/azure-iot-sdk-python.git src # Build for Python 3 RUN add-apt-repository ppa:deadsnakes/ppa From aed6d17b5c6be66098624d5d4d74907e855da26c Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Fri, 6 Sep 2019 11:47:18 -0500 Subject: [PATCH 03/12] Add support for Hololens Video Stream --- modules/YoloModule/app/VideoCapture.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/modules/YoloModule/app/VideoCapture.py b/modules/YoloModule/app/VideoCapture.py index 4dbbb78..b139a1c 100644 --- a/modules/YoloModule/app/VideoCapture.py +++ b/modules/YoloModule/app/VideoCapture.py @@ -70,7 +70,8 @@ class VideoCapture(object): def __IsRtsp(self, videoPath): try: - return 'rtsp:' in videoPath.lower() + if 'rtsp:' in videoPath.lower() or '/api/holographic/stream' in videoPath.lower(): + return True except ValueError: return False From f0563c9d61128b456ab6e79e0b91cf0bb79029e5 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Fri, 6 Sep 2019 12:33:52 -0500 Subject: [PATCH 04/12] Add instructions for HoloLens video stream --- README.md | 3 +++ 
1 file changed, 3 insertions(+) diff --git a/README.md b/README.md index 0cc8c87..5b76608 100644 --- a/README.md +++ b/README.md @@ -203,6 +203,9 @@ In VS Code, navigate to the `.env` file and modify the following value: For an rtsp stream, provide a link to the rtsp stream in the format, rtsp:// + To use a HoloLens video stream, see this [article](https://blog.kloud.com.au/2016/09/01/streaming-hololens-video-to-your-web-browser/) to enable a user account in the HoloLens Web Portal. Once this is configured, provide the URL to the HoloLens video streaming endpoint, ex: + https://[USERNAME]:[PASSWORD]@[HOLOLENS_IP]/api/holographic/stream/live_high.mp4?holo=true&pv=true&mic=true&loopback=true + If you have an attached USB web cam, provide the V4L device path (this can be obtained from the terminal with `ls -ltrh /dev/video*`, ex: /dev/video0 and open the included `deployment.template.json` and look for: ``` From 3a80614e71d4d5f12daccbebc4ca169c6e541962 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Wed, 11 Sep 2019 12:47:01 -0500 Subject: [PATCH 05/12] Let's use the Microsoft Musical by default --- .env | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.env b/.env index cf1566b..1a748a1 100644 --- a/.env +++ b/.env @@ -2,4 +2,4 @@ CONTAINER_REGISTRY_URL=toolboc CONTAINER_REGISTRY_USERNAME= CONTAINER_REGISTRY_PASSWORD= CONTAINER_MODULE_VERSION=latest -CONTAINER_VIDEO_SOURCE=https://www.youtube.com/watch?v=YZkp0qBBmpw +CONTAINER_VIDEO_SOURCE=https://www.youtube.com/watch?v=ZGeWNR8CWnA From 5958ca14adfd41fca0c3589fb6e13ca45079f585 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Wed, 11 Sep 2019 12:47:14 -0500 Subject: [PATCH 06/12] Fix Youtube Downloader --- modules/YoloModule/Dockerfile.arm64v8 | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/modules/YoloModule/Dockerfile.arm64v8 b/modules/YoloModule/Dockerfile.arm64v8 index 7631300..93fa589 100644 --- a/modules/YoloModule/Dockerfile.arm64v8 +++
b/modules/YoloModule/Dockerfile.arm64v8 @@ -43,8 +43,16 @@ RUN cp /usr/local/src/darknet/libdarknet.so /app/libdarknet.so COPY /build/requirements.txt ./ RUN pip3 install --upgrade pip RUN pip3 install --no-cache-dir -r requirements.txt -RUN pip3 install tornado==4.5.3 trollius && \ - pip3 install -U youtube-dl +RUN pip3 install tornado==4.5.3 trollius + +RUN apt-get update && \ + apt-get install -y --no-install-recommends zip pandoc && \ + rm -rf /var/lib/apt/lists/* + +RUN git clone --depth=1 https://github.com/ytdl-org/youtube-dl.git && \ + cd youtube-dl && \ + make && \ + make install ADD /app/ . From 14215f6779778c8c789eeddf55cbe69996b051f7 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Thu, 26 Sep 2019 09:40:21 -0500 Subject: [PATCH 07/12] Use Gap video for people detection --- .env | 2 +- config/deployment.arm32v7.json | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/.env b/.env index 1a748a1..7528b6a 100644 --- a/.env +++ b/.env @@ -2,4 +2,4 @@ CONTAINER_REGISTRY_URL=toolboc CONTAINER_REGISTRY_USERNAME= CONTAINER_REGISTRY_PASSWORD= CONTAINER_MODULE_VERSION=latest -CONTAINER_VIDEO_SOURCE=https://www.youtube.com/watch?v=ZGeWNR8CWnA +CONTAINER_VIDEO_SOURCE=https://www.youtube.com/watch?v=XJ735krOiPo diff --git a/config/deployment.arm32v7.json b/config/deployment.arm32v7.json index db4c2e6..ec0e934 100644 --- a/config/deployment.arm32v7.json +++ b/config/deployment.arm32v7.json @@ -43,7 +43,7 @@ "restartPolicy": "always", "settings": { "image": "toolboc/yolomodule:latest-arm32v7", - "createOptions": 
"{\"Env\":[\"VIDEO_PATH=https://www.youtube.com/watch?v=YZkp0qBBmpw\",\"VIDEO_WIDTH=0\",\"VIDEO_HEIGHT=0\",\"FONT_SCALE=0.8\"],\"HostConfig\":{\"Devices\":[{\"PathOnHost\":\"/dev/nvhost-ctrl\",\"PathInContainer\":\"/dev/nvhost-ctrl\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvhost-ctrl-gpu\",\"PathInContainer\":\"dev/nvhost-ctrl-gpu\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvhost-prof-gpu\",\"PathInContainer\":\"dev/nvhost-prof-gpu \",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvmap\",\"PathInContainer\":\"/dev/nvmap\",\"Cgroup", + "createOptions": "{\"Env\":[\"VIDEO_PATH=https://www.youtube.com/watch?v=XJ735krOiPo\",\"VIDEO_WIDTH=0\",\"VIDEO_HEIGHT=0\",\"FONT_SCALE=0.8\"],\"HostConfig\":{\"Devices\":[{\"PathOnHost\":\"/dev/nvhost-ctrl\",\"PathInContainer\":\"/dev/nvhost-ctrl\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvhost-ctrl-gpu\",\"PathInContainer\":\"dev/nvhost-ctrl-gpu\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvhost-prof-gpu\",\"PathInContainer\":\"dev/nvhost-prof-gpu \",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvmap\",\"PathInContainer\":\"/dev/nvmap\",\"Cgroup", "createOptions01": "Permissions\":\"rwm\"},{\"PathOnHost\":\"dev/nvhost-gpu\",\"PathInContainer\":\"dev/nvhost-gpu\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvhost-as-gpu\",\"PathInContainer\":\"/dev/nvhost-as-gpu\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/nvhost-vic\",\"PathInContainer\":\"/dev/nvhost-vic\",\"CgroupPermissions\":\"rwm\"},{\"PathOnHost\":\"/dev/tegra_dc_ctrl\",\"PathInContainer\":\"/dev/tegra_dc_ctrl\",\"CgroupPermissions\":\"rwm\"}],\"PortBindings\":{\"80/tcp\":[{\"HostPort\":\"80\"}]}}}" } } From 67702c76385df54b5980ba1bb20e114ade793520 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Thu, 26 Sep 2019 13:32:54 -0500 Subject: [PATCH 08/12] Add HOL materials --- README.md | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 
5b76608..e8600d9 100644 --- a/README.md +++ b/README.md @@ -91,13 +91,19 @@ hair drier toothbrush ``` +# Hands-On Lab Materials + +* [Presentation Deck](http://aka.ms/intelligentedgeholdeck) +* [Presentation Video](http://youtube.com) + # Getting Started This lab requires that you have the following: Hardware: * [Nvidia Jetson Nano Device](https://amzn.to/2WFE5zF) -* A cooling fan installed on or pointed at the Nvidia Jetson Nano device -* USB Webcam (Optional) +* A [cooling fan](https://amzn.to/2ZI2ki9) installed on or pointed at the Nvidia Jetson Nano device +* USB Webcam (Optional) + - Note: The added power consumption will require that your device is configured to use a [5V/4A barrel adapter](https://amzn.to/32DFsTq) as mentioned [here](https://www.jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/) with an [OpenCV-compatible camera](https://web.archive.org/web/20120815172655/http://opencv.willowgarage.com/wiki/Welcome/OS/). Development Environment: - [Visual Studio Code (VSCode)](https://code.visualstudio.com/Download?WT.mc_id=github-IntelligentEdgeHOL-pdecarlo) From 47832b0848f10168654a8d0ef844eb3b15125b0a Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Thu, 26 Sep 2019 17:56:47 -0500 Subject: [PATCH 09/12] Add Presentation Video --- README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index e8600d9..a6b2e7d 100644 --- a/README.md +++ b/README.md @@ -94,7 +94,9 @@ toothbrush # Hands-On Lab Materials * [Presentation Deck](http://aka.ms/intelligentedgeholdeck) -* [Presentation Video](http://youtube.com) +* [Presentation Video](https://1drv.ms/v/s!AlKHLaNiha1UlvxB-wowhJPAtORfRA?e=g4Cqon) + - Note: If you want to view a full walkthrough of this lab, skip to 38:00 + # Getting Started This lab requires that you have the following: From db81a3417466f833b1e6d689788e00336d294557 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Wed, 2 Oct 2019 15:26:19 -0500 Subject: [PATCH 10/12] Update README.md --- README.md | 85
++----------------------------------------------------- 1 file changed, 3 insertions(+), 82 deletions(-) diff --git a/README.md b/README.md index a6b2e7d..df0db3f 100644 --- a/README.md +++ b/README.md @@ -8,88 +8,9 @@ The module ships as a fully self-contained docker image totalling around 5.5GB. Object Detection is accomplished using YOLOv3-tiny with [Darknet](https://github.com/AlexeyAB/darknet) which supports detection of the following: -``` -person -bicycle -car -motorbike -aeroplane -bus -train -truck -boat -traffic light -fire hydrant -stop sign -parking meter -bench -bird -cat -dog -horse -sheep -cow -elephant -bear -zebra -giraffe -backpack -umbrella -handbag -tie -suitcase -frisbee -skis -snowboard -sports ball -kite -baseball bat -baseball glove -skateboard -surfboard -tennis racket -bottle -wine glass -cup -fork -knife -spoon -bowl -banana -apple -sandwich -orange -broccoli -carrot -hot dog -pizza -donut -cake -chair -sofa -pottedplant -bed -diningtable -toilet -tvmonitor -laptop -mouse -remote -keyboard -cell phone -microwave -oven -toaster -sink -refrigerator -book -clock -vase -scissors -teddy bear -hair drier -toothbrush -``` + +*person, bicycle, car, motorbike, aeroplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, sofa, pottedplant, bed, diningtable, toilet, tv monitor, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush* + # Hands-On Lab Materials From e7dc31cbc5e69e7197d0440a14ffab3dcd8b3986 Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Thu, 17
Oct 2019 19:21:01 -0500 Subject: [PATCH 11/12] Update README.md --- README.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index df0db3f..755d09c 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ![](https://pbs.twimg.com/media/D_ANZnbWsAA4EVK.jpg) -The IntelligentEdgeHOL walks through the process of deploying an [IoT Edge](https://docs.microsoft.com/en-us/azure/iot-edge/about-iot-edge?WT.mc_id=github-IntelligentEdgeHOL-pdecarlo) module to an Nvidia Jetson Nano device to allow for detection of objects in YouTube videos, RTSP streams, or an attached web cam. It achieves performance of around 10 frames per second for most video data. +The IntelligentEdgeHOL walks through the process of deploying an [IoT Edge](https://docs.microsoft.com/en-us/azure/iot-edge/about-iot-edge?WT.mc_id=github-IntelligentEdgeHOL-pdecarlo) module to an Nvidia Jetson Nano device to allow for detection of objects in YouTube videos, RTSP streams, Hololens Mixed Reality Capture, or an attached web cam. It achieves performance of around 10 frames per second for most video data. The module ships as a fully self-contained docker image totalling around 5.5GB. This image contains all necessary dependencies including the [Nvidia Linux for Tegra Drivers](https://developer.nvidia.com/embedded/linux-tegra) for Jetson Nano, [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit), [NVIDIA CUDA Deep Neural Network library (CUDNN)](https://developer.nvidia.com/cudnn), [OpenCV](https://github.com/opencv/opencv), and [Darknet](https://github.com/AlexeyAB/darknet). For details on how the base images are built, see the included `docker` folder. 
@@ -11,6 +11,9 @@ Object Detection is accomplished using YOLOv3-tiny with [Darknet](https://github *person, bicycle, car, motorbike, aeroplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, sofa, pottedplant, bed, diningtable, toilet, tv monitor, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush* +# Demos + +* [Yolo Object Detection with Nvidia Jetson and Hololens](https://www.youtube.com/watch?v=zxGcUmcl1qo&feature=youtu.be) # Hands-On Lab Materials From ee622fb69b961bde335e624f8991dc94999d289d Mon Sep 17 00:00:00 2001 From: Paul DeCarlo Date: Fri, 18 Oct 2019 15:00:45 -0500 Subject: [PATCH 12/12] Spell Check --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 755d09c..b792c0d 100644 --- a/README.md +++ b/README.md @@ -18,7 +18,7 @@ Object Detection is accomplished using YOLOv3-tiny with [Darknet](https://github # Hands-On Lab Materials * [Presentation Deck](http://aka.ms/intelligentedgeholdeck) -* [Presentation Video](https://1drv.ms/v/s!AlKHLaNiha1UlvxB-wowhJPAtORfRA?e=g4Cqon) +* [Presentation Video](http://aka.ms/intelligentedgeholvideo) - Note: If you want to view a full walkthrough of this lab, skip to 38:00 @@ -43,7 +43,7 @@ Development Environment: # Installing IoT Edge onto the Jetson Nano Device -Before we install IoT Edge, we need to install a few utitilies onto the Nvidia Jetson Nano device with: +Before we install IoT Edge, we need to install a few utilities onto the Nvidia Jetson Nano device with:
``` sudo apt-get install -y curl nano python3-pip