* Update custom-operation-topology.json

* Update person-count-operation-topology.json

* Update custom-operation-topology.json

* Update person-distance-operation-topology.json

* Update person-line-crossing-operation-topology.json

* Update person-zone-crossing-operation-topology.json

* Update topology.json

* Update topology.json

* Update topology.json

* Update custom-operation-topology.json

* Update person-count-operation-topology.json

* Update person-distance-operation-topology.json

* Update person-zone-crossing-operation-topology.json

* Update person-line-crossing-operation-topology.json

* Update topology.json

* Update topology.json

* Update topology.json

* Update readme.md

* Update topology.json

* Add files via upload

Updated block diagram

* Update topology.json

Update apiVersion

* Update topology.json

Updating apiVersion

* Update topology.json

Update apiVersion

* Update readme.md

Added NVIDIA

* Update readme.md

Live and batch pipeline topologies

* Update topology.json

Updating API version to 1.1

* Update topology.json

* Update topology.json

* Update topology.json

Validated - works with AVA 1.1

* Update topology.json

* updated version to 1.1

* Create test

* Add files via upload

* Delete test

* Create test

* Add files via upload

* Delete test

* Update custom-operation-topology.json

* Update readme.md

* Update topology.json

* Update topology.json

* Update readme.md

* Update topology.json

* Update topology.json

* Update topology.json

* Update README.md (#63)

Ignite updates

* Updated readme and support and apiversion (#65)

* Update readme.md (#67)

* Updating to API version 1.1 (#69)

Upgrading to 1.1 isn't giving me any errors at the moment

* Update lineCrossing to v1.1 (#70)

* Update readme.md (#75)

adding link to quick start in readme file https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/analyze-live-video-use-your-model-grpc?pivots=programming-language-csharp

* pipeline details (#77)

* Ignite update for ARM deployment (#66)

* Update readme.md

* Update name of dockerfile in README

* adding retailshop-15fps.mkv

* Update video-analyzer.deploy.json

* Ignite-ARM Template updates

Updated 3 files for Ignite release.  Needed to add a UAMI for IoT Hub, and then link the Video Analyzer account to use that UAMI.

* Update iot.deploy.json

* Update iot.deploy.json

Co-authored-by: Naiteek Sangani <42254991+naiteeks@users.noreply.github.com>
Co-authored-by: Jason Weijun Xian <66283214+jasonxian-msft@users.noreply.github.com>

Co-authored-by: Naiteek Sangani <42254991+naiteeks@users.noreply.github.com>
Co-authored-by: Anil Murching <anilmur@microsoft.com>
Co-authored-by: Keith Hill <Keith@microsoft.com>
Co-authored-by: Nandakishor Basavanthappa <nandab@microsoft.com>
Co-authored-by: russell-cooks <30545008+russell-cooks@users.noreply.github.com>
Co-authored-by: Jason Weijun Xian <66283214+jasonxian-msft@users.noreply.github.com>
This commit is contained in:
Nikita Pitliya 2021-11-01 17:23:45 -07:00 committed by GitHub
Parent 1300b24b5b
Commit 2db3c7c606
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
56 changed files: 1257 additions and 1075 deletions

View file

@@ -3,23 +3,25 @@
 ## Introduction
-[Azure Video Analyzer](https://azure.microsoft.com/products/video-analyzer) (AVA) provides a platform for you to build intelligent video applications that span the edge and the cloud. The platform is an evolution of [Live Video Analytics on IoT Edge](https://docs.microsoft.com/azure/media-services/live-video-analytics-edge/overview) that offers the capability to capture, record, analyze live video and publish the results (video and/or video analytics) to Azure services (in the cloud and/or the edge). The platform can be used to enhance IoT solutions with video analytics.
+[Azure Video Analyzer](https://azure.microsoft.com/products/video-analyzer) (AVA) provides a platform for you to build intelligent video applications that span the edge and the cloud. The platform consists of an IoT Edge module and an Azure service. It offers the capability to capture, record, and analyze live videos and publish the results, namely video and insights from video, to edge or cloud.
 ## Azure Video Analyzer on IoT Edge
-Video Analyzer is an [IoT Edge module](http://docs.microsoft.com/azure/marketplace/iot-edge-module) which offers functionality that can be combined with other Azure edge modules such as Stream Analytics on IoT Edge, Cognitive Services on IoT Edge as well as Azure services in the cloud such as Event Hub, Cognitive Services, etc. to build powerful hybrid (i.e. edge + cloud) applications. Video Analyzer is designed to be a pluggable platform, enabling you to plug video analysis edge modules (e.g. Cognitive services containers, custom edge modules built by you with open source machine learning models or custom models trained with your own data) and use them to analyze live video without worrying about the complexity of building and running a live video pipeline.
+Azure Video Analyzer is an [IoT Edge module](http://docs.microsoft.com/azure/marketplace/iot-edge-module) which offers functionality that can be combined with other Azure edge modules such as Stream Analytics on IoT Edge, Cognitive Services on IoT Edge as well as Azure services in the cloud such as Event Hub, Cognitive Services, etc. to build powerful hybrid (i.e. edge + cloud) applications. Video Analyzer is designed to be a pluggable platform, enabling you to plug video analysis edge modules (e.g. Cognitive services containers, custom edge modules built by you with open source machine learning models or custom models trained with your own data) and use them to analyze live video without worrying about the complexity of building and running a live video pipeline.
-With Video Analyzer, you can continue to use your CCTV cameras with your existing video management systems (VMS) and build video analytics apps independently. Video Analyzer can be used in conjunction with existing computer vision SDKs and toolkits to build cutting edge hardware accelerated live video analytics enabled IoT solutions. The diagram below illustrates this.
+With Video Analyzer, you can continue to use your CCTV cameras with your existing video management systems (VMS) and build video analytics apps independently. Video Analyzer can be used in conjunction with existing computer vision SDKs and toolkits to build cutting edge hardware accelerated live video analytics enabled IoT solutions. Apart from analyzing live video, the edge module also enables you to optionally record video locally on the edge or to the cloud, and to publish video insights to Azure services (on the edge and/or in the cloud). If video and video insights are recorded to the cloud, then the Video Analyzer cloud service can be used to manage them.
+The Video Analyzer cloud service can therefore be also used to enhance IoT solutions with VMS capabilities such as recording, playback, and exporting (generating video files that can be shared externally). It can also be used to build a cloud-native solution with the same capabilities, as shown in the diagram below, with cameras connecting directly to the cloud.
 <br>
 <p align="center">
-<img src="./images/AVA-product-diagram.png" title="AVA on IoT Edge"/>
+<img src="./images/ava-product-diagram-edge-cloud.png" title="Azure Video Analyzer - Edge module and service"/>
 </p>
 <br>
 ## This repo
-This repository is a starting point to learn about and engage in AVA open source projects. This repository is not an official AVA product support location, however, we will respond to issues filed here as best we can.
+This repository is a starting point to learn about and engage in Video Analyzer open source projects. This repository is not an official Video Analyzer product support location, however, we will respond to issues filed here as best we can.
 ## Contributing

View file

@@ -1,25 +1,21 @@
-# TODO: The maintainer of this repo has not yet edited this file
-**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
-- **No CSS support:** Fill out this template with information about how to file issues and get help.
-- **Yes CSS support:** Fill out an intake form at [aka.ms/spot](https://aka.ms/spot). CSS will work with/help you to determine next steps. More details also available at [aka.ms/onboardsupport](https://aka.ms/onboardsupport).
-- **Not sure?** Fill out a SPOT intake as though the answer were "Yes". CSS will help you decide.
-*Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
 # Support
 ## How to file issues and get help
-This project uses GitHub Issues to track bugs and feature requests. Please search the existing
-issues before filing new issues to avoid duplicates. For new issues, file your bug or
-feature request as a new Issue.
-For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
-FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
-CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
+### Troubleshoot
+Find resolution steps to deployment issues and common errors encountered in [troubleshoot guide](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/troubleshoot).
+### GitHub issues
+We use [GitHub Issues](https://github.com/Azure/video-analyzer/issues) to track bugs, questions, and feature requests. GitHub issues are free, but response time is not guaranteed.
+### Azure support tickets
+Customers with an [Azure support plan](https://azure.microsoft.com/support/options/) can open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+**We recommend this option if your problem requires immediate attention.**
 ## Microsoft Support Policy
-Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
+Support for **Azure Video Analyzer** is limited to the resources listed above.

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "InferencingWithGrpcExtension",
   "properties": {
     "description": "Inferencing using gRPC Extension",

View file

@@ -8,4 +8,5 @@ This folder contains a set of IoT Edge module extensions that can be used in con
 |---------|-------------|
 |customvision|Docker container to build a custom vision model |
 |intel|Docker containers to build Intel extension modules |
-|yolo|Docker containers to build yolov modules|
+|nvidia|Docker container to build NVIDIA DeepStream extension module |
+|yolo|Docker containers to build yolo modules|

Binary data
images/AVA-product-diagram.png

Binary file not shown.

Before

Width:  |  Height:  |  Size: 24 KiB

Binary data
images/ava-product-diagram-edge-cloud.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 41 KiB

View file

@@ -0,0 +1,13 @@
# Export a portion of a video archive to an MP4 file
This batch topology enables you to export a portion of a video archive to an MP4 file, using Azure Video Analyzer service. You can read more about the scenario in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/cloud/export-portion-of-video-as-mp4) article.
In the topology, you can see that it uses an encoder processor with a pre-built preset of "SingleLayer_1080p_H264_AAC", which is described in the [swagger file](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/videoanalyzer/resource-manager/Microsoft.Media/preview/2021-11-01-preview/PipelineTopologies.json).
The time sequence is parameterized, allowing you to choose a desired start and end timestamp for each clip you export for the specified input video resource. The maximum span of the time sequence (end timestamp - start timestamp) must be less than or equal to 24 hours.
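As a rough illustration (the video names and timestamps below are hypothetical, and the exact serialization of the range should be checked against the VideoSequenceAbsoluteTimeMarkers schema), the parameters supplied when running a pipeline job from this topology might look like:
```json
{
  "parameters": [
    { "name": "sourceVideoNameParameter", "value": "camera001-archive" },
    { "name": "timeSequenceParameter", "value": "[[\"2021-10-10T16:00:00Z\", \"2021-10-10T16:30:00Z\"]]" },
    { "name": "outputVideoNameParameter", "value": "camera001-export-30min" }
  ]
}
```
Here the requested span is 30 minutes, well under the 24-hour limit.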
<br>
<p align="center">
<img src="./topology.png" title="Export a portion of a video archive to an MP4 file"/>
</p>
<br>

View file

@@ -0,0 +1,69 @@
{
"Name": "export-from-video-archive",
"Kind": "batch",
"Sku": {
"Name": "Batch_S1"
},
"Properties": {
"description": "Export a portion or clip from a video archive and write to a file, after encoding it to 1080p resolution",
"parameters": [
{
"name": "sourceVideoNameParameter",
"type": "string",
"description": "Parameter for the name of the source or input video resource"
},
{
"name": "timeSequenceParameter",
"type": "string",
"description": "Parameter for the start and end timestamp values between which content is extracted and encoded to a file"
},
{
"name": "outputVideoNameParameter",
"type": "string",
"description": "Parameter for the name of the output video resource to which the MP4 file is written"
}
],
"sources": [
{
"@type": "#Microsoft.VideoAnalyzer.VideoSource",
"videoName": "${sourceVideoNameParameter}",
"timeSequences": {
"@type": "#Microsoft.VideoAnalyzer.VideoSequenceAbsoluteTimeMarkers",
"ranges": "${timeSequenceParameter}"
},
"name": "videoSource"
}
],
"processors": [
{
"@type": "#Microsoft.VideoAnalyzer.EncoderProcessor",
"preset": {
"@type": "#Microsoft.VideoAnalyzer.EncoderSystemPreset",
"name": "SingleLayer_1080p_H264_AAC"
},
"inputs": [
{
"nodeName": "videoSource"
}
],
"name": "encoderProcessor"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"videoName": "${outputVideoNameParameter}",
"videoCreationProperties": {
"title": "export-from-video-archive",
"description": "Sample for exporting portion of recorded video as an MP4 file"
},
"inputs": [
{
"nodeName": "encoderProcessor"
}
],
"name": "videoSink"
}
]
}
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 20 KiB

Binary data
pipelines/images/pipeline-in-cloud.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 9.6 KiB

View file

@@ -1,20 +0,0 @@
# Pipelines
Pipeline Topologies lets you define where media should be captured from, how it should be processed, and where the results should be delivered. A pipeline topology consists of source, processor, and sink nodes. The diagram below provides a graphical representation of a pipeline topology.
<br>
<p align="center">
<img src="./pipeline.png" title="pipeline topology"/>
</p>
<br>
A pipeline topology can have one or more of the following types of nodes:
* **Source nodes** enable capturing of media into the pipeline topology. Media in this context, conceptually, could be an audio stream, a video stream, a data stream, or a stream that has audio, video, and/or data combined together in a single stream.
* **Processor nodes** enable processing of media within the pipeline topology.
* **Sink nodes** enable delivering the processing results to services and apps outside the pipeline topology.
Azure Video Analyzer on IoT Edgeenables you to manage pipelines via two entities – “Pipeline Topology” and “Live Pipeline”. A pipeline enables you to define a blueprint of the pipeline topologies with parameters as placeholders for values. This pipeline defines what nodes are used in the pipeline topology, and how they are connected within it. A live pipeline enables you to provide values for parameters in a pipeline topology. The live pipeline can then be activated to enable the flow of data.
You can learn more about this in the [pipeline topologies](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/pipeline) concept page.

View file

@@ -0,0 +1,15 @@
# Capture, record, and stream live video from a camera behind a firewall
This topology enables you to capture, record, and stream live video from an RTSP-capable camera that is behind a firewall, using Azure Video Analyzer service. You can read more about the scenario in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/cloud/use-remote-device-adapter) article.
In the topology, you can see that it uses
* **segmentLength** of PT30S or 30 seconds, which means the service waits until at least 30 seconds of the video has been aggregated before it records it to Azure storage. Increasing the value of segmentLength has the benefit of further lowering your storage transaction costs. However, this will mean an increase in the delay before you can watch recorded content.
* **retentionPeriod** of 30 days, which means the service will periodically scan the video archive and delete content older than 30 days
The RTSP credentials, the IoT Hub device ID, and the video resource name (to which content will be archived) are all parametrized - meaning you would specify unique values for these for each unique camera when creating a live pipeline under this topology. The IoT Hub name is also parametrized since it cannot be restricted to a common string across all users.
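For example, a live pipeline created from this topology could supply values such as the following (a minimal sketch with made-up camera, hub, and device names; the surrounding request body and property names should be checked against the Video Analyzer service API):
```json
{
  "name": "lobby-camera-pipeline",
  "properties": {
    "topologyName": "record-camera-behind-firewall",
    "parameters": [
      { "name": "rtspUrlParameter", "value": "rtsp://192.168.1.15:554/stream1" },
      { "name": "rtspUsernameParameter", "value": "camerauser" },
      { "name": "rtspPasswordParameter", "value": "camerapassword" },
      { "name": "ioTHubNameParameter", "value": "contoso-video-hub" },
      { "name": "ioTHubDeviceIdParameter", "value": "lobby-camera-device" },
      { "name": "videoName", "value": "lobby-camera-archive" }
    ]
  }
}
```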
<br>
<p align="center">
<img src="./topology.png" title="Capture, record, and stream live video from a camera behind a firewall"/>
</p>
<br>

View file

@@ -0,0 +1,85 @@
{
"Name": "record-camera-behind-firewall",
"Kind": "live",
"Sku": {
"Name": "Live_S1"
},
"Properties": {
"description": "Sample pipeline topology for capture, record, and stream live video from a camera that is behind a firewall",
"parameters": [
{
"name": "rtspUrlParameter",
"type": "String",
"description": "RTSP source URL parameter"
},
{
"name": "rtspUsernameParameter",
"type": "SecretString",
"description": "RTSP source username parameter"
},
{
"name": "rtspPasswordParameter",
"type": "SecretString",
"description": "RTSP source password parameter"
},
{
"name": "ioTHubDeviceIdParameter",
"type": "String",
"description": "IoT Hub Device ID parameter"
},
{
"name": "ioTHubNameParameter",
"type": "String",
"description": "IoT Hub name parameter"
},
{
"name": "videoName",
"type": "String",
"description": "Video resource name parameter"
}
],
"sources": [
{
"@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource",
"transport": "tcp",
"endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrlParameter}",
"credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUsernameParameter}",
"password": "${rtspPasswordParameter}"
},
"tunnel": {
"@type": "#Microsoft.VideoAnalyzer.SecureIotDeviceRemoteTunnel",
"iotHubName" : "${ioTHubNameParameter}",
"deviceId": "${ioTHubDeviceIdParameter}"
}
}
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "${videoName}",
"videoCreationProperties": {
"title": "sample-record-camera-behind-firewall",
"description": "Sample video using CVR from private camera",
"segmentLength": "PT30S",
"retentionPeriod": "P30D"
},
"videoPublishingOptions": {
"disableRtspPublishing": "false",
"disableArchive": "false"
},
"inputs": [
{
"nodeName": "rtspSource"
}
]
}
]
}
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 9.3 KiB

View file

@@ -0,0 +1,15 @@
# Capture, record, and stream live video from a camera accessible over the internet
This topology enables you to capture, record, and stream live video from an RTSP-capable camera that is accessible over the internet, using Azure Video Analyzer service. You can read more about the scenario in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/cloud/get-started-livepipelines-portal) article.
In the topology, you can see that it uses
* **segmentLength** of PT0M30S or 30 seconds, which means the service waits until at least 30 seconds of the video has been aggregated before it records it to Azure storage. Increasing the value of segmentLength has the benefit of further lowering your storage transaction costs. However, this will mean an increase in the delay before you can watch recorded content.
* **retentionPeriod** of 30 days, which means the service will periodically scan the video archive and delete content older than 30 days
The RTSP credentials and the video resource name (to which content will be archived) are parametrized - meaning you would specify unique values for these for each unique camera when creating a live pipeline under this topology.
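For instance (an illustrative tweak, not part of the topology file below), raising the segment length and shortening retention in the video sink's videoCreationProperties trades a longer delay before recorded content is playable for lower storage transaction costs and a smaller archive:
```json
"videoCreationProperties": {
  "title": "sample-record-camera-open-internet",
  "description": "Recording with longer segments and a shorter retention window",
  "segmentLength": "PT5M",
  "retentionPeriod": "P7D"
}
```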
<br>
<p align="center">
<img src="./topology.png" title="Capture, record, and stream live video from a camera accessible over the internet"/>
</p>
<br>

View file

@@ -0,0 +1,70 @@
{
"Name": "record-camera-open-internet",
"Kind": "live",
"Sku": {
"Name": "Live_S1"
},
"Properties": {
"description": "Sample pipeline topology for capture, record, and stream live video from a camera that is accessible over the internet",
"parameters": [
{
"name": "rtspUrlParameter",
"type": "String",
"description": "RTSP source URL parameter"
},
{
"name": "rtspUsernameParameter",
"type": "SecretString",
"description": "RTSP source username parameter"
},
{
"name": "rtspPasswordParameter",
"type": "SecretString",
"description": "RTSP source password parameter"
},
{
"name": "videoName",
"type": "String",
"description": "Video resource name parameter"
}
],
"sources": [
{
"@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource",
"transport": "tcp",
"endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrlParameter}",
"credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUsernameParameter}",
"password": "${rtspPasswordParameter}"
}
}
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "${videoName}",
"videoCreationProperties": {
"title": "sample-record-camera-open-internet",
"description": "Sample video using CVR from public camera",
"segmentLength": "PT30S",
"retentionPeriod": "P30D"
},
"videoPublishingOptions": {
"disableRtspPublishing": "false",
"disableArchive": "false"
},
"inputs": [
{
"nodeName": "rtspSource"
}
]
}
]
}
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 9.3 KiB

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "AIComposition",
   "properties": {
     "description": "AI Composition runs 2 AI models of your choice",
@@ -249,4 +249,4 @@
       }
     ]
   }
 }

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "AudioVideo",
   "properties": {
     "description": "Record video clip including audio",

View file

@@ -1,6 +1,6 @@
 {
   "name": "CVRToVideoSink",
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "properties": {
     "description": "Continuous video recording to Azure Video Sink",
     "parameters": [

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "CVRWithGrpcExtension",
   "properties": {
     "description": "Continuous video recording and inferencing using gRPC Extension",
@@ -21,12 +21,6 @@
         "type": "String",
         "description": "rtsp Url"
       },
-      {
-        "name": "videoSinkName",
-        "type": "String",
-        "description": "video sink name",
-        "default": "sampleVideoSinkFromCVR-AVAEdge"
-      },
       {
         "name": "grpcExtensionAddress",
         "type": "String",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "CVRHttpExtensionObjectTracking",
   "properties": {
     "description": "Continuous video recording and inferencing using HTTP Extension",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "CVRWithHttpExtension",
   "properties": {
     "description": "Continuous video recording and inferencing using HTTP Extension",
@@ -44,12 +44,6 @@
         "type": "String",
         "description": "hub sink output name",
         "default": "inferenceOutput"
-      },
-      {
-        "name": "videoSinkName",
-        "type": "String",
-        "description": "video sink name",
-        "default": "sampleVideoSinkFromCVR-AVAEdge"
       }
     ],
     "sources": [

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "CVRWithMotionDetection",
   "properties": {
     "description": "Continuous video recording with Motion Detection",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVRtoVideoSinkByGrpcExtension",
   "properties": {
     "description": "Event-based video recording to Video Sink based on events from grpc extension",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVRtoVideoSinkByHttpExtension",
   "properties": {
     "description": "Event-based video recording to Video Sink based on events from http extension",

View file

@@ -2,7 +2,7 @@
 This topology enables you to record video clips to the local file system of the edge device whenever an external sensor sends a message to the pipeline topology. The diagram below shows a door sensor as an external module, but it could be any sensor or app sending the message.
-Note: This topology is similar to the [topology](../evr-hubMessage-video-sink/topology.json) where you record video clips only, when desired objects are detected. To trigger recording of video with this topology, you will need to send events to the IoT Hub message source node. An option to accomplish this is to deploy another IoT Edge module which generates events. You would then configure message routing in the IoT Edge deployment manifest to send those events from the latter module to the IoT Hub message source node in this topology.
+Note: This topology is similar to the [topology](../evr-hubMessage-video-sink/topology.json) where you record video clips to a video sink when desired objects are detected. To trigger recording of video with this topology, you will need to send events to the IoT Hub message source node. An option to accomplish this is to deploy another IoT Edge module which generates events. You would then configure message routing in the IoT Edge deployment manifest to send those events from the latter module to the IoT Hub message source node in this topology.
 <br>
 <p align="center">
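As a sketch of the routing described in the note above (the module names doorsensor and avaedge are placeholders; recordingTrigger matches the topology's default hubSourceInput), the $edgeHub routes section of the IoT Edge deployment manifest could contain:
```json
"routes": {
  "DoorSensorToVideoAnalyzer": "FROM /messages/modules/doorsensor/outputs/* INTO BrokeredEndpoint(\"/modules/avaedge/inputs/recordingTrigger\")"
}
```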

View file

@@ -1,110 +1,105 @@
{ {
"@apiVersion": "1.0", "@apiVersion": "1.1",
"name": "EVRtoFilesBasedOnHubMessages", "name": "EVRtoFilesBasedOnHubMessages",
"properties": { "properties": {
"description": "Event-based recording of video to files based on messages from via hub source", "description": "Event-based recording of video to files based on messages from via hub source",
"parameters": [ "parameters": [
{ {
"name": "rtspUserName", "name": "rtspUserName",
"type": "String", "type": "String",
"description": "rtsp source user name.", "description": "rtsp source user name.",
"default": "dummyUserName" "default": "dummyUserName"
}, },
{ {
"name": "rtspPassword", "name": "rtspPassword",
"type": "String", "type": "String",
"description": "rtsp source password.", "description": "rtsp source password.",
"default": "dummyPassword" "default": "dummyPassword"
}, },
{ {
"name": "rtspUrl", "name": "rtspUrl",
"type": "String", "type": "String",
"description": "rtsp Url" "description": "rtsp Url"
}, },
{ {
"name": "motionSensitivity", "name": "hubSourceInput",
"type": "String", "type": "String",
"description": "motion detection sensitivity", "description": "input name for hub source",
"default": "medium" "default": "recordingTrigger"
}, },
{ {
"name": "hubSourceInput", "name": "fileSinkOutputName",
"type": "String", "type": "String",
"description": "input name for hub source", "description": "file sink output name",
"default": "recordingTrigger" "default": "filesinkOutput"
},
{
"name": "fileSinkOutputName",
"type": "String",
"description": "file sink output name",
"default": "filesinkOutput"
}
],
"sources": [
{
"@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource",
"endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrl}",
"credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "{rtspUserName}",
"password": "{rtspPassword}"
}
} }
}, ],
{ "sources": [
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSource", {
"name": "iotMessageSource", "@type": "#Microsoft.VideoAnalyzer.RtspSource",
"hubInputName": "${hubSourceInput}" "name": "rtspSource",
} "endpoint": {
], "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"processors": [ "url": "${rtspUrl}",
{ "credentials": {
"@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor", "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"name": "signalGateProcessor", "username": "${rtspUserName}",
"inputs": [ "password": "${rtspPassword}"
{ }
"nodeName": "iotMessageSource"
},
{
"nodeName": "rtspSource",
"outputSelectors": [
{
"property": "mediaType",
"operator": "is",
"value": "video"
}
]
} }
], },
"activationEvaluationWindow": "PT1S", {
"activationSignalOffset": "PT0S", "@type": "#Microsoft.VideoAnalyzer.IotHubMessageSource",
"minimumActivationTime": "PT10S", "name": "iotMessageSource",
"maximumActivationTime": "PT30S" "hubInputName": "${hubSourceInput}"
} }
], ],
"sinks": [ "processors": [
{ {
"@type": "#Microsoft.VideoAnalyzer.FileSink", "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
"name": "fileSink", "name": "signalGateProcessor",
"inputs": [ "inputs": [
{ {
"nodeName": "signalGateProcessor", "nodeName": "iotMessageSource"
"outputSelectors": [ },
{ {
"property": "mediaType", "nodeName": "rtspSource",
"operator": "is", "outputSelectors": [
"value": "video" {
} "property": "mediaType",
] "operator": "is",
} "value": "video"
], }
"fileNamePattern": "sampleFilesFromEVR-${System.TopologyName}-${System.PipelineName}-${fileSinkOutputName}-${System.Runtime.DateTime}", ]
"maximumSizeMiB":"512", }
"baseDirectoryPath":"/var/media" ],
} "activationEvaluationWindow": "PT1S",
] "activationSignalOffset": "PT0S",
"minimumActivationTime": "PT10S",
"maximumActivationTime": "PT30S"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.FileSink",
"name": "fileSink",
"inputs": [
{
"nodeName": "signalGateProcessor",
"outputSelectors": [
{
"property": "mediaType",
"operator": "is",
"value": "video"
}
]
}
],
"fileNamePattern": "sampleFilesFromEVR-${System.TopologyName}-${System.PipelineName}-${fileSinkOutputName}-${System.Runtime.DateTime}",
"maximumSizeMiB":"512",
"baseDirectoryPath":"/var/media"
}
]
}
} }
}

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVRtoVideoSinkOnObjDetect",
   "properties": {
     "description": "Event-based video recording to Video Sink based on specific objects being detected by external inference engine",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVRToFilesOnMotionDetection",
   "properties": {
     "description": "Event-based video recording to local files based on motion events",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVRToFilesAndVideoSinkOnMotion",
   "properties": {
     "description": "Event-based video recording to local files based on motion events",

View file

@@ -1,6 +1,6 @@
 # Event-based video recording to Video Sink based on motion events
-This topology enables you to perform event-based recording. The video from an RTSP-capable camera is analyzed for the presence of motion. When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger the signal gate processor node which will send frames to the video sink node when motion is detected. As a result, new video clips are appended to the video sink containing clips where motion was detected. You can see how this topology is used in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/detect-motion-record-video-clips-media-services-quickstart) quickstart.
+This topology enables you to perform event-based recording. The video from an RTSP-capable camera is analyzed for the presence of motion. When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger the signal gate processor node which will send frames to the video sink node when motion is detected. As a result, new video clips are appended to the video sink containing clips where motion was detected. You can see how this topology is used in [this quickstart](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/detect-motion-record-video-clips-cloud).
 <br>
 <p align="center">

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVRtoVideoSinkOnMotionDetection",
   "properties": {
     "description": "Event-based video recording to Video Sink based on motion events",

View file

@@ -1,6 +1,6 @@
 # Analyzing live video using gRPC Extension to send images to the OpenVINO(TM) DL Streamer - Edge AI Extension module from Intel
-This topology enables you to run video analytics on a live feed from an RTSP-capable camera. The gRPC Extension allows you to create images at video frame rate from the camera that are converted to images, and sent to the [ OpenVINO(TM) DL Streamer - Edge AI Extension module from Intel](https://aka.ms/ava-intel-ovms) module. The results are then published to the IoT Edge Hub. You can see how this topology is used in [this](https://aka.ms/ava-intel-grpc) tutorial.
+This topology enables you to run video analytics on a live feed from an RTSP-capable camera. The gRPC Extension allows you to create images at video frame rate from the camera that are converted to images, and sent to the [ OpenVINO(TM) DL Streamer - Edge AI Extension module from Intel](https://aka.ms/ava-intel-ovms) module. The results are then published to the IoT Edge Hub. You can see how this topology is used in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/use-intel-grpc-video-analytics-serving-tutorial) tutorial.
 <br>
 <p align="center">

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "InferencingWithOpenVINOgRPC",
   "properties": {
     "description": "Analyzing live video using gRPCExtension to send video frames to the OpenVINO DL Streamer – Edge AI Extension module, from Intel",

Binary file not shown.

Before

Width:  |  Height:  |  Size: 13 KiB

After

Width:  |  Height:  |  Size: 62 KiB

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "InferencingWithHttpExtension",
   "properties": {
     "description": "Analyzing live video using HTTP Extension to send images to an external inference engine",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "InferencingWithOpenVINO",
   "properties": {
     "description": "Analyzing live video using HTTP Extension to send images to the OpenVINO Model Server – AI Extension module, from Intel",
@@ -98,4 +98,4 @@
       }
     ]
   }
 }

View file

@@ -1,7 +1,7 @@
 # Line crossing in live video
 This topology enables you to use line crossing and get events when objects cross that line in a live rtsp video feed. It uses computer vision model to detect objects in a subset of the frames in the live video feed. The object tracker node is used to track those objects in the frames and pass them through a line crossing node.
-The line crossing node comes in handy when you want to detect objects that cross the imaginary line and emit events. The events contain the direction (clockwise, counterclockwise) and a total counter per direction.
+The line crossing node comes in handy when you want to detect objects that cross the imaginary line and emit events. The events contain the direction (clockwise, counterclockwise) and a total counter per direction. You can see how this topology is used in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/use-line-crossing) tutorial.
 <br>
 <p align="center">

View file

@@ -1,6 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name":"LineCrossingWithHttpExtension",
   "properties":{
     "description":"Track Objects and use Line Crossing to emit events",

View file

@@ -1,6 +1,6 @@
 # Analyzing live video to detect motion and emit events
-The video from an RTSP-capable camera is analyzed for the presence of motion. When motion is detected, those events are published to the IoT Hub. You can see how this topology is used in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events-quickstart) quickstart.
+The video from an RTSP-capable camera is analyzed for the presence of motion. When motion is detected, those events are published to the IoT Hub. You can see how this topology is used in [this](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/detect-motion-emit-events-quickstart) quickstart.
 <br>
 <p align="center">

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "MotionDetection",
   "properties": {
     "description": "Analyzing live video to detect motion and emit events",
@@ -69,4 +69,4 @@
       }
     ]
   }
 }

View file

@@ -1,6 +1,6 @@
 # Event-based video recording to Video Sink based on motion events, and using gRPC Extension to send images to an external inference engine
-This topology enables you perform event-based recording. The video from an RTSP-capable camera is analyzed for the presence of motion. When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node which will send frames to an video sink node only when motion is present. As a result, new video clips are appended to the Video file in the cloud containing clips where motion was detected.
+This topology enables you perform event-based recording. The video from an RTSP-capable camera is analyzed for the presence of motion. When motion is detected, those events are published to the IoT Edge Hub. In addition, the motion events are used to trigger a signal gate processor node which will send frames to an video sink node only when motion is present. As a result, new video clips are appended to the Video file in the cloud containing clips where motion was detected. You can see how this topology is used in [this](https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/analyze-live-video-use-your-model-grpc?pivots=programming-language-csharp) quickstart.
 Additionally, this topology enables you to run video analytics only when motion is detected. Upon detecting motion, a subset of the video frames (as controlled by the frame rate filter processor node) are sent to an external AI inference engine. The results are then published to the IoT Edge Hub.

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVROnMotionPlusGrpcExtension",
   "properties": {
     "description": "Event-based video recording to Video Sink based on motion events, and using gRPC Extension to send images to an external inference engine",

View file

@@ -1,5 +1,5 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name": "EVROnMotionPlusHttpExtension",
   "properties": {
     "description": "Event-based video recording to Video Sink based on motion events, and using HTTP Extension to send images to an external inference engine",

View file

@@ -1,8 +1,8 @@
 {
-  "@apiVersion": "1.0",
+  "@apiVersion": "1.1",
   "name":"ObjectTrackingWithHttpExtension",
   "properties":{
-    "description":"My Description",
+    "description":"Track objects in a live video",
     "parameters":[
       {
         "name":"rtspUrl",
@@ -115,4 +115,4 @@
       }
     ]
   }
 }

View file

@@ -1,124 +1,150 @@
{ {
"@apiVersion": "1.0", "@apiVersion": "1.1",
"name": "InferencingWithCVExtensionCustom", "name": "PersonAttributesTopology",
"properties": { "properties": {
"description": "Analyzing Live Video with Computer Vision for Spatial Analysis", "description": "Analyzing Person Attributes for Spatial Analysis",
"parameters": [ "parameters": [
{ {
"name": "rtspUserName", "name": "rtspUserName",
"type": "String", "type": "String",
"description": "rtsp source user name.", "description": "rtsp source user name.",
"default": "dummyUserName" "default": "dummyUserName"
},
{
"name": "rtspPassword",
"type": "String",
"description": "rtsp source password.",
"default": "dummyPassword"
},
{
"name": "rtspUrl",
"type": "String",
"description": "rtsp Url"
},
{
"name": "grpcUrl",
"type": "String",
"description": "inferencing Url",
"default": "tcp://spatialanalysis:50051"
},
{
"name": "spatialanalysisusername",
"type": "String",
"description": "spatialanalysis endpoint username",
"default": "not-in-use"
},
{
"name": "spatialanalysispassword",
"type": "String",
"description": "spatialanalysis endpoint password",
"default": "not-in-use"
}
],
"sources": [
{
"@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource",
"transport": "tcp",
"endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrl}",
"credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUserName}",
"password": "${rtspPassword}"
}
}
}
],
"processors": [
{
"@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
"name": "computerVisionExtension",
"endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${grpcUrl}",
"credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${spatialanalysisusername}",
"password": "${spatialanalysispassword}"
}
}, },
"inputs": [ {
{ "name": "rtspPassword",
"nodeName": "rtspSource", "type": "String",
"outputSelectors": [ "description": "rtsp source password.",
{ "default": "dummyPassword"
"property": "mediaType", },
"operator": "is", {
"value": "video" "name": "rtspUrl",
} "type": "String",
] "description": "rtsp Url"
} },
], {
"operation": { "name": "grpcUrl",
"@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisCustomOperation", "type": "String",
"description": "inferencing Url",
"extensionConfiguration": "{\"version\":1,\"enabled\":true,\"platformloglevel\":\"info\",\"operationId\":\"cognitiveservices.vision.spatialanalysis-personcount.azurevideoanalyzer\",\"parameters\":{\"VISUALIZER_NODE_CONFIG\":\"{\\\"show_debug_video\\\":false}\",\"SINK_CONFIG\":\"{\\\"raw_output\\\":false}\",\"ENABLE_FACE_MASK_CLASSIFIER\":false,\"DETECTOR_NODE_CONFIG\":\"{\\\"gpu_index\\\":0,\\\"batch_size\\\":1}\",\"SPACEANALYTICS_CONFIG\":\"{\\\"zones\\\":[{\\\"name\\\":\\\"stairlanding\\\",\\\"polygon\\\":[[0.37,0.43],[0.48,0.42],[0.53,0.56],[0.34,0.57],[0.34,0.46],[0.37,0.43]],\\\"events\\\":[{\\\"type\\\":\\\"count\\\",\\\"config\\\":{\\\"trigger\\\":\\\"event\\\",\\\"output_frequency\\\":1,\\\"threshold\\\":16.0,\\\"focus\\\":\\\"bottom_center\\\"}}]}]}\",\"ENABLE_FACE_MASK_CLASSIFIER\":false},\"nodesLogLevel\":\"info\",\"platformLogLevel\":\"info\"}" "default": "tcp://spatialanalysis:50051"
},
{
"name": "spatialanalysisusername",
"type": "String",
"description": "spatialanalysis endpoint username",
"default": "not-in-use"
},
{
"name": "spatialanalysispassword",
"type": "String",
"description": "spatialanalysis endpoint password",
"default": "not-in-use"
} }
} ],
], "sources": [
"sinks": [ {
{ "@type": "#Microsoft.VideoAnalyzer.RtspSource",
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink", "name": "rtspSource",
"name": "hubSink", "transport": "tcp",
"hubOutputName": "inferenceOutput", "endpoint": {
"inputs": [ "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
{ "url": "${rtspUrl}",
"nodeName": "computerVisionExtension" "credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUserName}",
"password": "${rtspPassword}"
}
} }
] }
}, ],
{ "processors": [
"@type": "#Microsoft.VideoAnalyzer.VideoSink", {
"name": "videoSink", "@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
"videoName": "customoperation", "name": "computerVisionExtension",
"inputs": [ "endpoint": {
{ "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"nodeName": "rtspSource" "url": "${grpcUrl}",
"credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${spatialanalysisusername}",
"password": "${spatialanalysispassword}"
}
}, },
{ "inputs": [
"nodeName": "computerVisionExtension" {
} "nodeName": "rtspSource",
], "outputSelectors": [
"videoCreationProperties": { {
"title": "customoperation", "property": "mediaType",
"description": "Sample video using SA custom operation", "operator": "is",
"segmentLength": "PT30S" "value": "video"
}
]
}
],
"samplingOptions": {
"skipSamplesWithoutAnnotation": "false",
"maximumSamplesPerSecond": "15.0"
},
"image": {
"scale": {
"mode": "preserveAspectRatio",
"width": "640"
},
"format": {
"@type": "#Microsoft.VideoAnalyzer.ImageFormatRaw",
"pixelFormat": "bgr24"
}
},
"operation": {
"@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisCustomOperation",
"extensionConfiguration":"{\"version\":1,\"enabled\":true,\"platformloglevel\":\"info\",\"operationId\":\"cognitiveservices.vision.spatialanalysis-personcrossingpolygon.azurevideoanalyzer\",\"parameters\":{\"VISUALIZER_NODE_CONFIG\":\"{\\\"show_debug_video\\\":false}\",\"SINK_CONFIG\":\"{\\\"raw_output\\\":true}\",\"TRACKER_NODE_CONFIG\":\"{\\\"enable_speed\\\":true}\",\"DETECTOR_NODE_CONFIG\":\"{\\\"gpu_index\\\":0,\\\"batch_size\\\": 1,\\\"enable_orientation\\\":true}\",\"SPACEANALYTICS_CONFIG\":\"{\\\"zones\\\":[{\\\"name\\\":\\\"retailstore\\\",\\\"polygon\\\":[[0.0,0.0],[1.0,0.0],[1.0,1.0],[0.0,1.0],[0.0,0.0]],\\\"events\\\":[{\\\"type\\\":\\\"zonecrossing\\\",\\\"config\\\":{\\\"trigger\\\":\\\"event\\\",\\\"output_frequency\\\":1,\\\"threshold\\\":5.0,\\\"focus\\\":\\\"footprint\\\"}}]}]}\",\"ENABLE_FACE_MASK_CLASSIFIER\":false,\"ENABLE_PERSONATTRIBUTESCLASSIFICATION\": true,\"VISION_ENDPOINT_URL\": \"https:\/\/naiteekavasa.cognitiveservices.azure.com/\",\"VISION_MODEL_ID\": \"b836e47a-1d7f-4ba3-b7f6-b995d77824f6\",\"VISION_SUBSCRIPTION_KEY\": \"0805c5f75bfb40b0ba1132f782283266\",\"ENABLE_PERSONATTRIBUTES_TRAININGIMAGECOLLECTION\":false},\"nodesLogLevel\":\"verbose\",\"platformLogLevel\":\"info\"}"
}
}, },
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/", {
"localMediaCacheMaximumSizeMiB": "2048" "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
} "name": "signalGateProcessor",
] "inputs": [
{
"nodeName": "computerVisionExtension"
},
{
"nodeName": "rtspSource"
}
],
"activationEvaluationWindow": "PT1S",
"activationSignalOffset": "PT0S",
"minimumActivationTime": "PT30S",
"maximumActivationTime": "PT30S"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "Ignite-Spatial-Analysis-${System.TopologyName}",
"inputs": [
{
"nodeName": "signalGateProcessor"
}
],
"videoCreationProperties": {
"title": "Person Attributes in retail store using Spatial Analysis",
"description": "Person Attributes in retail store using Spatial Analysis",
"segmentLength": "PT30S"
},
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
"localMediaCacheMaximumSizeMiB": "2048"
}
]
}
} }
}

View file

@@ -1,143 +1,158 @@
{ {
"@apiVersion": "1.0", "@apiVersion": "1.1",
"name": "InferencingWithPersonCount", "name": "PersonCountTopology",
"properties": { "properties": {
"description": "Analyzing Live Video with Computer Vision for Spatial Analysis", "description": "Analyzing Live Video with Computer Vision for Spatial Analysis",
"parameters": [ "parameters": [
{ {
"name": "rtspUserName", "name": "rtspUserName",
"type": "String", "type": "String",
"description": "rtsp source user name.", "description": "rtsp source user name.",
"default": "dummyUserName" "default": "dummyUserName"
}, },
{ {
"name": "rtspPassword", "name": "rtspPassword",
"type": "String", "type": "String",
"description": "rtsp source password.", "description": "rtsp source password.",
"default": "dummyPassword" "default": "dummyPassword"
}, },
{ {
"name": "rtspUrl", "name": "rtspUrl",
"type": "String", "type": "String",
"description": "rtsp Url" "description": "rtsp Url"
}, },
{ {
"name": "grpcUrl", "name": "grpcUrl",
"type": "String", "type": "String",
"description": "inferencing Url", "description": "inferencing Url",
"default": "tcp://spatialanalysis:50051" "default": "tcp://spatialanalysis:50051"
}, },
{ {
"name": "spatialanalysisusername", "name": "spatialanalysisusername",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint username", "description": "spatialanalysis endpoint username",
"default": "not-in-use" "default": "not-in-use"
}, },
{ {
"name": "spatialanalysispassword", "name": "spatialanalysispassword",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint password", "description": "spatialanalysis endpoint password",
"default": "not-in-use" "default": "not-in-use"
} }
], ],
"sources": [ "sources": [
{ {
"@type": "#Microsoft.VideoAnalyzer.RtspSource", "@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource", "name": "rtspSource",
"transport": "tcp", "transport": "tcp",
"endpoint": { "endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrl}", "url": "${rtspUrl}",
"credentials": { "credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUserName}", "username": "${rtspUserName}",
"password": "${rtspPassword}" "password": "${rtspPassword}"
}
} }
} }
} ],
], "processors": [
"processors": [ {
{ "@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
"@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor", "name": "computerVisionExtension",
"name": "computerVisionExtension", "endpoint": {
"endpoint": { "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "url": "${grpcUrl}",
"url": "${grpcUrl}", "credentials": {
"credentials": { "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "username": "${spatialanalysisusername}",
"username": "${spatialanalysisusername}", "password": "${spatialanalysispassword}"
"password": "${spatialanalysispassword}" }
}
},
"inputs": [
{
"nodeName": "rtspSource",
"outputSelectors": [
{
"property": "mediaType",
"operator": "is",
"value": "video"
}
]
}
],
"samplingOptions": {
"skipSamplesWithoutAnnotation": "false",
"maximumSamplesPerSecond": "15.0"
}, },
"operation": { "inputs": [
"@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonCountOperation",
"zones": [
{ {
"zone": { "nodeName": "rtspSource",
"@type": "#Microsoft.VideoAnalyzer.NamedPolygonString", "outputSelectors": [
"polygon": "[[0.37,0.43],[0.48,0.42],[0.53,0.56],[0.34,0.57],[0.34,0.46]]",
"name": "stairlanding"
},
"events": [
{ {
"trigger": "interval", "property": "mediaType",
"outputFrequency": "1", "operator": "is",
"threshold": "5", "value": "video"
"focus": "footprint"
} }
] ]
} }
] ],
} "samplingOptions": {
} "skipSamplesWithoutAnnotation": "false",
], "maximumSamplesPerSecond": "15.0"
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "personcount-${System.TopologyName}-${System.PipelineName}",
"inputs": [
{
"nodeName": "rtspSource"
}, },
{ "operation": {
"nodeName": "computerVisionExtension" "@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonCountOperation",
"enableFaceMaskClassifier": "false",
"trackerNodeConfiguration": "{\"enable_speed\": true}",
"zones": [
{
"zone": {
"@type": "#Microsoft.VideoAnalyzer.NamedPolygonString",
"polygon": "[[0.0,0.0],[0.0,1.0],[1.0,1.0],[1.0,0.0],[0.0,0.0]]",
"name": "entireframe"
},
"events": [
{
"trigger": "event",
"outputFrequency": "1",
"threshold": "16",
"focus": "bottomCenter"
}
]
}
]
} }
],
"videoCreationProperties": {
"title": "personcount",
"description": "Sample video using SA custom operation",
"segmentLength": "PT30S"
}, },
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/", {
"localMediaCacheMaximumSizeMiB": "2048" "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
} "name": "signalGateProcessor",
] "inputs": [
{
"nodeName": "computerVisionExtension"
},
{
"nodeName": "rtspSource"
}
],
"activationEvaluationWindow": "PT1S",
"activationSignalOffset": "PT0S",
"minimumActivationTime": "PT30S",
"maximumActivationTime": "PT30S"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "personCount",
"inputs": [
{
"nodeName": "signalGateProcessor"
}
],
"videoCreationProperties": {
"title": "Person Counting",
"description": "Person Counting in Retail store",
"segmentLength": "PT30S"
},
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
"localMediaCacheMaximumSizeMiB": "2048"
}
]
}
} }
}

View file

@ -1,141 +1,160 @@
{ {
"@apiVersion": "1.0", "@apiVersion": "1.1",
"name": "InferencingWithPersonDistance", "name": "PersonDistanceTopology",
"properties": { "properties": {
"description": "Analyzing Live Video with Computer Vision for Spatial Analysis", "description": "Analyzing Live Video with Computer Vision for Spatial Analysis",
"parameters": [ "parameters": [
{ {
"name": "rtspUserName", "name": "rtspUserName",
"type": "String", "type": "String",
"description": "rtsp source user name.", "description": "rtsp source user name.",
"default": "dummyUserName" "default": "dummyUserName"
}, },
{ {
"name": "rtspPassword", "name": "rtspPassword",
"type": "String", "type": "String",
"description": "rtsp source password.", "description": "rtsp source password.",
"default": "dummyPassword" "default": "dummyPassword"
}, },
{ {
"name": "rtspUrl", "name": "rtspUrl",
"type": "String", "type": "String",
"description": "rtsp Url" "description": "rtsp Url"
}, },
{ {
"name": "grpcUrl", "name": "grpcUrl",
"type": "String", "type": "String",
"description": "inferencing Url", "description": "inferencing Url",
"default": "tcp://spatialanalysis:50051" "default": "tcp://spatialanalysis:50051"
}, },
{ {
"name": "spatialanalysisusername", "name": "spatialanalysisusername",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint username", "description": "spatialanalysis endpoint username",
"default": "not-in-use" "default": "not-in-use"
}, },
{ {
"name": "spatialanalysispassword", "name": "spatialanalysispassword",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint password", "description": "spatialanalysis endpoint password",
"default": "not-in-use" "default": "not-in-use"
} }
], ],
"sources": [ "sources": [
{ {
"@type": "#Microsoft.VideoAnalyzer.RtspSource", "@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource", "name": "rtspSource",
"transport": "tcp", "transport": "tcp",
"endpoint": { "endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrl}", "url": "${rtspUrl}",
"credentials": { "credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUserName}", "username": "${rtspUserName}",
"password": "${rtspPassword}" "password": "${rtspPassword}"
}
} }
} }
} ],
], "processors": [
"processors": [ {
{ "@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
"@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor", "name": "computerVisionExtension",
"name": "computerVisionExtension", "endpoint": {
"endpoint": { "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "url": "${grpcUrl}",
"url": "${grpcUrl}", "credentials": {
"credentials": { "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "username": "${spatialanalysisusername}",
"username": "${spatialanalysisusername}", "password": "${spatialanalysispassword}"
"password": "${spatialanalysispassword}" }
} },
}, "inputs": [
"inputs": [
{
"nodeName": "rtspSource",
"outputSelectors": [
{
"property": "mediaType",
"operator": "is",
"value": "video"
}
]
}
],
"operation": {
"@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonDistanceOperation",
"zones": [
{ {
"zone": { "nodeName": "rtspSource",
"@type": "#Microsoft.VideoAnalyzer.NamedPolygonString", "outputSelectors": [
"polygon": "[[0.37,0.43],[0.48,0.42],[0.53,0.56],[0.34,0.57],[0.34,0.46]]",
"name": "door"
},
"events": [
{ {
"trigger": "event", "property": "mediaType",
"outputFrequency": "1", "operator": "is",
"threshold": "48.00", "value": "video"
"focus": "bottomCenter",
"minimumDistanceThreshold": "1.5",
"maximumDistanceThreshold": "14.5"
} }
] ]
} }
] ],
} "samplingOptions": {
} "skipSamplesWithoutAnnotation": "false",
], "maximumSamplesPerSecond": "15.0"
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "persondistance",
"inputs": [
{
"nodeName": "rtspSource"
}, },
{ "operation": {
"nodeName": "computerVisionExtension" "@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonDistanceOperation",
"enableFaceMaskClassifier": "false",
"trackerNodeConfiguration": "{\"enable_speed\": true}",
"zones": [
{
"zone": {
"@type": "#Microsoft.VideoAnalyzer.NamedPolygonString",
"polygon": "[[0.0,0.0],[0.0,1.0],[1.0,1.0],[1.0,0.0],[0.0,0.0]]",
"name": "entireframe"
},
"events": [
{
"trigger": "event",
"outputFrequency": "1",
"threshold": "48.00",
"focus": "bottomCenter",
"minimumDistanceThreshold": "1.5",
"maximumDistanceThreshold": "14.5"
}
]
}
]
} }
],
"videoCreationProperties": {
"title": "persondistance",
"description": "Sample video using SA custom operation",
"segmentLength": "PT30S"
}, },
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/", {
"localMediaCacheMaximumSizeMiB": "2048" "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
} "name": "signalGateProcessor",
] "inputs": [
{
"nodeName": "computerVisionExtension"
},
{
"nodeName": "rtspSource"
}
],
"activationEvaluationWindow": "PT1S",
"activationSignalOffset": "PT0S",
"minimumActivationTime": "PT30S",
"maximumActivationTime": "PT30S"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "persondistance",
"inputs": [
{
"nodeName": "signalGateProcessor"
}
],
"videoCreationProperties": {
"title": "Person Distance",
"description": "Sample video using SA Person Distance operation",
"segmentLength": "PT30S"
},
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
"localMediaCacheMaximumSizeMiB": "2048"
}
]
}
} }
}

View file

@ -1,137 +1,156 @@
{ {
"@apiVersion": "1.0", "@apiVersion": "1.1",
"name": "InferencingWithPersonCrossLine", "name": "PersonCrossingLineTopology",
"properties": { "properties": {
"description": "Analyzing Live Video with Computer Vision for Spatial Analysis", "description": "Analyzing Live Video with Computer Vision for Spatial Analysis",
"parameters": [ "parameters": [
{ {
"name": "rtspUserName", "name": "rtspUserName",
"type": "String", "type": "String",
"description": "rtsp source user name.", "description": "rtsp source user name.",
"default": "dummyUserName" "default": "dummyUserName"
}, },
{ {
"name": "rtspPassword", "name": "rtspPassword",
"type": "String", "type": "String",
"description": "rtsp source password.", "description": "rtsp source password.",
"default": "dummyPassword" "default": "dummyPassword"
}, },
{ {
"name": "rtspUrl", "name": "rtspUrl",
"type": "String", "type": "String",
"description": "rtsp Url" "description": "rtsp Url"
}, },
{ {
"name": "grpcUrl", "name": "grpcUrl",
"type": "String", "type": "String",
"description": "inferencing Url", "description": "inferencing Url",
"default": "tcp://spatialanalysis:50051" "default": "tcp://spatialanalysis:50051"
}, },
{ {
"name": "spatialanalysisusername", "name": "spatialanalysisusername",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint username", "description": "spatialanalysis endpoint username",
"default": "not-in-use" "default": "not-in-use"
}, },
{ {
"name": "spatialanalysispassword", "name": "spatialanalysispassword",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint password", "description": "spatialanalysis endpoint password",
"default": "not-in-use" "default": "not-in-use"
} }
], ],
"sources": [ "sources": [
{ {
"@type": "#Microsoft.VideoAnalyzer.RtspSource", "@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource", "name": "rtspSource",
"transport": "tcp", "transport": "tcp",
"endpoint": { "endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrl}", "url": "${rtspUrl}",
"credentials": { "credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUserName}", "username": "${rtspUserName}",
"password": "${rtspPassword}" "password": "${rtspPassword}"
}
} }
} }
} ],
], "processors": [
"processors": [ {
{ "@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
"@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor", "name": "computerVisionExtension",
"name": "computerVisionExtension", "endpoint": {
"endpoint": { "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "url": "${grpcUrl}",
"url": "${grpcUrl}", "credentials": {
"credentials": { "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "username": "${spatialanalysisusername}",
"username": "${spatialanalysisusername}", "password": "${spatialanalysispassword}"
"password": "${spatialanalysispassword}" }
} },
}, "inputs": [
"inputs": [
{
"nodeName": "rtspSource",
"outputSelectors": [
{
"property": "mediaType",
"operator": "is",
"value": "video"
}
]
}
],
"operation": {
"@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonLineCrossingOperation",
"lines": [
{ {
"line": { "nodeName": "rtspSource",
"@type": "#Microsoft.VideoAnalyzer.NamedLineString", "outputSelectors": [
"line": "[[0.46,0.24],[0.59,0.55]]",
"name": "door"
},
"events": [
{ {
"threshold": "10", "property": "mediaType",
"focus": "footprint" "operator": "is",
"value": "video"
} }
] ]
} }
] ],
} "samplingOptions": {
} "skipSamplesWithoutAnnotation": "false",
], "maximumSamplesPerSecond": "15.0"
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "personlinecrossing",
"inputs": [
{
"nodeName": "rtspSource"
}, },
{ "operation": {
"nodeName": "computerVisionExtension" "@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonLineCrossingOperation",
"enableFaceMaskClassifier": "false",
"trackerNodeConfiguration": "{\"enable_speed\": true}",
"lines": [
{
"line": {
"@type": "#Microsoft.VideoAnalyzer.NamedLineString",
"line": "[[0.46,0.24],[0.59,0.55]]",
"name": "line"
},
"events": [
{
"threshold": "10",
"focus": "footprint"
}
]
}
]
} }
],
"videoCreationProperties": {
"title": "personlinecrossing",
"description": "Sample video using SA custom operation",
"segmentLength": "PT30S"
}, },
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/", {
"localMediaCacheMaximumSizeMiB": "2048" "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
} "name": "signalGateProcessor",
] "inputs": [
{
"nodeName": "computerVisionExtension"
},
{
"nodeName": "rtspSource"
}
],
"activationEvaluationWindow": "PT1S",
"activationSignalOffset": "PT0S",
"minimumActivationTime": "PT30S",
"maximumActivationTime": "PT30S"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "personlinecrossing",
"inputs": [
{
"nodeName": "signalGateProcessor"
}
],
"videoCreationProperties": {
"title": "personlinecrossing",
"description": "Sample video using SA custom operation",
"segmentLength": "PT30S"
},
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
"localMediaCacheMaximumSizeMiB": "2048"
}
]
}
} }
}

View file

@ -1,139 +1,157 @@
{ {
"@apiVersion": "1.0", "@apiVersion": "1.1",
"name": "InferencingWithPersonCrossZone", "name": "PersonZoneCrossingTopology",
"properties": { "properties": {
"description": "Analyzing Live Video with Computer Vision for Spatial Analysis", "description": "Analyzing Live Video with Computer Vision for Spatial Analysis",
"parameters": [ "parameters": [
{ {
"name": "rtspUserName", "name": "rtspUserName",
"type": "String", "type": "String",
"description": "rtsp source user name.", "description": "rtsp source user name.",
"default": "dummyUserName" "default": "dummyUserName"
}, },
{ {
"name": "rtspPassword", "name": "rtspPassword",
"type": "String", "type": "String",
"description": "rtsp source password.", "description": "rtsp source password.",
"default": "dummyPassword" "default": "dummyPassword"
}, },
{ {
"name": "rtspUrl", "name": "rtspUrl",
"type": "String", "type": "String",
"description": "rtsp Url" "description": "rtsp Url"
}, },
{ {
"name": "grpcUrl", "name": "grpcUrl",
"type": "String", "type": "String",
"description": "inferencing Url", "description": "inferencing Url",
"default": "tcp://spatialanalysis:50051" "default": "tcp://spatialanalysis:50051"
}, },
{ {
"name": "spatialanalysisusername", "name": "spatialanalysisusername",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint username", "description": "spatialanalysis endpoint username",
"default": "not-in-use" "default": "not-in-use"
}, },
{ {
"name": "spatialanalysispassword", "name": "spatialanalysispassword",
"type": "String", "type": "String",
"description": "spatialanalysis endpoint password", "description": "spatialanalysis endpoint password",
"default": "not-in-use" "default": "not-in-use"
} }
], ],
"sources": [ "sources": [
{ {
"@type": "#Microsoft.VideoAnalyzer.RtspSource", "@type": "#Microsoft.VideoAnalyzer.RtspSource",
"name": "rtspSource", "name": "rtspSource",
"transport": "tcp", "transport": "tcp",
"endpoint": { "endpoint": {
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"url": "${rtspUrl}", "url": "${rtspUrl}",
"credentials": { "credentials": {
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"username": "${rtspUserName}", "username": "${rtspUserName}",
"password": "${rtspPassword}" "password": "${rtspPassword}"
}
} }
} }
} ],
], "processors": [
"processors": [ {
{ "@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor",
"@type": "#Microsoft.VideoAnalyzer.CognitiveServicesVisionProcessor", "name": "computerVisionExtension",
"name": "computerVisionExtension", "endpoint": {
"endpoint": { "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
"@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint", "url": "${grpcUrl}",
"url": "${grpcUrl}", "credentials": {
"credentials": { "@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials",
"@type": "#Microsoft.VideoAnalyzer.UsernamePasswordCredentials", "username": "${spatialanalysisusername}",
"username": "${spatialanalysisusername}", "password": "${spatialanalysispassword}"
"password": "${spatialanalysispassword}" }
} },
}, "inputs": [
"inputs": [
{
"nodeName": "rtspSource",
"outputSelectors": [
{
"property": "mediaType",
"operator": "is",
"value": "video"
}
]
}
],
"operation": {
"@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonZoneCrossingOperation",
"enableFaceMaskClassifier": "true",
"zones": [
{ {
"zone": { "nodeName": "rtspSource",
"@type": "#Microsoft.VideoAnalyzer.NamedPolygonString", "outputSelectors": [
"polygon": "[[0.0,0.0],[0.0,0.7],[0.7,0.7],[0.7,0.0]]",
"name": "door"
},
"events": [
{ {
"eventType": "zoneDwellTime", "property": "mediaType",
"threshold": "16.00", "operator": "is",
"focus": "footprint" "value": "video"
} }
] ]
} }
] ],
} "samplingOptions": {
} "skipSamplesWithoutAnnotation": "false",
], "maximumSamplesPerSecond": "15.0"
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "personzonecrossing",
"inputs": [
{
"nodeName": "rtspSource"
}, },
{ "operation": {
"nodeName": "computerVisionExtension" "@type": "#Microsoft.VideoAnalyzer.SpatialAnalysisPersonZoneCrossingOperation",
"enableFaceMaskClassifier": "false",
"trackerNodeConfiguration": "{\"enable_speed\": true}",
"zones": [
{
"zone": {
"@type": "#Microsoft.VideoAnalyzer.NamedPolygonString",
"polygon": "[[0.0,0.0],[0.0,1.0],[1.0,1.0],[1.0,0.0],[0.0,0.0]]",
"name": "retailstore"
},
"events": [
{
"eventType": "zonecrossing",
"threshold": "5",
"focus": "footprint"
}
]
}
]
} }
],
"videoCreationProperties": {
"title": "personzonecrossing",
"description": "Sample video using SA custom operation",
"segmentLength": "PT30S"
}, },
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/", {
"localMediaCacheMaximumSizeMiB": "2048" "@type": "#Microsoft.VideoAnalyzer.SignalGateProcessor",
} "name": "signalGateProcessor",
] "inputs": [
{
"nodeName": "computerVisionExtension"
},
{
"nodeName": "rtspSource"
}
],
"activationEvaluationWindow": "PT1S",
"activationSignalOffset": "PT0S",
"minimumActivationTime": "PT30S",
"maximumActivationTime": "PT30S"
}
],
"sinks": [
{
"@type": "#Microsoft.VideoAnalyzer.IotHubMessageSink",
"name": "hubSink",
"hubOutputName": "inferenceOutput",
"inputs": [
{
"nodeName": "computerVisionExtension"
}
]
},
{
"@type": "#Microsoft.VideoAnalyzer.VideoSink",
"name": "videoSink",
"videoName": "personzonecrossing",
"inputs": [
{
"nodeName": "signalGateProcessor"
}
],
"videoCreationProperties": {
"title": "People dwelling in zone using Spatial Analysis within Retail Store",
"description": "Sample video using SA custom operation",
"segmentLength": "PT30S"
},
"localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
"localMediaCacheMaximumSizeMiB": "2048"
}
]
}
} }
}

View file

@ -1,12 +1,6 @@
# Pipeline Topologies

-Pipeline Topologies lets you define where media should be captured from, how it should be processed, and where the results should be delivered. A pipeline topology consists of source, processor, and sink nodes. The diagram below provides a graphical representation of a pipeline topology.
-<br>
-<p align="center">
-  <img src="./images/pipeline.png" title="pipeline topology"/>
-</p>
-<br>
+A pipeline topology enables you to describe how live video or recorded video should be processed and analyzed for your custom needs through a set of interconnected nodes. Video Analyzer supports two kinds of topologies: live and batch. Live topologies, as the name suggests, are used with live video from cameras. Batch topologies are used to process recorded videos.

A pipeline topology can have one or more of the following types of nodes:

@ -14,6 +8,18 @@ A pipeline topology can have one or more of the following types of nodes:
* **Processor nodes** enable processing of media within the pipeline topology.
* **Sink nodes** enable delivering the processing results to services and apps outside the pipeline topology.

-Azure Video Analyzer on IoT Edge enables you to manage pipelines via two entities – “Pipeline Topology” and “Live Pipeline”. A pipeline enables you to define a blueprint of the pipeline topologies with parameters as placeholders for values. This pipeline defines what nodes are used in the pipeline topology, and how they are connected within it. A live pipeline enables you to provide values for parameters in a pipeline topology. The live pipeline can then be activated to enable the flow of data.
-You can learn more about this in the [pipeline topologies](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/pipeline) concept page.
+Pipelines can be defined and instantiated at the edge for on-premises video processing, or in the cloud. The diagrams below provide graphical representations of such pipelines.
+<br>
+<p align="center">
+  <img src="./images/pipeline.png" title="pipeline topology on the edge"/>
+</p>
+<br>
+<p align="center">
+  <img src="./images/pipeline-in-cloud.png" title="pipeline topology in cloud service"/>
+</p>
+<br>
+
+You can create different topologies for different scenarios by selecting which nodes are in the topology and how they are connected, with parameters as placeholders for values. A pipeline is an individual instance of a specific pipeline topology; it is where media is actually processed. Pipelines can be associated with individual cameras or recorded videos through user-defined parameters declared in the pipeline topology. Instances of a live topology are called live pipelines, and instances of a batch topology are referred to as pipeline jobs.
+
+You can learn more about this in the [Pipeline](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/pipeline) concept page.
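
As a concrete illustration of these concepts, the sketch below shows the minimal skeleton shared by the live topologies in this repository: a parameter used as a placeholder, an RTSP source node, and a video sink node (processor nodes are omitted for brevity). The topology name and video name here are illustrative placeholders, not files from this repository.

```json
{
  "@apiVersion": "1.1",
  "name": "SampleMinimalTopology",
  "properties": {
    "description": "Minimal sketch: record video from an RTSP camera",
    "parameters": [
      {
        "name": "rtspUrl",
        "type": "String",
        "description": "rtsp Url"
      }
    ],
    "sources": [
      {
        "@type": "#Microsoft.VideoAnalyzer.RtspSource",
        "name": "rtspSource",
        "endpoint": {
          "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
          "url": "${rtspUrl}"
        }
      }
    ],
    "sinks": [
      {
        "@type": "#Microsoft.VideoAnalyzer.VideoSink",
        "name": "videoSink",
        "videoName": "sample-recording",
        "inputs": [
          { "nodeName": "rtspSource" }
        ],
        "localMediaCachePath": "/var/lib/videoanalyzer/tmp/",
        "localMediaCacheMaximumSizeMiB": "2048"
      }
    ]
  }
}
```

A live pipeline created from this topology would supply the actual value for `rtspUrl` before being activated.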

View file

@ -30,16 +30,48 @@
"defaultValue": { "defaultValue": {
"sample": "azure-video-analyzer" "sample": "azure-video-analyzer"
} }
},
"roleNameGuid": {
"type": "string",
"defaultValue": "[newGuid()]",
"metadata": {
"description": "A new GUID used to identify the role assignment"
}
} }
}, },
"variables": { "variables": {
"newHubName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]", "newHubName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]",
"hubName": "[if(empty(parameters('hubName')),variables('newHubName'),parameters('hubName'))]", "hubName": "[if(empty(parameters('hubName')),variables('newHubName'),parameters('hubName'))]",
"hubApiVersion": "2019-11-04", "hubApiVersion": "2021-03-31",
"hubResourceGroup": "[if(empty(parameters('hubResourceGroup')),resourceGroup().name,parameters('hubResourceGroup'))]", "hubResourceGroup": "[if(empty(parameters('hubResourceGroup')),resourceGroup().name,parameters('hubResourceGroup'))]",
"hubResourceId": "[resourceId(variables('hubResourceGroup'),'Microsoft.Devices/IotHubs',variables('hubName'))]" "hubResourceId": "[resourceId(variables('hubResourceGroup'),'Microsoft.Devices/IotHubs',variables('hubName'))]",
"hubIdentityName": "[concat(variables('hubName'), '-identity')]"
}, },
"resources": [ "resources": [
{
"type": "Microsoft.ManagedIdentity/userAssignedIdentities",
"name": "[variables('hubIdentityName')]",
"apiVersion": "2018-11-30",
"location": "[resourceGroup().location]"
},
{
"scope": "[concat('Microsoft.Devices/IotHubs/',variables('hubName'))]",
"type": "Microsoft.Authorization/roleAssignments",
"apiVersion": "2018-09-01-preview",
"dependsOn": [
"[variables('hubIdentityName')]",
"[variables('hubName')]"
],
"name": "[parameters('roleNameGuid')]",
"properties": {
"roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', 'b24988ac-6180-42a0-ab88-20f7382dd24c')]",
"principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('hubIdentityName')), '2018-11-30').principalId]",
"principleType": "ServicePrincipal"
}
},
{ {
"condition": "[empty(parameters('hubName'))]", "condition": "[empty(parameters('hubName'))]",
"type": "Microsoft.Devices/IotHubs", "type": "Microsoft.Devices/IotHubs",
@ -47,16 +79,38 @@
"apiVersion": "[variables('hubApiVersion')]", "apiVersion": "[variables('hubApiVersion')]",
"name": "[variables('hubName')]", "name": "[variables('hubName')]",
"location": "[resourceGroup().location]", "location": "[resourceGroup().location]",
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('hubIdentityName'))]": {}
}
},
"sku": { "sku": {
"name": "S1", "name": "S1",
"capacity": 1 "capacity": 1
}, },
"properties": { "properties": {
"ipFilterRules": [],
"eventHubEndpoints": {
"events": {
"retentionTimeInDays": 7,
"partitionCount": 4
}
}
}, },
"tags": "[parameters('resourceTags')]"
"tags": "[parameters('resourceTags')]",
"dependsOn": [
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('hubIdentityName'))]"
]
} }
], ],
"outputs": { "outputs": {
"hubIdentityName": {
"type": "string",
"value": "[variables('hubIdentityName')]"
},
"hubName": { "hubName": {
"type": "string", "type": "string",
"value": "[variables('hubName')]" "value": "[variables('hubName')]"
@ -66,4 +120,4 @@
"value": "[concat('HostName=', reference(variables('hubResourceId'), variables('hubApiVersion')).hostName, ';SharedAccessKeyName=iothubowner;SharedAccessKey=', listKeys(variables('hubResourceId'), variables('hubApiVersion')).value[0].primaryKey)]" "value": "[concat('HostName=', reference(variables('hubResourceId'), variables('hubApiVersion')).hostName, ';SharedAccessKeyName=iothubowner;SharedAccessKey=', listKeys(variables('hubResourceId'), variables('hubApiVersion')).value[0].primaryKey)]"
} }
} }
} }

View file

@ -287,7 +287,8 @@
"comments": "Deploys the core resources for Video Analyzer", "comments": "Deploys the core resources for Video Analyzer",
"resourceGroup": "[parameters('resourceGroup')]", "resourceGroup": "[parameters('resourceGroup')]",
"dependsOn": [ "dependsOn": [
"[resourceId('Microsoft.Resources/resourceGroups',parameters('resourceGroup'))]" "[resourceId('Microsoft.Resources/resourceGroups',parameters('resourceGroup'))]",
"deploy-iot-resources"
], ],
"properties": { "properties": {
"templateLink": { "templateLink": {
@ -297,7 +298,14 @@
"parameters": { "parameters": {
"namePrefix": { "namePrefix": {
"value": "[parameters('namePrefix')]" "value": "[parameters('namePrefix')]"
},
"iotHubManagedIdendity": {
"value": "[reference('deploy-iot-resources').outputs.hubIdentityName.value]"
},
"hubName":{
"Value": "[reference('deploy-iot-resources').outputs.hubName.value]"
} }
} }
} }
}, },
@ -572,4 +580,4 @@
} }
} }
} }
} }

View file

@ -28,6 +28,18 @@
"baseTime": { "baseTime": {
"type": "string", "type": "string",
"defaultValue": "[utcNow('u')]" "defaultValue": "[utcNow('u')]"
},
"iotHubManagedIdendity": {
"type": "string",
"metadata": {
"description": "This is the value for the IoT Hub user assignted managed identity."
}
},
"hubName": {
"type": "string",
"metadata": {
"description": "This is the value for the IoT Hub name."
}
} }
}, },
"variables": { "variables": {
@ -35,7 +47,7 @@
"accountName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]", "accountName": "[concat(parameters('namePrefix'),uniqueString(resourceGroup().id))]",
"edgeModuleName": "[parameters('edgeModuleName')]", "edgeModuleName": "[parameters('edgeModuleName')]",
"tokenExpiration": { "tokenExpiration": {
"expirationDate": "[dateTimeAdd(parameters('baseTime'), 'P7D', 'yyyy-MM-ddTHH:mm:ss+00:00')]" "expirationDate": "[dateTimeAdd(parameters('baseTime'), 'P7D', 'yyyy-MM-ddTHH:mm:ss+00:00')]"
}, },
"managedIdentityName": "[concat(parameters('namePrefix'),'-',resourceGroup().name,'-storage-access-identity')]" "managedIdentityName": "[concat(parameters('namePrefix'),'-',resourceGroup().name,'-storage-access-identity')]"
}, },
@ -152,7 +164,7 @@
{ {
"type": "Microsoft.Media/videoAnalyzers", "type": "Microsoft.Media/videoAnalyzers",
"comments": "The Azure Video Analyzer account", "comments": "The Azure Video Analyzer account",
"apiVersion": "2021-05-01-preview", "apiVersion": "2021-11-01-preview",
"name": "[variables('accountName')]", "name": "[variables('accountName')]",
"location": "[resourceGroup().location]", "location": "[resourceGroup().location]",
"dependsOn": [ "dependsOn": [
@ -166,13 +178,22 @@
"userAssignedIdentity": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))]" "userAssignedIdentity": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))]"
} }
} }
],
"iotHubs": [
{
"id": "[resourceId('Microsoft.Devices/IotHubs', parameters('hubName'))]",
"identity": {
"userAssignedIdentity": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',parameters('iotHubManagedIdendity'))]"
}
}
] ]
}, },
"identity": { "identity": {
"type": "UserAssigned", "type": "UserAssigned",
"userAssignedIdentities": { "userAssignedIdentities": {
"[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))]": {} "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))]": {},
} "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',parameters('iotHubManagedIdendity'))]": {}
}
}, },
"tags": "[parameters('resourceTags')]" "tags": "[parameters('resourceTags')]"
}, },
@ -204,4 +225,4 @@
"value": "[listProvisioningToken(resourceId('Microsoft.Media/videoAnalyzers/edgeModules', variables('accountName'), variables('edgeModuleName')),'2021-05-01-preview',variables('tokenExpiration')).token]" "value": "[listProvisioningToken(resourceId('Microsoft.Media/videoAnalyzers/edgeModules', variables('accountName'), variables('edgeModuleName')),'2021-05-01-preview',variables('tokenExpiration')).token]"
} }
} }
} }

View file

@ -1,232 +0,0 @@
![ava_widgets_banner_github.png](https://user-images.githubusercontent.com/51399662/119260323-fc97bf00-bbda-11eb-82d0-c31fa64b8e38.png)
# Azure Video Analyzer widgets
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![TypeScript](https://img.shields.io/badge/%3C%2F%3E-TypeScript-%230074c1.svg)](https://www.typescriptlang.org/)
[![code style: prettier](https://img.shields.io/badge/code_style-prettier-f8bc45.svg)](https://github.com/prettier/prettier)
This repo contains the Azure Video Analyzer widgets and web component packages. Below you can find documentation and examples on how to use these pieces.
## Introduction
[Video Analyzer](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/overview) provides a platform to build intelligent video applications that span the edge and the cloud. It offers the capability to capture, record, and analyze live video along with publishing the results - video and/or video analytics.
The material in this repository is designed to help in building applications on the Video Analyzer platform. Below you'll find sections on:
- Installing the Video Analyzer widget library
- Player widget
We also have a how-to on our documentation site about [using the Video Analyzer player widget](https://docs.microsoft.com/azure/azure-video-analyzer/video-analyzer-docs/player-widget).
## Installing Video Analyzer library
The widgets are distributed as an NPM package. There are a couple of ways to install the library.
- **Command line** - For consuming the NPM package directly, you can install it using the npm command.
```
npm install @azure/video-analyzer-widgets
```
- **Javascript HTML object** - You can import the latest version of the widget directly into your HTML file by using this script segment.
```html
...
<!-- Add Video Analyzer player web component -->
<script async type="module" src="https://unpkg.com/@azure/video-analyzer-widgets"></script>
</body>
</html>
```
- **Window object** - If you want to expose the widget code on the window, you can import the latest version by using this script segment.
```html
...
<!-- Add Video Analyzer player web component -->
<script async src="https://unpkg.com/@azure/video-analyzer-widgets@latest/dist/global.min.js"></script>
</body>
</html>
```
## Player widget
The player widget can be used to play back video that has been stored in the Video Analyzer service. The section below details:
- How to add a player widget to your application
- Properties, events, and methods
- Samples showing use of the widget
### Creating a player widget
The player widget is a web component that can be created in your base HTML code or dynamically at run time.
- Creating using HTML
```html
<body>
<ava-player width="920px"></ava-player>
</body>
```
- Creating dynamically using window
```typescript
const avaPlayer = new window.ava.widgets.player();
document.firstElementChild.appendChild(avaPlayer);
```
- Creating dynamically with Typescript
```typescript
import { Player } from '@azure/video-analyzer-widgets';
const avaPlayer = new Player();
document.firstElementChild.appendChild(avaPlayer);
```
### Properties, events and methods
The player has a series of properties, as defined in the table below. Configuration is required to get the player to run initially.
| Name | Type | Default | Description |
| ------ | ---------------- | ------- | ---------------------------------- |
| width | string | 100% | Reflects the value of widget width |
| config | IAvaPlayerConfig | null | Widget configuration |
There are also a couple of events that fire under various conditions. None of the events have parameters associated with them. It is important to deal with the TOKEN_EXPIRED event so that playback doesn't stop.
| Name | Parameters | Description |
| ------------- | ---------- | ------------------------------------------------- |
| TOKEN_EXPIRED | - | Callback to invoke when the AVA JWT token has expired. |
| PLAYER_ERROR | - | Callback to invoke when there is an error. |
The player has a few methods you can use in your code. These can be useful for building your own controls.
| Name | Parameters | Description |
| -------------- | ------------------------------- | -------------------------------------------------------------------------------------------------------- |
| constructor | config: IAvaPlayerConfig = null | Widget constructor. If called with config, you don't need to call the _configure_ function |
| setAccessToken | jwtToken:string | Update the widget token. |
| configure | config: IAvaPlayerConfig | Update widget configuration. |
| load | - | Loads and initializes the widget according to the provided configuration. If not called, the widget will be empty |
| play | - | Play the player |
| pause | - | Pause the player |
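
For example, a minimal sketch of wiring these methods to your own controls might look like the following. The `#my-play-button` and `#my-pause-button` selectors are hypothetical placeholders for elements in your own markup, and the configuration values must be replaced with your own.

```typescript
import { Player } from '@azure/video-analyzer-widgets';

// Create the widget and attach it to the page.
const avaPlayer = new Player();
document.firstElementChild.appendChild(avaPlayer);

// Configure and load the widget before using any playback controls.
avaPlayer.configure({
    token: '<AVA-API-JWT-TOKEN>',
    clientApiEndpointUrl: '<CLIENT-ENDPOINT-URL>',
    videoName: '<VIDEO-NAME-FROM-AVA-ACCOUNT>'
});
avaPlayer.load();

// Hypothetical buttons supplied by your own page, wired to the documented methods.
document.querySelector('#my-play-button')?.addEventListener('click', () => avaPlayer.play());
document.querySelector('#my-pause-button')?.addEventListener('click', () => avaPlayer.pause());
```
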
### Code samples
There are a few code samples below detailing basic usage, how to dynamically create the widget during run time, how to deal with refreshing the token, and use in an Angular application.
#### Basic usage
This code shows how to create the player widget as an HTML tag, then configure the widget and load the data to make it start playing, using JavaScript.
```html
<script>
function onAVALoad() {
// Get player instance
const avaPlayer = document.querySelector('ava-player');
// Configure widget with AVA API configuration
avaPlayer.configure({
token: '<AVA-API-JWT-TOKEN>',
clientApiEndpointUrl: '<CLIENT-ENDPOINT-URL>',
videoName: '<VIDEO-NAME-FROM-AVA-ACCOUNT>'
});
avaPlayer.load();
}
</script>
<script async type="module" src="https://unpkg.com/@azure/video-analyzer-widgets" onload="onAVALoad()"></script>
<body>
<ava-player></ava-player>
</body>
```
#### Dynamically creating the widget
This code shows how to create a widget dynamically with JavaScript code without using the separate configure function. It adds the widget to a premade div container.
```html
<script>
function onAVALoad() {
// Get widget container
const widgetContainer = document.querySelector('#widget-container');
// Create new player widget
const playerWidget = new window.ava.widgets.player({
token: '<AVA-API-JWT-TOKEN>',
clientApiEndpointUrl: '<CLIENT-ENDPOINT-URL>',
videoName: '<VIDEO-NAME-FROM-AVA-ACCOUNT>'
});
widgetContainer.appendChild(playerWidget);
// Load the widget
playerWidget.load();
}
</script>
<script async src="https://unpkg.com/@azure/video-analyzer-widgets@latest/dist/global.min.js" onload="onAVALoad()"></script>
<body>
<div id="widget-container"></div>
</body>
```
#### Token refresh
This section shows how to create a widget with native JS code, configure the widget, and load the data. It adds an event listener so that when the token expires, the token is updated. You will, of course, need to provide your own code to generate a new token based on the method you used.
```html
<script>
function onAVALoad() {
// Get player instance
const avaPlayer = document.querySelector("ava-player");
// Adding token expired listener
avaPlayer.addEventListener('TOKEN_EXPIRED', async () => {
const token = await fetch(<request-to-generate-token>);
avaPlayer.setAccessToken(token);
});
// Configure widget with AVA API configuration
avaPlayer.configure({
token: '<AVA-API-JWT-TOKEN>',
clientApiEndpointUrl: '<CLIENT-ENDPOINT-URL>',
videoName: '<VIDEO-NAME-FROM-AVA-ACCOUNT>'
});
// Load the widget
avaPlayer.load();
}
</script>
<script async type="module" src="https://unpkg.com/@azure/video-analyzer-widgets" onload="onAVALoad()"></script>
<body>
<ava-player width="920px"></ava-player>
</body>
```
#### Player widget in an Angular application
To use the player widget in an Angular application, you'll need to follow the steps below.
1. Go to your _src/main.ts_ file and add the following code:
```typescript
import { Player } from '@azure/video-analyzer-widgets';
/*
 * Ensure that tree-shaking doesn't remove this component from the bundle.
 * There are multiple ways to prevent tree shaking, of which this is one.
*/
Player;
```
1. To allow an NgModule to contain non-Angular element names, add the following code to your application module TypeScript file, _app.module.ts_:
```typescript
import { CUSTOM_ELEMENTS_SCHEMA, NgModule } from '@angular/core';

@NgModule({
  // ...existing module metadata (declarations, imports, bootstrap)...
  schemas: [CUSTOM_ELEMENTS_SCHEMA]
})
export class AppModule {}
```
1. Now you can start using the widget. Replace the HTML template in your _app.component.html_ file with the following markup:
```html
<template>
<ava-player width="920px"></ava-player>
</template>
```
Alternatively, you can create a new instance of the widget using TypeScript and add it to the DOM, as sketched below.
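
A minimal sketch of that approach is shown here; the component name, selector, and container reference are illustrative placeholders rather than code from this repository, and the configuration values must be replaced with your own.

```typescript
import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';
import { Player } from '@azure/video-analyzer-widgets';

@Component({
  selector: 'app-ava-player',
  template: '<div #playerContainer></div>'
})
export class AvaPlayerComponent implements AfterViewInit {
  @ViewChild('playerContainer') playerContainer!: ElementRef<HTMLDivElement>;

  ngAfterViewInit(): void {
    // Create the widget with its configuration, attach it to the DOM, and load it.
    const avaPlayer = new Player({
      token: '<AVA-API-JWT-TOKEN>',
      clientApiEndpointUrl: '<CLIENT-ENDPOINT-URL>',
      videoName: '<VIDEO-NAME-FROM-AVA-ACCOUNT>'
    });
    this.playerContainer.nativeElement.appendChild(avaPlayer);
    avaPlayer.load();
  }
}
```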