Commit 1a11d6c2d3 by Mahmoud Saleh, 2018-01-19 17:05:16 -08:00
Parents: bc8fcb0c88, b5da846fbf
22 changed files with 803 additions and 28 deletions


@ -63,26 +63,30 @@ For additional Windows samples, see [Windows on GitHub](http://microsoft.github.
<tr>
<td><a href="Samples/HelloBlinkyBackground">HelloBlinkyBackground</a></td>
<td><a href="Samples/NFCForIoT">NFCForIoT</a></td>
<td><a href="Samples/PotentiometerSensor">PotentiometerSensor</a></td>
<td><a href="Samples/PotentiometerSensor">Potentiometer Sensor</a></td>
</tr>
<tr>
<td><a href="Samples/PushButton">PushButton</a></td>
<td><a href="Samples/RGBLED">RGBLED</a></td>
<td><a href="Samples/PushButton">Push Button</a></td>
<td><a href="Samples/RGBLED">RGB LED</a></td>
<td><a href="Samples/Accelerometer">Accelerometer</a></td>
</tr>
<tr>
<td><a href="Samples/SPIDisplay">SPIDisplay</a></td>
<td><a href="Samples/SPIDisplay">SPI Display</a></td>
<td><a href="Samples/TempForceSensor">TempForceSensor</a></td>
<td><a href="Samples/VideoCaptureSample">VideoCaptureSample</a></td>
</tr>
<tr>
<td><a href="Samples/I2CCompass">I2CCompass</a></td>
<td><a href="Samples/I2CCompass">I2C Compass</a></td>
<td><a href="Samples/ContainerWebSocket">ContainerWebSocket</a></td>
<td><a href="Samples/GpioOneWire">GpioOneWire</a></td>
</tr>
<tr>
<td><a href="Samples/I2cPortExpander">I2C Port Expander</a></td>
<td><a href="Samples/IoTBlockly">IoT Blockly</a></td>
</tr>
</table>
### Samples that demonstrate Universal Windows Application Features
### Samples that demonstrate Universal Windows Application features
<table>
<tr>
@ -96,27 +100,50 @@ For additional Windows samples, see [Windows on GitHub](http://microsoft.github.
<td><a href="Samples/HelloWorld">HelloWorld</a></td>
</tr>
<tr>
<td><a href="Samples/IotBrowser">IotBrowser</a></td>
<td><a href="Samples/IoTCoreDefaultApp">IoTCoreDefaultApp</a></td>
<td><a href="Samples/IoTCoreMediaPlayer">IoTCoreMediaPlayer</a></td>
<td><a href="Samples/IotBrowser">IoT Browser</a></td>
<td><a href="Samples/IoTCoreDefaultApp">IoTCore DefaultApp</a></td>
<td><a href="Samples/IoTCoreMediaPlayer">IoTCore MediaPlayer</a></td>
</tr>
<tr>
<td><a href="Samples/IotOnboarding">IotOnboarding</a></td>
<td></td>
<td></td>
<tr>
<td><a href="Samples/IotOnboarding">IoT Onboarding</a></td>
<td><a href="Samples/CognitiveServicesExample">Cognitive Services</a></td>
<td><a href="Samples/CompanionApp">Companion App</a></td>
</tr>
<tr>
<td><a href="Samples/IoTHomeAppSample">IoT Home App</a></td>
<td><a href="Samples/OpenCVExample">OpenCV Example</a></td>
<td><a href="Samples/SerialUART">Serial UART</a></td>
</tr>
<tr>
<td><a href="Samples/WebcamApp">Webcam App</a></td>
<td><a href="Samples/WiFiConnector">WiFi Connector</a></td>
</tr>
</table>
### Samples that utilize Microsoft Azure features
<table>
<tr>
<tr>
<td><a href="Samples/IoTConnector">IoTConnector</a></td>
<td><a href="Samples/SpeechTranslator">SpeechTranslator</a></td>
<td><a href="Samples/WeatherStation">WeatherStation</a></td>
</tr>
<tr>
<td><a href="Samples/Azure/HelloCloud">HelloCloud</a></td>
<td><a href="Samples/Azure/HelloCloud.Headless">HelloCloud.Headless</a></td>
<td><a href="Samples/Azure/ReadDeviceToCloudMessages">ReadDeviceToCloudMessages</a></td>
</tr>
<tr>
<td><a href="Samples/Azure/TpmDeviceTest">TpmDeviceTest</a></td>
<td><a href="Samples/Azure/WeatherStation">WeatherStation</a></td>
<td><a href="Samples/Azure/WeatherStation.PowerBI">WeatherStation.PowerBI</a></td>
</tr>
<tr>
<td><a href="Samples/Azure/IoTHubClients">IoT Hub Clients</a></td>
</tr>
</table>
### Samples that involve device drivers, services, or realtime processing
<table>
@ -126,8 +153,13 @@ For additional Windows samples, see [Windows on GitHub](http://microsoft.github.
<td><a href="Samples/SerialUART">SerialUART</a></td>
</tr>
<tr>
<td><a href="Samples/ShiftRegister">ShiftRegister</a></td>
<td><a href="Samples/MemoryStatus">MemoryStatus</a></td>
<td></td>
<td><a href="Samples/ShiftRegister">Shift Register</a></td>
<td><a href="Samples/MemoryStatus">Memory Status</a></td>
<td><a href="Samples/ContainerWebSocketCS">Container Web Socket</a></td>
</tr>
<tr>
<td><a href="Samples/CustomDeviceAccessor">Custom Device Accessor</a></td>
<td><a href="Samples/IoTOnboarding_RFCOMM">IoT Onboarding - Bluetooth (RFCOMM)</a></td>
<td><a href="Samples/VirtualMicrophoneArrayDriver">Virtual Microphone Array Driver</a></td>
</tr>
</table>

Binary files added (not shown):
- Resources/images/CognitiveServicesExample/add_rectangle.png (154 KiB)
- Resources/images/CognitiveServicesExample/add_reference.png (38 KiB)
- Resources/images/CognitiveServicesExample/azure_cogserv_create.png (29 KiB)
- Resources/images/CognitiveServicesExample/cogserv_signup.png (25 KiB)
- (file name not shown in source) (10 KiB)
- Resources/images/CognitiveServicesExample/event_handler1.png (85 KiB)
- Resources/images/CognitiveServicesExample/new_project.png (64 KiB)
- Resources/images/CognitiveServicesExample/remote_connection.png (8.4 KiB)
- Resources/images/CognitiveServicesExample/running_app.png (982 KiB)


@ -0,0 +1 @@

Binary files added (not shown):
- Resources/images/IoTStartApp/cpp-debug-project-properties.png (17 KiB)
- Resources/images/IoTStartApp/cpp-project-properties.png (23 KiB)
- Resources/images/IoTStartApp/cpp-remote-machine-debugging.png (16 KiB)


@ -0,0 +1 @@
test


@ -1,4 +1,79 @@
Windows 10 IoT Core sample code
===============
# Azure IoT Hub Client Sample
# Walk-through: Connecting to Microsoft IoT Central
01/10/2018 5 minutes to read
[Additional documentation for this sample](https://blogs.windows.com/buildingapps/2015/12/09/windows-iot-core-and-azure-iot-hub-putting-the-i-in-iot/)
#### Contributors
- David Campbell
- George Mileka
#### In this article
This article describes how, as a device developer, you can connect a device running Windows 10 IoT Core (such as a Raspberry Pi) to your Microsoft IoT Central application using the C# programming language.
### Before you begin
To complete the steps in this article, you need the following:
- A Microsoft IoT Central application created from the Sample Devkits application template. For more information, see [Create your Microsoft IoT Central Application](https://docs.microsoft.com/en-us/microsoft-iot-central/howto-create-application).
- A device running the Windows 10 IoT Core operating system. For this walkthrough, we will use a Raspberry Pi.
- Visual Studio installed, with the 'Universal Windows Platform development' workload (only needed if you are going to build and deploy the source code).
## Add a Real Device in Microsoft IoT Central
In Microsoft IoT Central,
- Add a real device from the Raspberry Pi device template.
- Make a note of the device connection string. For more information, see [Add a real device to your Microsoft IoT Central application](https://docs.microsoft.com/en-us/microsoft-iot-central/tutorial-add-device).
## Set Up a Physical Device
To set up a physical device, we need:
- A device running Windows IoT Core operating system.
- To do that, follow the steps described [here](https://developer.microsoft.com/en-us/windows/iot/getstarted/prototype/setupdevice).
- A client application that can communicate with Microsoft IoT Central.
- You can either build your own custom application using the Azure SDK and deploy it to your device (using Visual Studio), or
- You can download a pre-built sample and simply deploy and run it on the device.
## Deploy the Pre-built Sample Client Application to the Device
To deploy the client application to your Windows IoT device:
- Ensure the connection string is stored on the device for the client application to use.
- On the desktop, save the connection string in a text file named connection.string.iothub.
- Copy the text file to the device's Documents folder:
- \\<i>[device-IP-address]</i>\C$\Data\Users\DefaultAccount\Documents\connection.string.iothub
- Go to the device web portal (in any browser, type http://<i>[device-IP-address]</i>:8080). This portal lets you manage many aspects of your Windows IoT device; the feature we'll need for this exercise is app installation.
- On the left, expand the Apps node.
- Click "Quick-run samples".
- Click "Azure IoT Hub Client".
- Click "Deploy and run".
<img src="../../../../Resources/images/Azure/IoTHubClients/webb.capture.png">
The application should launch on the device, and will look something like this:
<img src="../../../../Resources/images/Azure/IoTHubClients/IoTHubClientScreenshot.png">
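As an aside, a client app could pick up the connection-string file saved above along these lines. This is a hypothetical sketch, not the pre-built sample's actual code; the helper name and the folder parameter are illustrative:

``` C#
using System;
using System.IO;

// Hypothetical helper: read the connection string saved to the Documents
// folder in the step above. The folder is passed in so the same code can be
// exercised on a desktop as well as on the device.
static string ReadConnectionString(string documentsFolder)
{
    var path = Path.Combine(documentsFolder, "connection.string.iothub");
    return File.ReadAllText(path).Trim(); // trim stray whitespace/newlines
}
```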
In Microsoft IoT Central, you can see how the code running on the Raspberry Pi interacts with the application:
- On the Measurements page for your real device, you can see the telemetry.
- On the Properties page, you can see the value of the reported Die Number property.
- On the Settings page, you can change various settings on the Raspberry Pi such as voltage and fan speed.
## Source Code
You can see the source code for the client application on the Windows IoT samples page [here](https://github.com/Microsoft/Windows-iotcore-samples/tree/develop/Samples/Azure/IoTHubClients).
## Additional resources
* [Windows 10 IoT Core home page](https://developer.microsoft.com/en-us/windows/iot/)
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact <opencode@microsoft.com> with any additional questions or comments.
[Documentation for this sample](https://blogs.windows.com/buildingapps/2015/12/09/windows-iot-core-and-azure-iot-hub-putting-the-i-in-iot/)


@ -18,16 +18,15 @@ This project has adopted the Microsoft Open Source Code of Conduct. For more inf
<tr>
<td><a href="../Azure/HelloCloud">HelloCloud</a></td>
<td><a href="../Azure/HelloCloud.Headless">HelloCloud.Headless</a></td>
<td><a href="../Azure/ReadDeviceToCloudMessages">ReadDeviceToCloudMessages</a></td>
</tr>
<tr>
<td><a href="../Azure/ReadDeviceToCloudMessages">ReadDeviceToCloudMessages</a></td>
<td><a href="../Azure/TpmDeviceTest">TpmDeviceTest</a></td>
<td><a href="../Azure/WeatherStation">WeatherStation</a></td>
</tr>
<tr>
<td><a href="../Azure/WeatherStation.PowerBI">WeatherStation.PowerBI</a></td>
</tr>
<tr>
<td><a href="../Azure/IoTHubClients">IoT Hub Clients</a></td>
</tr>
</table>


@ -0,0 +1,28 @@
# Connecting TPM with the Azure IoT Hub
To connect to the Azure IoT Hub from a provisioned device, use the TpmDevice class from the Microsoft.Devices.Tpm library (available as a NuGet package). Get the device information stored in the desired slot (typically slot 0), then retrieve the name of the IoT Hub, the device ID, and the SAS token (the string containing the HMAC produced from the shared access key), and use them to create the _DeviceClient_:
``` C#
TpmDevice myDevice = new TpmDevice(0); // Use TPM slot 0
string hubUri = myDevice.GetHostName();
string deviceId = myDevice.GetDeviceId();
string sasToken = myDevice.GetSASToken();

var deviceClient = DeviceClient.Create(
    hubUri,
    AuthenticationMethodFactory.CreateAuthenticationWithToken(deviceId, sasToken),
    TransportType.Amqp);

var str = "Hello, Cloud!";
var message = new Message(Encoding.ASCII.GetBytes(str));
await deviceClient.SendEventAsync(message);
```
At this point, you have a connected _deviceClient_ object that you can use to send and receive messages. You can view the full working sample [here](https://github.com/ms-iot/samples/tree/develop/Azure/TpmDeviceTest).
To learn more about building secure apps for Windows IoT Core, you can view the blog post [here](https://blogs.windows.com/buildingapps/2016/07/20/building-secure-apps-for-windows-iot-core/#oqFLXiWIL1iCF8j9.97).


@ -3,6 +3,6 @@ Windows 10 IoT Core sample code
[Documentation for this sample](https://blogs.msdn.microsoft.com/iot/2016/01/26/using-power-bi-to-visualize-sensor-data-from-windows-10-iot-core/)
[Microsoft PowerBI]
(https://powerbi.microsoft.com/en-us/what-is-power-bi)
[Microsoft PowerBI](https://powerbi.microsoft.com/en-us/what-is-power-bi)


@ -0,0 +1,79 @@
# Azure Weather Station
## Introduction
Building a weather station is the rite of passage for beginning IoT enthusiasts. It's easy to put together from inexpensive components and provides immediate gratification. For example, you can blow air on your sensor and watch the temperature graph spike up or down. You can leave the weather station in your home and monitor humidity and temperature remotely. The list goes on.
At the same time, the project presents some unique challenges. How do you send the data from your device to the cloud? How do you visualize it in interesting ways? Finally -- how do you make sure you're looking at your data? In other words, how do you reliably authenticate your device?
## Authenticating a Headless Device
In a typical OAuth 2.0 authorization flow, the user is presented with a browser window where they can enter their credentials. The application then obtains an access token that is used to communicate with the desired cloud service. Alas, this approach is not suitable for headless IoT devices that have no mouse or keyboard attached, or that offer only console I/O.
This problem is solved with the latest version of the Active Directory Authentication Library (still in preview), which introduces a new authentication flow tailored for headless devices. The approach goes like this: when user authentication is required, instead of bringing up a browser window, the app asks the user to use another device to navigate to [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter a specific code. Once the code is provided, the web page leads the user through a normal authentication experience, including consent prompts and multi-factor authentication if necessary. Upon successful authentication, the app receives the required access tokens through a back channel and uses them to access the desired cloud service.
## Hardware Setup
For this example, we will use a humidity and temperature sensor, such as the HTU21D, which is available from a number of vendors (e.g., Sparkfun, Amazon).
The sensor connects to your device via I²C bus, as shown on the following wiring diagram (the location of the pins might be slightly different if you're using a device other than Raspberry Pi 2):
![Humidity and temperature sensor](https://msdnshared.blob.core.windows.net/media/2016/01/humidity-htu21d_bb.png)
## Software Setup
The software setup will require several steps. First, we'll need to register an application in the Azure Active Directory. Then, we'll copy the client ID from the application and use it in our UWP app running on the device. Finally, we'll create a dashboard in Power BI to visualize data coming from our device.
### Registering an Application in Azure
This step assumes that your organization already has an Azure Active Directory set up. If not, you can get started here.
You can register your application from the Azure Portal, however it's easier to do it from the dedicated Power BI application registration page [here](https://dev.powerbi.com/apps).
Navigate to the above page, log in with your Power BI credentials and fill out the information about your app:
![Power Bi app information](https://msdnshared.blob.core.windows.net/media/2016/01/Register_app_step2.png)
Note that I used a dummy URL for the redirect -- our app will not need this information but we cannot leave this field empty.
In the next step, enable the "Read All Datasets" and "Read and Write All Datasets" APIs and click "Register App". The web page will generate the Client ID, which looks like this:
![Register your app](https://msdnshared.blob.core.windows.net/media/2016/01/Register_app_step4.png)
Leave the browser page open -- we will need to copy the Client ID and paste it into our C# app.
## Build the App
The C# UWP app that we're going to use combines the Power BI REST APIs with the headless device flow from the Azure Active Directory Library described [here](https://github.com/Azure-Samples/active-directory-dotnet-deviceprofile/).
The full source of the app is available on our GitHub repository [here](https://github.com/ms-iot/samples/tree/develop/Azure/WeatherStation.PowerBI).
To run your application on your device, you need to get the source code and find the clientID constant in PBIClient.cs:
```
// WeatherPBIApp: replace with the actual Client ID from the Azure Application:
private const string clientID = "<replace>";
```
Replace the value with the string obtained from the registered Azure Application at the previous step and compile the app for ARM architecture. For this app, you will need to connect your Raspberry Pi to a monitor (keyboard and mouse are not required) to display the user code from the device. While this means that the device is no longer completely headless, you might imagine a slightly more advanced version of the app where the user code is communicated via an SMS message, an HTTP POST request, or is displayed on an [LED matrix](https://www.raspberrypi.org/products/sense-hat/).
Deploy the app to your Raspberry Pi. If all goes well, you should see the following:
![Running Raspberry Pi app](https://msdnshared.blob.core.windows.net/media/2016/01/App_running.png)
Now switch to another device -- either a desktop PC or a phone and navigate to the specified URL. Type in the specified user code and press Continue:
![Signing into application](https://msdnshared.blob.core.windows.net/media/2016/01/signin1.png)
Once the user code is accepted, the application will receive the access token and start sending data to Power BI.
### Configure Power BI Dashboard
Log in to your [Power BI account](https://powerbi.microsoft.com/en-us/) and look for the "WeatherReport" dataset in the navigation bar on the left. In the Visualizations pane, create a "Temperature by Time" line chart:
![Temperature by Time line chart](https://msdnshared.blob.core.windows.net/media/2016/01/TemperaturexTime.png)
You can then create a "Humidity by Time" chart in a similar way. Alternatively, you can plot both temperature and humidity on the same axis.
Now save your report and pin it to the dashboard. You should see something that looks like this:
![Temperature and humidity charts](https://msdnshared.blob.core.windows.net/media/2016/01/dashboard.png)
That's it! Enjoy your weather station!
Want to learn more about using Power BI to visualize sensor data from IoT Core? Read the blog post [here](https://blogs.msdn.microsoft.com/iot/2016/01/26/using-power-bi-to-visualize-sensor-data-from-windows-10-iot-core/).


@ -1,4 +1,447 @@
Windows 10 IoT Core sample code
===============
# Cognitive Services
[Documentation for this sample](https://developer.microsoft.com/en-us/windows/iot/samples/CognitiveServices)
Create a UWP app that identifies faces in a photo and determines the emotions of those faces using Microsoft's Cognitive Services APIs.
## Create a new UWP App
___
All of the sample code is available to download, but as an exercise, this tutorial will take you through the complete steps to create this app from scratch.
Make sure your device is running and set up and you have Visual Studio installed. See our [get started page](https://developer.microsoft.com/en-us/windows/iot/GetStarted.htm) to set up your device.
You will need your device's IP address when connecting to it remotely.
1. Start Visual Studio 2017
2. Create a new project with **(File \| New Project...)**
In the **New Project** dialog, navigate to **Universal** as shown below (in the left pane in the dialog: Templates \| Visual C# \| Windows Universal).
3. Select the template **Blank App (Universal Windows)**
Note that we call the app CognitiveServicesExample. You can name it something different, but you will have to adjust sample code that references CognitiveServicesExample as well.
<img src="../../../Resources/images/CognitiveServicesExample/new_project.png">
If this is the first project you create, Visual Studio will likely prompt you to enable [developer mode for Windows 10](https://msdn.microsoft.com/library/windows/apps/xaml/dn706236.aspx)
<img src="../../../Resources/images/CognitiveServicesExample/add_reference.png">
## Add a reference to the Windows IoT extension SDK
___
Since the IoT extension SDK is not added to projects by default, we'll need to add a reference so that namespaces like **Windows.Devices.Gpio** will be available in the project. To do so, right-click on the References entry under the project, select "Add Reference" then navigate the resulting dialog to **Universal Windows->Extensions->Windows IoT Extensions for the UWP**. Check the box and click OK.
## Add the NuGet Packages
___
1. Open the NuGet Package Manager
In Solution Explorer, right click your project and then click "Manage NuGet Packages".
2. Install the Packages
In the NuGet Package Manager window, select nuget.org as your Package Source and search for **Newtonsoft.Json**, **Microsoft.ProjectOxford.Common**, and **Microsoft.ProjectOxford.Emotion**. Install all three packages. When using a Cognitive Services API, you need to add the corresponding NuGet package.
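If you prefer the Package Manager Console, the equivalent commands look like this (same package names as above; versions omitted):

```
PM> Install-Package Newtonsoft.Json
PM> Install-Package Microsoft.ProjectOxford.Common
PM> Install-Package Microsoft.ProjectOxford.Emotion
```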
## Set up the User Interface
___
### Add in the XAML
Open MainPage.xaml and replace the existing code with the following code to create the window UI:
```
<Page
x:Class="CognitiveServicesExample.MainPage"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:local="using:CognitiveServicesExample"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d">
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}" BorderThickness="5" Margin="50" >
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
<StackPanel Grid.Column="0" VerticalAlignment="Center">
<Border BorderBrush="Black" BorderThickness="1" Margin="20" Width="600" Height="600">
<Canvas x:Name="ImageCanvas" Width="598" Height="598"/>
<!-- <Image x:Name="SampleImage" Stretch="Uniform" Width="600" Height="550" Margin="10"/> -->
</Border>
<TextBlock Grid.Row="1" x:Name="detectionStatus" Width="600" HorizontalAlignment="Center" Margin="10"/>
<StackPanel Orientation="Horizontal" HorizontalAlignment="Center" Width="600" Margin="10">
<TextBox x:Name="ImageURL" Width="440" HorizontalAlignment="Left" Margin="0,0,10,0" Text="http://blogs.cdc.gov/genomics/files/2015/11/ThinkstockPhotos-177826416.jpg"/>
<Button Content="Detect Emotions" Width="140" HorizontalAlignment="Left" Click="button_Clicked"/>
</StackPanel>
</StackPanel>
<Grid Grid.Column="1" VerticalAlignment="Center">
<ListBox x:Name="ResultBox" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"/>
</Grid>
</Grid>
</Page>
```
To view the entire UI, change the dropdown in the top left corner from '5\" Phone' to '12\" Tablet'.
### Generate the button event handler
In the UI mockup, double-click the "Detect Emotions" button. You will see Click="button_Clicked" added to the button in your XAML code. You will also be redirected to the .xaml.cs file, with a new function called button_Clicked() created for you. This function will handle the Cognitive Services calls after a user presses the button.
<img src="../../../Resources/images/CognitiveServicesExample/event_handler1.png">
## Get the Emotion API Key
___
1. Free Trial
Visit the [Azure Cognitive Services Page](https://azure.microsoft.com/en-us/try/cognitive-services/?api=computer-vision) and click on "Get API Key" next to the Emotion API label; use your Microsoft account to sign in.
You should now see two API keys available for use for 30 days.
<img src="../../../Resources/images/CognitiveServicesExample/cogserv_signup.png">
2. Azure Subscription
If you already used the Emotion API's free trial, you can still use the APIs for free with an Azure account. [Sign up for one](https://portal.azure.com/), then head to the [Azure Portal](https://portal.azure.com/) and create a new Cognitive Services Resource with the fields as shown below.
After it deploys, click on the **"Show access keys..."** link under the "Essentials" window to see your access keys.
<img src="../../../Resources/images/CognitiveServicesExample/azure_cogserv_create.png">
## Add the C# Code
___
### Add in the namespaces
Open MainPage.xaml.cs. At the top of the file, directly under the "using" statements and before the "namespace CognitiveServicesExample" line, add the following Cognitive Services namespaces.
``` C#
using Windows.Graphics.Imaging;
using Microsoft.ProjectOxford.Emotion;
using Microsoft.ProjectOxford.Emotion.Contract;
using System.Threading.Tasks;
using System.Diagnostics;
using Windows.UI.Xaml.Media.Imaging;
using Windows.UI.Xaml.Shapes;
using Windows.UI;
using Windows.UI.Popups;
using Windows.Storage.Streams;
```
These allow us to use the Cognitive Services APIs in our code, along with some other necessary imaging libraries.
### Add in Global Variables
Add the following global variables to the MainPage class (as below)
```C#
public sealed partial class MainPage : Page
{
// add these in after the above statement
private string _subscriptionKey = "your_key_here";
BitmapImage bitMapImage;
//...
}
```
The subscriptionKey allows your application to call the Emotion API on Cognitive Services, and the BitmapImage stores the image that your application will upload.
### Add in the API-calling method
Add the following method to the same class:
``` C#
public sealed partial class MainPage : Page
{
//...
private async Task<Emotion[]> UploadAndDetectEmotions(string url)
{
Debug.WriteLine("EmotionServiceClient is created");
//
// Create Project Oxford Emotion API Service client
//
EmotionServiceClient emotionServiceClient = new EmotionServiceClient(_subscriptionKey);
Debug.WriteLine("Calling EmotionServiceClient.RecognizeAsync()...");
try
{
//
// Detect the emotions in the URL
//
Emotion[] emotionResult = await emotionServiceClient.RecognizeAsync(url);
return emotionResult;
}
catch (Exception exception)
{
Debug.WriteLine("Detection failed. Please make sure that you have the right subscription key and proper URL to detect.");
Debug.WriteLine(exception.ToString());
return null;
}
}
//...
}
```
This function instantiates an instance of the Emotion API and attempts to open the URL passed as an argument (an image URL), scanning it for faces. It searches the faces it finds for emotions and returns the resulting Emotion objects. These contain detailed results, including the likelihood of each emotion and the bounding box of the face. See the [documentation](https://www.microsoft.com/cognitive-services/en-us/emotion-api) for more details.
### Add in the button event handler code
Add the **async** keyword to the button_Clicked method Visual Studio created for you. Then, add the following code to that function:
``` C#
public sealed partial class MainPage : Page
{
//...
private async void button_Clicked(object sender, RoutedEventArgs e)
{
ImageCanvas.Children.Clear();
string urlString = ImageURL.Text;
Uri uri;
try
{
uri = new Uri(urlString, UriKind.Absolute);
}
catch (UriFormatException ex)
{
Debug.WriteLine(ex.Message);
var dialog = new MessageDialog("URL is not correct");
await dialog.ShowAsync();
return;
}
//Load image from URL
bitMapImage = new BitmapImage();
bitMapImage.UriSource = uri;
ImageBrush imageBrush = new ImageBrush();
imageBrush.ImageSource = bitMapImage;
//Load image to UI
ImageCanvas.Background = imageBrush;
detectionStatus.Text = "Detecting...";
//urlString = "http://blogs.cdc.gov/genomics/files/2015/11/ThinkstockPhotos-177826416.jpg"
Emotion[] emotionResult = await UploadAndDetectEmotions(urlString);
detectionStatus.Text = "Detection Done";
displayParsedResults(emotionResult);
displayAllResults(emotionResult);
DrawFaceRectangle(emotionResult, bitMapImage, urlString);
}
//...
}
```
This code reads the string from the text input box on the form and makes sure it's a URL. It retrieves the image from that URL, pastes it in the canvas, and gets the detected emotions from the image using the UploadAndDetectEmotions method defined previously. It then calls a few helper functions to output the results of the Cognitive Services analysis.
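As a design note, the try/catch around the Uri constructor can be replaced with Uri.TryCreate, which returns false instead of throwing. A minimal sketch; the helper name and the http/https restriction are illustrative additions, not part of the sample:

``` C#
using System;

// Validate user input as an absolute http(s) URL without using exceptions
// for control flow.
static bool TryGetImageUri(string text, out Uri uri)
{
    return Uri.TryCreate(text, UriKind.Absolute, out uri)
        && (uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps);
}
```

If it returns false, show the same "URL is not correct" dialog and return, as the handler above does.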
### Add in the helper functions
You'll notice that the above code has errors, since we have not added those helper functions yet. Let's add them in:
``` C#
public sealed partial class MainPage : Page
{
//...
private void displayAllResults(Emotion[] resultList)
{
int index = 0;
foreach (Emotion emotion in resultList)
{
ResultBox.Items.Add("\nFace #" + index
+ "\nAnger: " + emotion.Scores.Anger
+ "\nContempt: " + emotion.Scores.Contempt
+ "\nDisgust: " + emotion.Scores.Disgust
+ "\nFear: " + emotion.Scores.Fear
+ "\nHappiness: " + emotion.Scores.Happiness
+ "\nSurprise: " + emotion.Scores.Surprise);
index++;
}
}
private void displayParsedResults(Emotion[] resultList)
{
int index = 0;
string textToDisplay = "";
foreach (Emotion emotion in resultList)
{
string emotionString = parseResults(emotion);
textToDisplay += "Person number " + index.ToString() + " " + emotionString + "\n";
index++;
}
ResultBox.Items.Add(textToDisplay);
}
private string parseResults(Emotion emotion)
{
float topScore = 0.0f;
string topEmotion = "";
string retString = "";
//anger
topScore = emotion.Scores.Anger;
topEmotion = "Anger";
// contempt
if (topScore < emotion.Scores.Contempt)
{
topScore = emotion.Scores.Contempt;
topEmotion = "Contempt";
}
// disgust
if (topScore < emotion.Scores.Disgust)
{
topScore = emotion.Scores.Disgust;
topEmotion = "Disgust";
}
// fear
if (topScore < emotion.Scores.Fear)
{
topScore = emotion.Scores.Fear;
topEmotion = "Fear";
}
// happiness
if (topScore < emotion.Scores.Happiness)
{
topScore = emotion.Scores.Happiness;
topEmotion = "Happiness";
}
// surprise
if (topScore < emotion.Scores.Surprise)
{
topScore = emotion.Scores.Surprise;
topEmotion = "Surprise";
}
retString = "is expressing " + topEmotion + " with " + topScore.ToString() + " certainty.";
return retString;
}
public async void DrawFaceRectangle(Emotion[] emotionResult, BitmapImage bitMapImage, String urlString)
{
if (emotionResult != null && emotionResult.Length > 0)
{
Windows.Storage.Streams.IRandomAccessStream stream = await Windows.Storage.Streams.RandomAccessStreamReference.CreateFromUri(new Uri(urlString)).OpenReadAsync();
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
double resizeFactorH = ImageCanvas.Height / decoder.PixelHeight;
double resizeFactorW = ImageCanvas.Width / decoder.PixelWidth;
foreach (var emotion in emotionResult)
{
Microsoft.ProjectOxford.Common.Rectangle faceRect = emotion.FaceRectangle;
Image Img = new Image();
BitmapImage BitImg = new BitmapImage();
// open the rectangle image, this will be our face box
Windows.Storage.Streams.IRandomAccessStream box = await Windows.Storage.Streams.RandomAccessStreamReference.CreateFromUri(new Uri("ms-appx:///Assets/rectangle.png", UriKind.Absolute)).OpenReadAsync();
BitImg.SetSource(box);
// rescale each face box based on the API's face rectangle
var maxWidth = faceRect.Width * resizeFactorW;
var maxHeight = faceRect.Height * resizeFactorH;
var origHeight = BitImg.PixelHeight;
var origWidth = BitImg.PixelWidth;
var ratioX = maxWidth / (float)origWidth;
var ratioY = maxHeight / (float)origHeight;
var ratio = Math.Min(ratioX, ratioY);
var newHeight = (int)(origHeight * ratio);
var newWidth = (int)(origWidth * ratio);
BitImg.DecodePixelWidth = newWidth;
BitImg.DecodePixelHeight = newHeight;
// set the starting x and y coordinates for each face box
Thickness margin = Img.Margin;
margin.Left = faceRect.Left * resizeFactorW;
margin.Top = faceRect.Top * resizeFactorH;
Img.Margin = margin;
Img.Source = BitImg;
ImageCanvas.Children.Add(Img);
}
}
}
//...
}
```
The first method outputs the scores for all of the emotions Cognitive Services can detect. Each score falls between 0 and 1 and represents the probability that the detected face is expressing that emotion.
The second and third methods determine which emotion is most prevalent and output the result as a string to a Panel next to the image.
The fourth method places a rectangle around each face detected in the image. Since UWP does not yet allow apps to draw shapes directly, it instead uses a blue rectangle image with a transparent background from the Assets folder. The app places each rectangle image at the starting coordinates of the Rectangle provided by Cognitive Services and scales it to approximately the size of the Cognitive Services rectangle.
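The rescaling step in `DrawFaceRectangle` reduces to a fit-inside computation. A minimal sketch in plain C++ (a hypothetical helper, not part of the sample; the real code derives the sizes from the decoder's pixel dimensions and the canvas size):

```cpp
#include <algorithm>

// Sketch of the aspect-preserving scaling used for the face-box overlay.
struct ScaledSize { int width; int height; };

ScaledSize ScaleToFit(int origWidth, int origHeight, double maxWidth, double maxHeight)
{
    double ratioX = maxWidth / origWidth;      // scale needed to fit the width
    double ratioY = maxHeight / origHeight;    // scale needed to fit the height
    double ratio = std::min(ratioX, ratioY);   // the smaller ratio preserves aspect ratio
    return { static_cast<int>(origWidth * ratio),
             static_cast<int>(origHeight * ratio) };
}
```

For example, a 100x100 rectangle image scaled into a 50x25 face box shrinks to 25x25, mirroring the `Math.Min` logic in the C# above.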
### Add in the rectangle resource
Download the face rectangle image and add it to the Assets folder within your project.
<img src="../../../Resources/images/CognitiveServicesExample/add_rectangle.png">
## Build and Test your app locally
___
1. Make sure the app builds correctly by invoking the **Build \| Build Solution** menu command.
2. Since this is a Universal Windows Platform (UWP) application, you can test the app on your Visual Studio machine as well: press F5, and the app will run on your machine.
Change the URL to point to a different image, or just click "Detect Emotion" to run the Emotion Recognizer with the default image. After a few seconds, the results should appear in the app window: the image with rectangles on it on the left, and more detailed emotion output for each face on the right.
<img src="../../../Resources/images/CognitiveServicesExample/running_app.png">
In this case, the order is based on depth: **faces closer to the front will be first, and faces farther away will be last in the list.**
Close your app after you're done validating it.
## Deploy the app to your Windows 10 IoT Core device
___
1. To deploy the app to your IoT Core device, you need to provide Visual Studio with the device's identifier. The [PowerShell](https://docs.microsoft.com/en-us/windows/iot-core/connect-your-device/powershell) documentation includes instructions for choosing a unique name for your IoT Core device. In this sample, we'll use that name (though you can use the IP address as well) in the 'Remote Machine Debugging' settings in Visual Studio.
If you're building for Minnowboard Max, select **x86** in the Visual Studio toolbar architecture dropdown. If you're building for Raspberry Pi 2 or 3 or the DragonBoard, select **ARM**.
In the Visual Studio toolbar, click on the **Local Machine** dropdown and select **Remote Machine**<br/>
2. At this point, Visual Studio will present the 'Remote Connections' dialog. Enter the IP address or name of your IoT Core device (in this example, we're using 'my-device') and select **Universal (Unencrypted Protocol)** for Authentication Mode. Click **Select**.
<img src="../../../Resources/images/CognitiveServicesExample/remote_connection.png">
> A couple of notes:
>
> 1. You can use the IP address instead of the IoT Core device name.
>
> 2. You can verify and/or modify these values by navigating to the project properties (select 'Properties' in the Solution Explorer) and choosing the 'Debug' tab on the left:
>
> <img src="../../../Resources/images/CognitiveServicesExample/cs-debug-project-properties.png">
3. Now you're ready to deploy to the remote IoT Core device. Press F5 (or select **Debug \| Start Debugging**) to start debugging the app. You should see the app come up on the IoT Core device's screen, and you should be able to perform the same functions you did locally. To stop the app, press the 'Stop Debugging' button (or select **Debug \| Stop Debugging**).
4. Congratulations! Your app should now be working!

# IoT Startup App sample
We'll create a UWP app that demonstrates how to build a simple startup app that lists all of the apps installed on an IoT Core system using the PackageManager API. We'll also demonstrate how to use the API to launch an app.
**Note:** This sample uses the restricted capability '*packageQuery*' in order to call the PackageManager APIs.
## Load the project in Visual Studio
You can find the source code for this sample by downloading a zip of all of our samples [here](https://github.com/ms-iot/samples/archive/develop.zip) and navigating to the `samples-develop\IoTHomeAppSample\IoTStartApp` folder. The sample code is in C++. Make a copy of the folder on your disk and open the project from Visual Studio.
With the application open in Visual Studio, set the architecture in the toolbar dropdown. If you're building for MinnowBoard Max, select x86. If you're building for Raspberry Pi 2 or 3, select ARM.
Next, in the Visual Studio toolbar, click on the Local Machine dropdown and select Remote Machine.
<img src="../../Resources/images/IoTStartApp/cpp-remote-machine-debugging.png">
Next, right-click on your project in the Solution Explorer pane and select Properties.
<img src="../../Resources/images/IoTStartApp/cpp-project-properties.png">
Under Configuration Properties -> Debugging, modify the following fields:
* **Machine Name**: If you previously used PowerShell to set a unique name for your device, you can enter it here (in this example, we're using my-device). Otherwise, use the IP address of your Windows IoT Core device.
* **Authentication Mode**: Set to **Universal (Unencrypted Protocol)**.
<img src="../../Resources/images/IoTStartApp/cpp-debug-project-properties.png">
When everything is set up, you should be able to press F5 from Visual Studio. The IoT Startup App will deploy and start on the Windows IoT device.
## Let's look at the code
### Adding the Restricted Capability
As noted earlier, in order to use the PackageManager APIs from a UWP app, we need to add a restricted capability to the *Package.appxManifest* file. The Visual Studio Manifest Designer doesn't allow adding restricted capabilities, so we have to edit the file manually and insert the following in the *Capabilities* section:
```xml
<Capabilities>
  ...
  <!-- Restricted capability to use PackageManager APIs in a UWP app -->
  <rescap:Capability Name="packageQuery" />
</Capabilities>
```
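The `rescap` prefix must also be declared on the root `Package` element for the capability to validate. A minimal sketch of the relevant attributes (your manifest will contain additional namespaces):

```xml
<Package
  xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
  xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
  IgnorableNamespaces="rescap">
```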
### Enumerating the Apps
The following code shows how to enumerate the installed apps. An empty user security ID (`nullptr` here) enumerates packages for the current user, and `PackageTypes::Main` filters out framework and resource packages:
```c++
public ref class AppListItem sealed
{
public:
property Windows::UI::Xaml::Media::Imaging::BitmapImage^ ImgSrc;
property Platform::String^ Name;
property Platform::String^ PackageFullName;
property Windows::ApplicationModel::Core::AppListEntry^ AppEntry;
};
```
```c++
void MainPage::EnumApplications()
{
m_AppItemList = ref new Vector<AppListItem^>();
auto mgr = ref new PackageManager();
auto packages = mgr->FindPackagesForUserWithPackageTypes(nullptr, PackageTypes::Main);
for (auto& pkg : packages)
{
auto task = create_task(pkg->GetAppListEntriesAsync());
task.then([this, pkg](IVectorView<AppListEntry^>^ entryList)
{
for (auto entry : entryList)
{
try
{
auto displayInfo = entry->DisplayInfo;
auto logo = displayInfo->GetLogo(Size(150.0, 150.0));
auto appItem = ref new AppListItem;
appItem->Name = displayInfo->DisplayName;
appItem->PackageFullName = pkg->Id->FullName;
appItem->AppEntry = entry;
appItem->ImgSrc = ref new BitmapImage();
create_task(logo->OpenReadAsync()).then([this, appItem](IRandomAccessStreamWithContentType^ stream)
{
appItem->ImgSrc->SetSourceAsync(stream);
});
m_AppItemList->Append(appItem);
}
catch (Exception^ e)
{
OutputDebugString(e->Message->Data());
}
catch (...)
{
OutputDebugString(L"Unknown Exception");
//ignore
}
}
});
}
}
```
### Launching the App
The following code launches the selected app from the list's `Tapped` handler:
```c++
void MainPage::StackPanel_Tapped(Object^ sender, TappedRoutedEventArgs^ e)
{
auto appItem = dynamic_cast<AppListItem^>(appList->SelectedItem);
if (appItem)
{
appItem->AppEntry->LaunchAsync();
}
}
```
More information on PackageManager APIs can be found [here](https://docs.microsoft.com/en-us/uwp/api/Windows.Management.Deployment.PackageManager).