
chore(azure-iot-sdk-python): Merge master into digitaltwins-preview (#481)

This commit is contained in:
Zoltan Varga 2020-03-06 12:04:08 -08:00 committed by GitHub
Parent fee0661edc
Commit 7deede7d50
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
238 changed files: 28390 additions and 8331 deletions

README.md

@@ -1,39 +1,104 @@
<div align=center>
<img src="./azure-iot-device/doc/images/azure_iot_sdk_python_banner.png"></img>
<h1> V2 - We are now GA! </h1>
</div>

![Build Status](https://azure-iot-sdks.visualstudio.com/azure-iot-sdks/_apis/build/status/Azure.azure-iot-sdk-python)

This repository contains code for the Azure IoT SDKs for Python. It enables Python developers to easily create IoT device solutions that seamlessly connect to the Azure IoT Hub ecosystem.

*If you're looking for the v1.x.x client library, it is now preserved in the [v1-deprecated](https://github.com/Azure/azure-iot-sdk-python/tree/v1-deprecated) branch.*

## Azure IoT SDK for Python

This repository contains the following libraries:

* [Azure IoT Device library](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/README.md)

* [Azure IoT Hub Service library](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-hub/README.md)

* Coming Soon: Azure IoT Device Provisioning Service Library

## Installing the libraries

Pip installs are provided for all of the SDK libraries in this repo:

[Device libraries](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device#installation)

[IoTHub library](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-hub/README.md#installation)
## Features
:heavy_check_mark: feature available :heavy_multiplication_x: feature planned but not yet supported :heavy_minus_sign: no support planned*
*Features that are not planned may be prioritized in a future release, but are not currently planned
### Device Client Library ([azure-iot-device](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device))
#### IoTHub Device Client
| Features | Status | Description |
|------------------------------------------------------------------------------------------------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Authentication](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-security-deployment) | :heavy_check_mark: | Connect your device to IoT Hub securely with supported authentication, including private key, SASToken, X-509 Self Signed and Certificate Authority (CA) Signed. |
| [Send device-to-cloud message](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-d2c) | :heavy_check_mark: | Send device-to-cloud messages (max 256KB) to IoT Hub with the option to add custom properties. |
| [Receive cloud-to-device messages](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-c2d) | :heavy_check_mark: | Receive cloud-to-device messages and read associated custom and system properties from IoT Hub, with the option to complete/reject/abandon C2D messages. |
| [Device Twins](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins) | :heavy_check_mark: | IoT Hub persists a device twin for each device that you connect to IoT Hub. The device can perform operations like get twin tags, subscribe to desired properties. |
| [Direct Methods](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods) | :heavy_check_mark: | IoT Hub gives you the ability to invoke direct methods on devices from the cloud. The SDK supports handlers for method-specific and generic operations. |
| [Connection Status and Error reporting](https://docs.microsoft.com/en-us/rest/api/iothub/common-error-codes) | :heavy_multiplication_x: | Error reporting for IoT Hub supported error codes. *This SDK supports error reporting on authentication and Device Not Found.* |
| Retry policies | :heavy_check_mark: | Retry policy for unsuccessful device-to-cloud messages. |
| [Upload file to Blob](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-file-upload) | :heavy_check_mark: | A device can initiate a file upload and notify IoT Hub when the upload is complete. |
#### IoTHub Module Client
| Features | Status | Description |
|------------------------------------------------------------------------------------------------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Authentication](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-security-deployment) | :heavy_check_mark: | Connect your device to IoT Hub securely with supported authentication, including private key, SASToken, X-509 Self Signed and Certificate Authority (CA) Signed. |
| [Send device-to-cloud message](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-d2c) | :heavy_check_mark: | Send device-to-cloud messages (max 256KB) to IoT Hub with the option to add custom properties. |
| [Receive cloud-to-device messages](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-c2d) | :heavy_check_mark: | Receive cloud-to-device messages and read associated custom and system properties from IoT Hub, with the option to complete/reject/abandon C2D messages. |
| [Device Twins](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins) | :heavy_check_mark: | IoT Hub persists a device twin for each device that you connect to IoT Hub. The device can perform operations like get twin tags, subscribe to desired properties. |
| [Direct Methods](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods) | :heavy_check_mark: | IoT Hub gives you the ability to invoke direct methods on devices from the cloud. The SDK supports handlers for method-specific and generic operations. |
| [Connection Status and Error reporting](https://docs.microsoft.com/en-us/rest/api/iothub/common-error-codes) | :heavy_multiplication_x: | Error reporting for IoT Hub supported error codes. *This SDK supports error reporting on authentication and Device Not Found.* |
| Retry policies | :heavy_check_mark: | Retry policy for connecting disconnected devices and resubmitting messages. |
| Direct Invocation of Method on Modules | :heavy_check_mark: | Invoke method calls to another module using the Edge Gateway. |
#### Provisioning Device Client
| Features | Status | Description |
|-----------------------------|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| TPM Individual Enrollment | :heavy_minus_sign: | Provisioning via [Trusted Platform Module](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-security#trusted-platform-module-tpm). |
| X.509 Individual Enrollment | :heavy_check_mark: | Provisioning via [X.509 root certificate](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-security#root-certificate). Please review the [samples](./azure-iot-device/samples/async-hub-scenarios/provision_x509_and_send_telemetry.py) folder and this [quickstart](https://docs.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python) on how to create a device client. |
| X.509 Enrollment Group | :heavy_check_mark: | Provisioning via [X.509 leaf certificate](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-security#leaf-certificate). Please review the [samples](./azure-iot-device/samples/async-hub-scenarios/provision_x509_and_send_telemetry.py) folder on how to create a device client. |
| Symmetric Key Enrollment | :heavy_check_mark: | Provisioning via [Symmetric key attestation](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-symmetric-key-attestation). Please review the [samples](./azure-iot-device/samples/async-hub-scenarios/provision_symmetric_key_and_send_telemetry.py) folder on how to create a device client. |
### IoTHub Service Library ([azure-iot-hub](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-hub/azure/iot/hub/iothub_registry_manager.py))
#### Registry Manager
| Features | Status | Description |
|---------------------------------------------------------------------------------------------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------|
| [Identity registry (CRUD)](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry) | :heavy_check_mark: | Use your backend app to perform CRUD operation for individual device or in bulk. |
| [Cloud-to-device messaging](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-c2d) | :heavy_multiplication_x: | Use your backend app to send cloud-to-device messages, and set up cloud-to-device message receivers. |
| [Direct Methods operations](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods) | :heavy_check_mark: | Use your backend app to invoke direct method on device. |
| [Device Twins operations](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins) | :heavy_check_mark: | Use your backend app to perform device twin operations. *Twin reported property update callback and replace twin are in progress. |
| [Query](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-query-language) | :heavy_multiplication_x: | Use your backend app to perform query for information. |
| [Jobs](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-jobs) | :heavy_multiplication_x: | Use your backend app to perform job operation. |
### IoTHub Provisioning Service Library
Feature is Coming Soon
| Features | Status | Description |
|-----------------------------------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| CRUD Operation with TPM Individual Enrollment | :heavy_multiplication_x: | Manage device enrollment using TPM with the service SDK. |
| Bulk CRUD Operation with TPM Individual Enrollment | :heavy_multiplication_x: | Bulk manage device enrollment using TPM with the service SDK. |
| CRUD Operation with X.509 Individual Enrollment | :heavy_multiplication_x: | Manages device enrollment using X.509 individual enrollment with the service SDK. |
| CRUD Operation with X.509 Group Enrollment | :heavy_multiplication_x: | Manages device enrollment using X.509 group enrollment with the service SDK. |
| Query enrollments | :heavy_multiplication_x: | Query registration states with the service SDK. |
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us

@@ -46,3 +111,4 @@ provided by the bot. You will only need to do this once across all repos using o
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

SECURITY.MD

@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.3 BLOCK -->
# Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc751383(v=technet.10)), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->


@@ -1,7 +1,7 @@
[bumpversion]
current_version = 2.1.0
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)
serialize = {major}.{minor}.{patch}

[bumpversion:part]


@@ -1,20 +1,23 @@
# Azure IoT Device SDK

The Azure IoT Device SDK for Python provides functionality for communicating with the Azure IoT Hub for both Devices and Modules.

## Azure IoT Device Features

The SDK provides the following clients:

* ### Provisioning Device Client
    * Creates a device identity on the Azure IoT Hub
* ### IoT Hub Device Client
    * Send telemetry messages to Azure IoT Hub
    * Receive Cloud-to-Device (C2D) messages from the Azure IoT Hub
    * Receive and respond to direct method invocations from the Azure IoT Hub
* ### IoT Hub Module Client
    * Supports Azure IoT Edge Hub and Azure IoT Hub
    * Send telemetry messages to a Hub or to another Module
    * Receive Input messages from a Hub or other Modules
@@ -25,115 +28,29 @@ These clients are available with an asynchronous API, as well as a blocking sync
| Python Version | Asynchronous API | Synchronous API |
| -------------- | ---------------- | --------------- |
| Python 3.5.3+  | **YES**          | **YES**         |
| Python 3.4     | NO               | **YES**         |
| Python 2.7     | NO               | **YES**         |

## Installation

```Shell
pip install azure-iot-device
```

## Set up an IoT Hub and create a Device Identity
1. Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) (or use the [Azure Cloud Shell](https://shell.azure.com/)) and use it to [create an Azure IoT Hub](https://docs.microsoft.com/en-us/cli/azure/iot/hub?view=azure-cli-latest#az-iot-hub-create).
```bash
az iot hub create --resource-group <your resource group> --name <your IoT Hub name>
```
* Note that this operation may take a few minutes.
2. Add the IoT Extension to the Azure CLI, and then [register a device identity](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-create)
```bash
az extension add --name azure-cli-iot-ext
az iot hub device-identity create --hub-name <your IoT Hub name> --device-id <your device id>
```
3. [Retrieve your Device Connection String](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-show-connection-string) using the Azure CLI
```bash
az iot hub device-identity show-connection-string --device-id <your device id> --hub-name <your IoT Hub name>
```
It should be in the format:
```
HostName=<your IoT Hub name>.azure-devices.net;DeviceId=<your device id>;SharedAccessKey=<some value>
```
## Send a simple telemetry message
1. [Begin monitoring for telemetry](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-monitor-events) on your IoT Hub using the Azure CLI
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
2. On your device, set the Device Connection String as an environment variable called `IOTHUB_DEVICE_CONNECTION_STRING`.
### Windows
```cmd
set IOTHUB_DEVICE_CONNECTION_STRING=<your connection string here>
```
* Note that there are **NO** quotation marks around the connection string.
### Linux
```bash
export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>"
```
3. Copy the following code that sends a single message to the IoT Hub into a new python file on your device, and run it from the terminal or IDE (**requires Python 3.7+**):
```python
import asyncio
import os
from azure.iot.device.aio import IoTHubDeviceClient
async def main():
# Fetch the connection string from an environment variable
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
# Create instance of the device client using the connection string
device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)
# Send a single message
print("Sending message...")
await device_client.send_message("This is a message that is being sent")
print("Message successfully sent!")
# finally, disconnect
await device_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())
```
4. Check the Azure CLI output to verify that the message was received by the IoT Hub. You should see the following output:
```bash
Starting event monitor, use ctrl-c to stop...
event:
origin: <your Device name>
payload: This is a message that is being sent
```
5. Your device is now able to connect to Azure IoT Hub!
## Additional Samples
Check out the [samples repository](https://github.com/Azure/azure-iot-sdk-python-preview/tree/master/azure-iot-device/samples) for example code showing how the SDK can be used in a variety of scenarios, including:
* Sending multiple telemetry messages at once.
* Receiving Cloud-to-Device messages.
* Using Edge Modules with the Azure IoT Edge Hub.
* Send and receive updates to device twin
* Receive invocations to direct methods
* Register a device with the Device Provisioning Service
* Legacy scenarios for Python 2.7 and 3.4
## Getting help and finding API docs

Our SDK makes use of docstrings, which means you can find API documentation directly through Python with use of the [help](https://docs.python.org/3/library/functions.html#help) command:

```python
>>> from azure.iot.device import IoTHubDeviceClient
>>> help(IoTHubDeviceClient)
```


@@ -5,6 +5,6 @@ This package provides shared modules for use with various Azure IoT device-side

INTERNAL USAGE ONLY
"""

from .models import X509, ProxyOptions

__all__ = ["X509", "ProxyOptions"]


@@ -7,6 +7,7 @@

import functools
import logging
import traceback
import azure.iot.device.common.asyncio_compat as asyncio_compat

logger = logging.getLogger(__name__)

@@ -69,9 +70,9 @@ class AwaitableCallback(object):
            result = None

        if exception:
            # Do not use the exc_info parameter on logger.error. This causes pytest to save
            # the traceback, which saves stack frames, which shows up as a leak.
            logger.error("Callback completed with error {}".format(exception))
            logger.error(traceback.format_exception_only(type(exception), exception))
            loop.call_soon_threadsafe(self.future.set_exception, exception)
        else:
            logger.debug("Callback completed with result {}".format(result))


@@ -0,0 +1,78 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import weakref
class CallableWeakMethod(object):
"""
Object which makes a weak reference to a method call. Similar to weakref.WeakMethod,
but works on Python 2.7 and returns an object which is callable.

This object is used primarily for callbacks. It prevents circular references in the
garbage collector. It is used specifically in the scenario where object a holds a
reference to object b and b holds a callback into a (which creates a reference
back into a).

By default, method references are _strong_, so we end up with a situation
where a has a _strong_ reference to b and b has a _strong_ reference to a.
The Python 3.4+ garbage collector handles this circular reference just fine, but the
2.7 garbage collector fails when one of the objects has a finalizer method.

```
# example of bad (strong) circular dependency:
class A(object):
    def __init__(self):
        self.b = B()                  # A objects now have a strong reference to B objects
        self.b.handler = self.method  # and B objects have a strong reference back into A objects

    def method(self):
        pass
```

In the example above, if A or B has a finalizer, that object will be considered uncollectable
(on 2.7) and both objects will leak.

However, if we use this object, a will have a _strong_ reference to b, and b will have a _weak_
reference back to a, and the circular dependency chain is broken.

```
# example of better (weak) circular dependency:
class A(object):
    def __init__(self):
        self.b = B()                                         # A objects now have a strong reference to B objects
        self.b.handler = CallableWeakMethod(self, "method")  # and B objects have a WEAK reference back into A objects

    def method(self):
        pass
```

In this example, there is no circular reference, and the Python 2.7 garbage collector is able
to collect both objects, even if one of them has a finalizer.

When we reach the point where all supported interpreters implement PEP 442, we will
no longer need this object.

ref: https://www.python.org/dev/peps/pep-0442/
"""
def __init__(self, object, method_name):
self.object_weakref = weakref.ref(object)
self.method_name = method_name
def _get_method(self):
return getattr(self.object_weakref(), self.method_name)
def __call__(self, *args, **kwargs):
return self._get_method()(*args, **kwargs)
def __eq__(self, other):
return self._get_method() == other
def __repr__(self):
if self.object_weakref():
return "CallableWeakMethod for {}".format(self._get_method())
else:
return "CallableWeakMethod for {} (DEAD)".format(self.method_name)


@@ -0,0 +1,24 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
class ChainableException(Exception):
"""This exception stores a reference to a previous exception which has caused
the current one"""
def __init__(self, message=None, cause=None):
# By using .__cause__, this will allow typical stack trace behavior in Python 3,
# while still being able to operate in Python 2.
self.__cause__ = cause
super(ChainableException, self).__init__(message)
def __str__(self):
if self.__cause__:
return "{} caused by {}".format(
super(ChainableException, self).__repr__(), self.__cause__.__repr__()
)
else:
return super(ChainableException, self).__repr__()
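As a rough illustration of how this base class might be used, here is a short sketch; the subclass and the failing call below are hypothetical, not part of the SDK:

```python
class ConnectionDroppedError(ChainableException):
    pass


def read_from_socket():
    raise OSError("connection reset by peer")


try:
    read_from_socket()
except OSError as e:
    # Wrap the low-level error while keeping a reference to it as the cause.
    wrapped = ConnectionDroppedError(message="transport lost its connection", cause=e)
    # str(wrapped) prints something like:
    #   ConnectionDroppedError('transport lost its connection',) caused by OSError('connection reset by peer',)
    print(str(wrapped))
```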


@@ -1,189 +0,0 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
class OperationCancelledError(Exception):
"""
Operation was cancelled.
"""
pass
class ConnectionFailedError(Exception):
"""
Connection failed to be established
"""
pass
class ConnectionDroppedError(Exception):
"""
Previously established connection was dropped
"""
pass
class ArgumentError(Exception):
"""
Service returned 400
"""
pass
class UnauthorizedError(Exception):
"""
Authorization failed or service returned 401
"""
pass
class QuotaExceededError(Exception):
"""
Service returned 403
"""
pass
class NotFoundError(Exception):
"""
Service returned 404
"""
pass
class DeviceTimeoutError(Exception):
"""
Service returned 408
"""
# TODO: is this a method call error? If so, do we retry?
pass
class DeviceAlreadyExistsError(Exception):
"""
Service returned 409
"""
pass
class InvalidEtagError(Exception):
"""
Service returned 412
"""
pass
class MessageTooLargeError(Exception):
"""
Service returned 413
"""
pass
class ThrottlingError(Exception):
"""
Service returned 429
"""
pass
class InternalServiceError(Exception):
"""
Service returned 500
"""
pass
class BadDeviceResponseError(Exception):
"""
Service returned 502
"""
# TODO: is this a method invoke thing?
pass
class ServiceUnavailableError(Exception):
"""
Service returned 503
"""
pass
class TimeoutError(Exception):
"""
Operation timed out or service returned 504
"""
pass
class FailedStatusCodeError(Exception):
"""
Service returned unknown status code
"""
pass
class ProtocolClientError(Exception):
"""
Error returned from protocol client library
"""
pass
class PipelineError(Exception):
"""
Error returned from transport pipeline
"""
pass
status_code_to_error = {
400: ArgumentError,
401: UnauthorizedError,
403: QuotaExceededError,
404: NotFoundError,
408: DeviceTimeoutError,
409: DeviceAlreadyExistsError,
412: InvalidEtagError,
413: MessageTooLargeError,
429: ThrottlingError,
500: InternalServiceError,
502: BadDeviceResponseError,
503: ServiceUnavailableError,
504: TimeoutError,
}
def error_from_status_code(status_code, message=None):
"""
Return an Error object from a failed status code
:param int status_code: Status code returned from failed operation
:returns: Error object
"""
if status_code in status_code_to_error:
return status_code_to_error[status_code](message)
else:
return FailedStatusCodeError(message)


@@ -6,6 +6,7 @@

import threading
import logging
import six
import traceback

logger = logging.getLogger(__name__)

@@ -31,7 +32,6 @@ class EventedCallback(object):

        def wrapping_callback(*args, **kwargs):
            if "error" in kwargs and kwargs["error"]:
                self.exception = kwargs["error"]
            elif return_arg_name:
                if return_arg_name in kwargs:

@@ -44,10 +44,9 @@ class EventedCallback(object):
            )

            if self.exception:
                # Do not use the exc_info parameter on logger.error. This causes pytest to save
                # the traceback, which saves stack frames, which shows up as a leak.
                logger.error("Callback completed with error {}".format(self.exception))
                logger.error(traceback.format_exc())
            else:
                logger.debug("Callback completed with result {}".format(self.result))


@@ -4,11 +4,12 @@
# license information.
# --------------------------------------------------------------------------

import logging
import traceback

logger = logging.getLogger(__name__)


def handle_background_exception(e):
    """
    Function which handles exceptions that are caught in a background thread. This is
    typically called from the callback thread inside the pipeline. These exceptions

@@ -24,4 +25,32 @@ def handle_background_exception(e):
    # @FUTURE: We should add a mechanism which allows applications to receive these
    # exceptions so they can respond accordingly
    logger.error(msg="Exception caught in background thread. Unable to handle.")
    logger.error(traceback.format_exception_only(type(e), e))
def swallow_unraised_exception(e, log_msg=None, log_lvl="warning"):
"""Swallow and log an exception object.
Convenience function for logging, as exceptions can only be logged correctly from within an
except block.
:param Exception e: Exception object to be swallowed.
:param str log_msg: Optional message to use when logging.
:param str log_lvl: The log level to use for logging. Default "warning".
"""
try:
raise e
except Exception:
if log_lvl == "warning":
logger.warning(log_msg)
logger.warning(traceback.format_exc())
elif log_lvl == "error":
logger.error(log_msg)
logger.error(traceback.format_exc())
elif log_lvl == "info":
logger.info(log_msg)
logger.info(traceback.format_exc())
else:
logger.debug(log_msg)
logger.debug(traceback.format_exc())
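A small hedged sketch of how this helper might be called. The operation object below is hypothetical; it only illustrates passing an exception instance that was delivered by a callback rather than raised in the current context:

```python
def handle_completed_operation(op):
    # 'op.error' holds an Exception instance handed to us by a callback,
    # not one raised in this thread, so it cannot be logged with a bare
    # logger.exception() call.
    if op.error:
        swallow_unraised_exception(
            op.error, log_msg="Operation failed, continuing anyway", log_lvl="info"
        )
```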


@@ -0,0 +1,108 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import uuid
import threading
import json
import ssl
from . import transport_exceptions as exceptions
from .pipeline import pipeline_thread
from six.moves import http_client
logger = logging.getLogger(__name__)
class HTTPTransport(object):
"""
A wrapper class that provides an implementation-agnostic HTTP interface.
"""
def __init__(self, hostname, server_verification_cert=None, x509_cert=None, cipher=None):
"""
Constructor to instantiate an HTTP protocol wrapper.
:param str hostname: Hostname or IP address of the remote host.
:param str server_verification_cert: Certificate which can be used to validate a server-side TLS connection (optional).
:param x509_cert: Certificate which can be used to authenticate connection to a server in lieu of a password (optional).
"""
self._hostname = hostname
self._server_verification_cert = server_verification_cert
self._x509_cert = x509_cert
self._ssl_context = self._create_ssl_context()
def _create_ssl_context(self):
"""
This method creates the SSLContext object used to authenticate the connection. The generated context is used by the http_client and is necessary when authenticating using a self-signed X509 cert or trusted X509 cert
"""
logger.debug("creating a SSL context")
ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLSv1_2)
if self._server_verification_cert:
ssl_context.load_verify_locations(cadata=self._server_verification_cert)
else:
ssl_context.load_default_certs()
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
if self._x509_cert is not None:
logger.debug("configuring SSL context with client-side certificate and key")
ssl_context.load_cert_chain(
self._x509_cert.certificate_file,
self._x509_cert.key_file,
self._x509_cert.pass_phrase,
)
return ssl_context
@pipeline_thread.invoke_on_http_thread_nowait
def request(self, method, path, callback, body="", headers={}, query_params=""):
"""
This method creates a connection to a remote host, sends a request to that host, and then waits for and reads the response from that request.
:param str method: The request method (e.g. "POST")
:param str path: The path for the URL
:param Function callback: The function that gets called when this operation is complete or has failed. The callback function must accept an error and a response dictionary, where the response dictionary contains a status code, a reason, and a response string.
:param str body: The body of the HTTP request to be sent following the headers.
:param dict headers: A dictionary that provides extra HTTP headers to be sent with the request.
:param str query_params: The optional query parameters to be appended at the end of the URL.
"""
# Sends a complete request to the server
logger.info("sending https request.")
try:
logger.debug("creating an https connection")
connection = http_client.HTTPSConnection(self._hostname, context=self._ssl_context)
logger.debug("connecting to host tcp socket")
connection.connect()
logger.debug("connection succeeded")
url = "https://{hostname}/{path}{query_params}".format(
hostname=self._hostname,
path=path,
query_params="?" + query_params if query_params else "",
)
logger.debug("Sending Request to HTTP URL: {}".format(url))
logger.debug("HTTP Headers: {}".format(headers))
logger.debug("HTTP Body: {}".format(body))
connection.request(method, url, body=body, headers=headers)
response = connection.getresponse()
status_code = response.status
reason = response.reason
response_string = response.read()
logger.debug("response received")
logger.debug("closing connection to https host")
connection.close()
logger.debug("connection closed")
logger.info("https request sent, and response received.")
response_obj = {"status_code": status_code, "reason": reason, "resp": response_string}
callback(response=response_obj)
except Exception as e:
logger.error("Error in HTTP Transport: {}".format(e))
callback(
error=exceptions.ProtocolClientError(
message="Unexpected HTTPS failure during connect", cause=e
)
)
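A hedged usage sketch for the transport above; the hostname, path, query string, and header values are illustrative only, and real code drives this class through the pipeline rather than calling it directly:

```python
def on_response(response=None, error=None):
    # The transport calls back with either an error or a response dictionary.
    if error:
        print("request failed: {}".format(error))
    else:
        print(response["status_code"], response["reason"], response["resp"])


transport = HTTPTransport(hostname="example-hub.azure-devices.net")
transport.request(
    "GET",
    "devices/example-device/modules",        # illustrative path
    on_response,
    headers={"Accept": "application/json"},
    query_params="api-version=2019-10-01",   # illustrative API version
)
```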


@@ -4,3 +4,4 @@ This package provides object models for use within the Azure Provisioning Device
"""
from .x509 import X509
from .proxy_options import ProxyOptions


@@ -0,0 +1,53 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""
This module represents proxy options to enable sending traffic through proxy servers.
"""
class ProxyOptions(object):
"""
A class containing various options to send traffic through proxy servers by enabling
proxying of MQTT connection.
"""
def __init__(
self, proxy_type, proxy_addr, proxy_port, proxy_username=None, proxy_password=None
):
"""
Initializer for proxy options.
:param proxy_type: The type of the proxy server. This can be one of three possible choices: socks.HTTP, socks.SOCKS4, or socks.SOCKS5
:param proxy_addr: IP address or DNS name of the proxy server
:param proxy_port: The port of the proxy server. Defaults to 1080 for socks and 8080 for http.
:param proxy_username: (optional) username for SOCKS5 proxy, or userid for SOCKS4 proxy. This parameter is ignored if an HTTP server is being used.
If it is not provided, authentication will not be used (servers may accept unauthenticated requests).
:param proxy_password: (optional) This parameter is valid only for SOCKS5 servers and specifies the respective password for the username provided.
"""
self._proxy_type = proxy_type
self._proxy_addr = proxy_addr
self._proxy_port = proxy_port
self._proxy_username = proxy_username
self._proxy_password = proxy_password
@property
def proxy_type(self):
return self._proxy_type
@property
def proxy_address(self):
return self._proxy_addr
@property
def proxy_port(self):
return self._proxy_port
@property
def proxy_username(self):
return self._proxy_username
@property
def proxy_password(self):
return self._proxy_password
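A hedged sketch of constructing these options and handing them to the MQTT transport that appears later in this diff; the proxy address, hostname, client id, username format, and credential are all illustrative:

```python
import socks  # PySocks, which Paho uses for its proxy support

proxy = ProxyOptions(
    proxy_type=socks.HTTP,
    proxy_addr="127.0.0.1",   # illustrative local proxy
    proxy_port=8888,
)

transport = MQTTTransport(
    client_id="example-device",
    hostname="example-hub.azure-devices.net",
    username="example-hub.azure-devices.net/example-device",
    websockets=True,          # optional; connects over port 443 instead of 8883
    proxy_options=proxy,
)
transport.connect(password="<SAS token>")  # illustrative credential
```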


@@ -7,61 +7,74 @@

import paho.mqtt.client as mqtt
import logging
import ssl
import sys
import threading
import traceback
import weakref
import socket
from . import transport_exceptions as exceptions
import socks

logger = logging.getLogger(__name__)

# Mapping of Paho CONNACK rc codes to Error object classes
# Used for connection callbacks
paho_connack_rc_to_error = {
    mqtt.CONNACK_REFUSED_PROTOCOL_VERSION: exceptions.ProtocolClientError,
    mqtt.CONNACK_REFUSED_IDENTIFIER_REJECTED: exceptions.ProtocolClientError,
    mqtt.CONNACK_REFUSED_SERVER_UNAVAILABLE: exceptions.ConnectionFailedError,
    mqtt.CONNACK_REFUSED_BAD_USERNAME_PASSWORD: exceptions.UnauthorizedError,
    mqtt.CONNACK_REFUSED_NOT_AUTHORIZED: exceptions.UnauthorizedError,
}

# Mapping of Paho rc codes to Error object classes
# Used for responses to Paho APIs and non-connection callbacks
paho_rc_to_error = {
    mqtt.MQTT_ERR_NOMEM: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_PROTOCOL: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_INVAL: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_NO_CONN: exceptions.ConnectionDroppedError,
    mqtt.MQTT_ERR_CONN_REFUSED: exceptions.ConnectionFailedError,
    mqtt.MQTT_ERR_NOT_FOUND: exceptions.ConnectionFailedError,
    mqtt.MQTT_ERR_CONN_LOST: exceptions.ConnectionDroppedError,
    mqtt.MQTT_ERR_TLS: exceptions.UnauthorizedError,
    mqtt.MQTT_ERR_PAYLOAD_SIZE: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_NOT_SUPPORTED: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_AUTH: exceptions.UnauthorizedError,
    mqtt.MQTT_ERR_ACL_DENIED: exceptions.UnauthorizedError,
    mqtt.MQTT_ERR_UNKNOWN: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_ERRNO: exceptions.ProtocolClientError,
    mqtt.MQTT_ERR_QUEUE_SIZE: exceptions.ProtocolClientError,
}

# Default keepalive. Paho sends a PINGREQ using this interval
# to make sure the connection is still open.
DEFAULT_KEEPALIVE = 60


def _create_error_from_connack_rc_code(rc):
    """
    Given a paho CONNACK rc code, return an Exception that can be raised
    """
    message = mqtt.connack_string(rc)
    if rc in paho_connack_rc_to_error:
        return paho_connack_rc_to_error[rc](message)
    else:
        return exceptions.ProtocolClientError("Unknown CONNACK rc={}".format(rc))


def _create_error_from_rc_code(rc):
    """
    Given a paho rc code, return an Exception that can be raised
    """
    if rc == 1:
        # Paho returns rc=1 to mean "something went wrong, stop". We manually translate this to a ConnectionDroppedError.
        return exceptions.ConnectionDroppedError("Paho returned rc==1")
    elif rc in paho_rc_to_error:
        message = mqtt.error_string(rc)
        return paho_rc_to_error[rc](message)
    else:
        return exceptions.ProtocolClientError("Unknown CONNACK rc=={}".format(rc))
class MQTTTransport(object):
@@ -78,21 +91,37 @@ class MQTTTransport(object):
    :type on_mqtt_connection_failure_handler: Function
    """

    def __init__(
        self,
        client_id,
        hostname,
        username,
        server_verification_cert=None,
        x509_cert=None,
        websockets=False,
        cipher=None,
        proxy_options=None,
    ):
        """
        Constructor to instantiate an MQTT protocol wrapper.

        :param str client_id: The id of the client connecting to the broker.
        :param str hostname: Hostname or IP address of the remote broker.
        :param str username: Username for login to the remote broker.
        :param str server_verification_cert: Certificate which can be used to validate a server-side TLS connection (optional).
        :param x509_cert: Certificate which can be used to authenticate connection to a server in lieu of a password (optional).
        :param bool websockets: Indicates whether or not to enable a websockets connection in the Transport.
        :param str cipher: Cipher string in OpenSSL cipher list format
        :param proxy_options: Options for sending traffic through proxy servers.
        """
        self._client_id = client_id
        self._hostname = hostname
        self._username = username
        self._mqtt_client = None
        self._server_verification_cert = server_verification_cert
        self._x509_cert = x509_cert
        self._websockets = websockets
        self._cipher = cipher
        self._proxy_options = proxy_options

        self.on_mqtt_connected_handler = None
        self.on_mqtt_disconnected_handler = None
@@ -109,25 +138,53 @@ class MQTTTransport(object):
        """
        logger.info("creating mqtt client")

        # Instantiate the client
        if self._websockets:
            logger.info("Creating client for connecting using MQTT over websockets")
            mqtt_client = mqtt.Client(
                client_id=self._client_id,
                clean_session=False,
                protocol=mqtt.MQTTv311,
                transport="websockets",
            )
            mqtt_client.ws_set_options(path="/$iothub/websocket")
        else:
            logger.info("Creating client for connecting using MQTT over TCP")
            mqtt_client = mqtt.Client(
                client_id=self._client_id, clean_session=False, protocol=mqtt.MQTTv311
            )

        if self._proxy_options:
            mqtt_client.proxy_set(
                proxy_type=self._proxy_options.proxy_type,
                proxy_addr=self._proxy_options.proxy_address,
                proxy_port=self._proxy_options.proxy_port,
                proxy_username=self._proxy_options.proxy_username,
                proxy_password=self._proxy_options.proxy_password,
            )

        mqtt_client.enable_logger(logging.getLogger("paho"))

        # Configure TLS/SSL
        ssl_context = self._create_ssl_context()
        mqtt_client.tls_set_context(context=ssl_context)

        # Set event handlers. Use weak references back into this object to prevent
        # leaks on Python 2.7. See callable_weak_method.py and PEP 442 for explanation.
        #
        # We don't use the CallableWeakMethod object here because these handlers
        # are not methods.
        self_weakref = weakref.ref(self)
        def on_connect(client, userdata, flags, rc):
            this = self_weakref()
            logger.info("connected with result code: {}".format(rc))

            if rc:  # i.e. if there is an error
                if this.on_mqtt_connection_failure_handler:
                    try:
                        this.on_mqtt_connection_failure_handler(
                            _create_error_from_connack_rc_code(rc)
                        )
                    except Exception:
                        logger.error("Unexpected error calling on_mqtt_connection_failure_handler")

@@ -136,9 +193,9 @@ class MQTTTransport(object):
                    logger.warning(
                        "connection failed, but no on_mqtt_connection_failure_handler handler callback provided"
                    )
            elif this.on_mqtt_connected_handler:
                try:
                    this.on_mqtt_connected_handler()
                except Exception:
                    logger.error("Unexpected error calling on_mqtt_connected_handler")
                    logger.error(traceback.format_exc())

@@ -146,15 +203,18 @@ class MQTTTransport(object):
                logger.warning("No event handler callback set for on_mqtt_connected_handler")

        def on_disconnect(client, userdata, rc):
            this = self_weakref()
            logger.info("disconnected with result code: {}".format(rc))

            cause = None
            if rc:  # i.e. if there is an error
                logger.debug("".join(traceback.format_stack()))
                cause = _create_error_from_rc_code(rc)
                this._stop_automatic_reconnect()

            if this.on_mqtt_disconnected_handler:
                try:
                    this.on_mqtt_disconnected_handler(cause)
                except Exception:
                    logger.error("Unexpected error calling on_mqtt_disconnected_handler")
                    logger.error(traceback.format_exc())
@@ -162,29 +222,33 @@ class MQTTTransport(object):
                logger.warning("No event handler callback set for on_mqtt_disconnected_handler")

        def on_subscribe(client, userdata, mid, granted_qos):
            this = self_weakref()
            logger.info("suback received for {}".format(mid))
            # subscribe failures are returned from the subscribe() call. This is just
            # a notification that a SUBACK was received, so there is no failure case here
            this._op_manager.complete_operation(mid)

        def on_unsubscribe(client, userdata, mid):
            this = self_weakref()
            logger.info("UNSUBACK received for {}".format(mid))
            # unsubscribe failures are returned from the unsubscribe() call. This is just
            # a notification that an UNSUBACK was received, so there is no failure case here
            this._op_manager.complete_operation(mid)

        def on_publish(client, userdata, mid):
            this = self_weakref()
            logger.info("payload published for {}".format(mid))
            # publish failures are returned from the publish() call. This is just
            # a notification that a PUBACK was received, so there is no failure case here
            this._op_manager.complete_operation(mid)

        def on_message(client, userdata, mqtt_message):
            this = self_weakref()
            logger.info("message received on {}".format(mqtt_message.topic))

            if this.on_mqtt_message_received_handler:
                try:
                    this.on_mqtt_message_received_handler(mqtt_message.topic, mqtt_message.payload)
                except Exception:
                    logger.error("Unexpected error calling on_mqtt_message_received_handler")
                    logger.error(traceback.format_exc())
@@ -203,6 +267,40 @@ class MQTTTransport(object):
        logger.debug("Created MQTT protocol client, assigned callbacks")
        return mqtt_client
def _stop_automatic_reconnect(self):
"""
After disconnecting because of an error, Paho will attempt to reconnect (some of the time --
this isn't 100% reliable). We don't want Paho to reconnect because we want to control the
timing of the reconnect, so we force the connection closed.
We are relying on intimate knowledge of Paho behavior here. If this becomes a problem,
it may be necessary to write our own Paho thread and stop using loop_start()/loop_stop().
This is certainly supported by Paho, but the thread that Paho provides works well enough
(so far) and making our own would be more complex than is currently justified.
"""
logger.info("Forcing paho disconnect to prevent it from automatically reconnecting")
# Note: We are calling this inside our on_disconnect() handler, so we are inside the
# Paho thread at this point. This is perfectly valid. Comments in Paho's client.py
# loop_forever() function recommend calling disconnect() from a callback to exit the
# Paho thread/loop.
self._mqtt_client.disconnect()
# Calling disconnect() isn't enough. We also need to call loop_stop to make sure
# Paho is as clean as possible. Our call to disconnect() above is enough to stop the
# loop and exit the thread, but the call to loop_stop() is necessary to complete the cleanup.
self._mqtt_client.loop_stop()
# Finally, because of a bug in Paho, we need to null out the _thread pointer. This
# is necessary because the code that sets _thread to None only gets called if you
# call loop_stop from an external thread (and we're still inside the Paho thread here).
self._mqtt_client._thread = None
logger.debug("Done forcing paho disconnect")
def _create_ssl_context(self):
"""
This method creates the SSLContext object used by Paho to authenticate the connection.
@ -210,12 +308,17 @@ class MQTTTransport(object):
logger.debug("creating a SSL context")
ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLSv1_2)
if self._server_verification_cert:
ssl_context.load_verify_locations(cadata=self._server_verification_cert)
else:
ssl_context.load_default_certs()
if self._cipher:
try:
ssl_context.set_ciphers(self._cipher)
except ssl.SSLError as e:
# TODO: custom error with more detail?
raise e
if self._x509_cert is not None:
logger.debug("configuring SSL context with client-side certificate and key")
@ -225,6 +328,9 @@ class MQTTTransport(object):
self._x509_cert.pass_phrase,
)
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
return ssl_context
def connect(self, password=None):
@ -235,45 +341,118 @@ class MQTTTransport(object):
The password is not required if the transport was instantiated with an x509 certificate.
If MQTT connection has been proxied, connection will take a bit longer to allow negotiation
with the proxy server. Any errors in the proxy connection process will trigger exceptions
:param str password: The password for connecting with the MQTT broker (Optional).
:raises: ConnectionFailedError if connection could not be established.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: UnauthorizedError if there is an error authenticating.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("connecting to mqtt broker")
self._mqtt_client.username_pw_set(username=self._username, password=password)
try:
if self._websockets:
logger.info("Connect using port 443 (websockets)")
rc = self._mqtt_client.connect(
host=self._hostname, port=443, keepalive=DEFAULT_KEEPALIVE
)
else:
logger.info("Connect using port 8883 (TCP)")
rc = self._mqtt_client.connect(
host=self._hostname, port=8883, keepalive=DEFAULT_KEEPALIVE
)
except socket.error as e:
# Only this type will raise a special error
# To stop it from retrying.
if (
isinstance(e, ssl.SSLError)
and e.strerror is not None
and "CERTIFICATE_VERIFY_FAILED" in e.strerror
):
raise exceptions.TlsExchangeAuthError(cause=e)
elif isinstance(e, socks.ProxyError):
if isinstance(e, socks.SOCKS5AuthError):
# TODO: This is the only one I felt like specializing
raise exceptions.UnauthorizedError(cause=e)
else:
raise exceptions.ProtocolProxyError(cause=e)
else:
# If the socket can't open (e.g. using iptables REJECT), we get a
# socket.error. Convert this into ConnectionFailedError so we can retry
raise exceptions.ConnectionFailedError(cause=e)
except socks.ProxyError as pe:
if isinstance(pe, socks.SOCKS5AuthError):
raise exceptions.UnauthorizedError(cause=pe)
else:
raise exceptions.ProtocolProxyError(cause=pe)
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during connect", cause=e
)
logger.debug("_mqtt_client.connect returned rc={}".format(rc)) logger.debug("_mqtt_client.connect returned rc={}".format(rc))
if rc: if rc:
raise _create_error_from_rc_code(rc) raise _create_error_from_rc_code(rc)
self._mqtt_client.loop_start() self._mqtt_client.loop_start()
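# Illustrative sketch (not part of the commit): one way caller code might drive connect(),
# separating auth failures from transient network failures. `transport` and `sas_token`
# are hypothetical names, and the exception classes are assumed to come from
# azure.iot.device.common.transport_exceptions (imported elsewhere in this changeset).
from azure.iot.device.common import transport_exceptions as errors

def connect_with_sas(transport, sas_token):
    try:
        transport.connect(password=sas_token)
    except errors.UnauthorizedError:
        raise  # bad credentials or proxy auth - retrying will not help
    except errors.ConnectionFailedError:
        pass   # transient socket failure - the caller may retry with backoff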
def reauthorize_connection(self, password=None):
"""
Reauthorize with the MQTT broker, using username set at instantiation.
Connect should have previously been called in order to use this function.
The password is not required if the transport was instantiated with an x509 certificate.
:param str password: The password for reauthorizing with the MQTT broker (Optional).
:raises: ConnectionFailedError if connection could not be established.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: UnauthorizedError if there is an error authenticating.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("reauthorizing MQTT client")
self._mqtt_client.username_pw_set(username=self._username, password=password)
try:
rc = self._mqtt_client.reconnect()
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during reconnect", cause=e
)
logger.debug("_mqtt_client.reconnect returned rc={}".format(rc))
if rc:
# This could result in ConnectionFailedError, ConnectionDroppedError, UnauthorizedError
# or ProtocolClientError
raise _create_error_from_rc_code(rc)
def disconnect(self):
"""
Disconnect from the MQTT broker.
:raises: ProtocolClientError if there is some client error.
"""
logger.info("disconnecting MQTT client")
try:
rc = self._mqtt_client.disconnect()
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during disconnect", cause=e
)
logger.debug("_mqtt_client.disconnect returned rc={}".format(rc))
self._mqtt_client.loop_stop()
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
err = _create_error_from_rc_code(rc)
# If we get a ConnectionDroppedError, swallow it, because we have successfully disconnected!
if type(err) is exceptions.ConnectionDroppedError:
logger.warning("Dropped connection while disconnecting - swallowing error")
pass
else:
raise err
def subscribe(self, topic, qos=1, callback=None):
"""
@ -283,14 +462,25 @@ class MQTTTransport(object):
:param int qos: the desired quality of service level for the subscription. Defaults to 1.
:param callback: A callback to be triggered upon completion (Optional).
:return: message ID for the subscribe request.
:raises: ValueError if qos is not 0, 1 or 2.
:raises: ValueError if topic is None or has zero string length.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("subscribing to {} with qos {}".format(topic, qos))
try:
(rc, mid) = self._mqtt_client.subscribe(topic, qos=qos)
except ValueError:
raise
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during subscribe", cause=e
)
logger.debug("_mqtt_client.subscribe returned rc={}".format(rc))
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
raise _create_error_from_rc_code(rc)
self._op_manager.establish_operation(mid, callback)
@ -301,12 +491,22 @@ class MQTTTransport(object):
:param str topic: a single string which is the subscription topic to unsubscribe from.
:param callback: A callback to be triggered upon completion (Optional).
:raises: ValueError if topic is None or has zero string length.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("unsubscribing from {}".format(topic))
try:
(rc, mid) = self._mqtt_client.unsubscribe(topic)
except ValueError:
raise
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during unsubscribe", cause=e
)
logger.debug("_mqtt_client.unsubscribe returned rc={}".format(rc))
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
raise _create_error_from_rc_code(rc)
self._op_manager.establish_operation(mid, callback)
@ -315,7 +515,8 @@ class MQTTTransport(object):
Send a message via the MQTT broker.
:param str topic: The topic that the message should be published on.
:param payload: The actual message to send.
:type payload: str, bytes, int, float or None
:param int qos: the desired quality of service level for the message. Defaults to 1.
:param callback: A callback to be triggered upon completion (Optional).
@ -323,11 +524,24 @@ class MQTTTransport(object):
:raises: ValueError if topic is None or has zero string length
:raises: ValueError if topic contains a wildcard ("+")
:raises: ValueError if the length of the payload is greater than 268435455 bytes
:raises: TypeError if payload is not a valid type
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("publishing on {}".format(topic))
try:
(rc, mid) = self._mqtt_client.publish(topic=topic, payload=payload, qos=qos)
except ValueError:
raise
except TypeError:
raise
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during publish", cause=e
)
logger.debug("_mqtt_client.publish returned rc={}".format(rc))
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
raise _create_error_from_rc_code(rc)
self._op_manager.establish_operation(mid, callback)
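# Illustrative sketch (not part of the commit): publishing with a completion callback.
# `transport` is a hypothetical MQTTTransport instance; topic/payload are made-up values,
# and the callback is assumed to take no arguments (it is purely a completion notification).
def send_example_telemetry(transport):
    def on_published():
        print("PUBACK received - telemetry delivered to the broker")

    # publish() returns as soon as Paho queues the message; the callback fires later,
    # when on_publish()/complete_operation() run for the matching mid.
    transport.publish(
        topic="devices/example-device/messages/events/",
        payload='{"temperature": 21.5}',
        qos=1,
        callback=on_published,
    )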
@ -385,7 +599,7 @@ class OperationManager(object):
logger.error("Unexpected error calling callback for MID: {}".format(mid)) logger.error("Unexpected error calling callback for MID: {}".format(mid))
logger.error(traceback.format_exc()) logger.error(traceback.format_exc())
else: else:
logger.warning("No callback for MID: {}".format(mid)) logger.exception("No callback for MID: {}".format(mid))
def complete_operation(self, mid): def complete_operation(self, mid):
"""Complete an operation identified by MID and trigger the associated completion callback. """Complete an operation identified by MID and trigger the associated completion callback.


@ -7,3 +7,4 @@ INTERNAL USAGE ONLY
from .pipeline_events_base import PipelineEvent
from .pipeline_ops_base import PipelineOperation
from .pipeline_stages_base import PipelineStage
from .pipeline_exceptions import OperationCancelled


@ -0,0 +1,47 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import six
import abc
logger = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BasePipelineConfig(object):
"""A base class for storing all configurations/options shared across the Azure IoT Python Device Client Library.
More specific configurations such as those that only apply to the IoT Hub Client will be found in the respective
config files.
"""
def __init__(self, websockets=False, cipher="", proxy_options=None):
"""Initializer for BasePipelineConfig
:param bool websockets: Enabling/disabling websockets in MQTT. This feature is relevant
if a firewall blocks port 8883 from use.
:param cipher: Optional cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
"""
self.websockets = websockets
self.cipher = self._sanitize_cipher(cipher)
self.proxy_options = proxy_options
@staticmethod
def _sanitize_cipher(cipher):
"""Sanitize the cipher input and convert to a string in OpenSSL list format
"""
if isinstance(cipher, list):
cipher = ":".join(cipher)
if isinstance(cipher, str):
cipher = cipher.upper()
cipher = cipher.replace("_", "-")
else:
raise TypeError("Invalid type for 'cipher'")
return cipher
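# Illustrative sketch (not part of the commit): what _sanitize_cipher() produces.
# A list of suite names is joined with ":", upper-cased, and underscores become dashes,
# yielding a single OpenSSL cipher-list string. The suite names below are only examples.
suites = ["tls_aes_256_gcm_sha384", "ecdhe_rsa_with_aes_128_gcm_sha256"]
assert BasePipelineConfig._sanitize_cipher(suites) == (
    "TLS-AES-256-GCM-SHA384:ECDHE-RSA-WITH-AES-128-GCM-SHA256"
)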


@ -1,137 +0,0 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import sys
from . import pipeline_thread
from azure.iot.device.common import unhandled_exceptions
from six.moves import queue
logger = logging.getLogger(__name__)
@pipeline_thread.runs_on_pipeline_thread
def delegate_to_different_op(stage, original_op, new_op):
"""
Continue an operation using a new operation. This means that the new operation
will be passed down the pipeline (starting at the next stage). When that new
operation completes, the original operation will also complete. In this way,
a stage can accept one type of operation and, effectively, change that operation
into a different type of operation before passing it to the next stage.
This is useful when a generic operation (such as "enable feature") needs to be
converted into a more specific operation (such as "subscribe to mqtt topic").
In that case, a stage's _execute_op function would call this function passing in
the original "enable feature" op and the new "subscribe to mqtt topic"
op. This function will pass the "subscribe" down. When the "subscribe" op
is completed, this function will cause the original op to complete.
This function is only really useful if there is no data returned in the
new_op that needs to be copied back into the original_op before
completing it. If data needs to be copied this way, some other method needs
to be used. (or a "copy data back" function needs to be added to this function
as an optional parameter.)
:param PipelineStage stage: stage to delegate the operation to
:param PipelineOperation original_op: Operation that is being continued using a
different op. This is most likely the operation that is currently being handled
by the stage. This operation is not actually continued, in that it is not
actually passed down the pipeline. Instead, the original_op operation is
effectively paused while we wait for the new_op operation to complete. When
the new_op operation completes, the original_op operation will also be completed.
:param PipelineOperation new_op: Operation that is being passed down the pipeline
to effectively continue the work represented by original_op. This is most likely
a different type of operation that is able to accomplish the intention of the
original_op in a way that is more specific than the original_op.
"""
logger.debug("{}({}): continuing with {} op".format(stage.name, original_op.name, new_op.name))
@pipeline_thread.runs_on_pipeline_thread
def new_op_complete(op):
logger.debug(
"{}({}): completing with result from {}".format(
stage.name, original_op.name, new_op.name
)
)
original_op.error = new_op.error
complete_op(stage, original_op)
new_op.callback = new_op_complete
pass_op_to_next_stage(stage, new_op)
@pipeline_thread.runs_on_pipeline_thread
def pass_op_to_next_stage(stage, op):
"""
Helper function to continue a given operation by passing it to the next stage
in the pipeline. If there is no next stage in the pipeline, this function
will fail the operation and call complete_op to return the failure back up the
pipeline. If the operation is already in an error state, this function will
complete the operation in order to return that error to the caller.
:param PipelineStage stage: stage that the operation is being passed from
:param PipelineOperation op: Operation which is being passed on
"""
if op.error:
logger.error("{}({}): op has error. completing.".format(stage.name, op.name))
complete_op(stage, op)
elif not stage.next:
logger.error("{}({}): no next stage. completing with error".format(stage.name, op.name))
op.error = NotImplementedError(
"{} not handled after {} stage with no next stage".format(op.name, stage.name)
)
complete_op(stage, op)
else:
logger.debug("{}({}): passing to next stage.".format(stage.name, op.name))
stage.next.run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def complete_op(stage, op):
"""
Helper function to complete an operation by calling its callback function thus
returning the result of the operation back up the pipeline. This is preferred to
calling the operation's callback directly as it provides several layers of protection
(such as a try/except wrapper) which are strongly advised.
"""
if op.error:
logger.error("{}({}): completing with error {}".format(stage.name, op.name, op.error))
else:
logger.debug("{}({}): completing without error".format(stage.name, op.name))
try:
op.callback(op)
except Exception as e:
_, e, _ = sys.exc_info()
logger.error(
msg="Unhandled error calling back inside {}.complete_op() after {} complete".format(
stage.name, op.name
),
exc_info=e,
)
unhandled_exceptions.exception_caught_in_background_thread(e)
@pipeline_thread.runs_on_pipeline_thread
def pass_event_to_previous_stage(stage, event):
"""
Helper function to pass an event to the previous stage of the pipeline. This is the default
behavior of events while traveling through the pipeline. They start somewhere (maybe the
bottom) and move up the pipeline until they're handled or until they error out.
"""
if stage.previous:
logger.debug(
"{}({}): pushing event up to {}".format(stage.name, event.name, stage.previous.name)
)
stage.previous.handle_pipeline_event(event)
else:
logger.error("{}({}): Error: unhandled event".format(stage.name, event.name))
error = NotImplementedError(
"{} unhandled at {} stage with no previous stage".format(event.name, stage.name)
)
unhandled_exceptions.exception_caught_in_background_thread(error)


@ -33,34 +33,49 @@ class PipelineEvent(object):
self.name = self.__class__.__name__
class ResponseEvent(PipelineEvent):
"""
A PipelineEvent object which is the second part of a RequestAndResponseOperation operation
(the response). The RequestAndResponseOperation represents the common operation of sending
a request to iothub with a request_id ($rid) value and waiting for a response with
the same $rid value. This convention is used by both Twin and Provisioning features.
The response represented by this event has not yet been matched to the corresponding
RequestOperation operation. That matching is done by the CoordinateRequestAndResponseStage
stage which takes the contents of this event and puts it into the RequestAndResponseOperation
operation with the matching $rid value.
:ivar request_id: The request ID which will eventually be used to match a RequestOperation
operation to this event.
:type request_id: str
:ivar status_code: The status code returned by the response. Any value under 300 is
considered success.
:type status_code: int
:ivar response_body: The body of the response.
:type response_body: str
:ivar retry_after: A retry interval value that was extracted from the topic.
:type retry_after: int
"""
def __init__(self, request_id, status_code, response_body, retry_after=None):
super(ResponseEvent, self).__init__()
self.request_id = request_id
self.status_code = status_code
self.response_body = response_body
self.retry_after = retry_after
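# Illustrative sketch (not part of the commit): how a protocol stage might surface a twin
# response as a ResponseEvent. The request id, status, and body below are made up.
event = ResponseEvent(
    request_id="3226c2f7-3d30-425c-b83b-0c34335f8220",
    status_code=200,
    response_body=b'{"desired": {}, "reported": {}}',
    retry_after=None,
)
# CoordinateRequestAndResponseStage later matches event.request_id against the pending
# RequestAndResponseOperation with the same $rid and copies the status/body onto it.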
class ConnectedEvent(PipelineEvent):
"""
A PipelineEvent object indicating a connection has been established.
"""
pass
class DisconnectedEvent(PipelineEvent):
"""
A PipelineEvent object indicating a connection has been dropped.
"""
pass


@ -0,0 +1,40 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines exceptions that may be raised from a pipeline"""
from azure.iot.device.common.chainable_exception import ChainableException
class PipelineException(ChainableException):
"""Generic pipeline exception"""
pass
class OperationCancelled(PipelineException):
"""Operation was cancelled"""
pass
class OperationError(PipelineException):
"""Error while executing an Operation"""
pass
class PipelineTimeoutError(PipelineException):
"""
Pipeline operation timed out
"""
pass
class PipelineError(PipelineException):
"""Error caused by incorrect pipeline configuration"""
pass


@ -3,6 +3,14 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import sys
import logging
import traceback
from . import pipeline_exceptions
from . import pipeline_thread
from azure.iot.device.common import handle_exceptions
logger = logging.getLogger(__name__)
class PipelineOperation(object):
@ -22,7 +30,7 @@ class PipelineOperation(object):
successfully or with a failure.
:type callback: Function
:ivar needs_connection: This is an attribute that indicates whether a particular operation
requires a connection to operate. This is currently used by the AutoConnectStage
stage, but this functionality will be revamped shortly.
:type needs_connection: Boolean
:ivar error: The presence of a value in the error attribute indicates that the operation failed,
@ -30,7 +38,7 @@ class PipelineOperation(object):
:type error: Error
"""
def __init__(self, callback):
"""
Initializer for PipelineOperation objects.
@ -43,10 +51,171 @@ class PipelineOperation(object):
"Cannot instantiate PipelineOperation object. You need to use a derived class" "Cannot instantiate PipelineOperation object. You need to use a derived class"
) )
self.name = self.__class__.__name__ self.name = self.__class__.__name__
self.callback = callback self.callback_stack = []
self.needs_connection = False self.needs_connection = False
self.completed = False # Operation has been fully completed
self.completing = False # Operation is in the process of completing
self.error = None # Error associated with Operation completion
self.add_callback(callback)
def add_callback(self, callback):
"""Adds a callback to the Operation that will be triggered upon Operation completion.
When an Operation is completed, all callbacks will be resolved in LIFO order.
Callbacks cannot be added to an already completed operation, or an operation that is
currently undergoing a completion process.
:param callback: The callback to add to the operation.
:raises: OperationError if the operation is already completed, or is in the process of
completing.
"""
if self.completed:
raise pipeline_exceptions.OperationError(
"{}: Attempting to add a callback to an already-completed operation!".format(
self.name
)
)
if self.completing:
raise pipeline_exceptions.OperationError(
"{}: Attempting to add a callback to a operation with completion in progress!".format(
self.name
)
)
else:
self.callback_stack.append(callback)
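# Illustrative sketch (not part of the commit): callbacks resolve in LIFO order when an
# operation completes. Assumes it runs on the pipeline thread (e.g. inside a stage's
# ._run_op()), since .complete() is pipeline-thread-only; ConnectOperation is used simply
# as a concrete PipelineOperation.
resolution_order = []
op = ConnectOperation(callback=lambda op, error: resolution_order.append("added first"))
op.add_callback(lambda op, error: resolution_order.append("added second"))
op.complete()
# resolution_order == ["added second", "added first"]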
@pipeline_thread.runs_on_pipeline_thread
def complete(self, error=None):
""" Complete the operation, and trigger all callbacks in LIFO order.
The operation is completed successfully by default, or completed unsuccessfully if an error
is provided.
An operation that is already fully completed, or in the process of completion cannot be
completed again.
This process can be halted if a callback for the operation invokes the .halt_completion()
method on this Operation.
:param error: Optionally provide an Exception object indicating the error that caused
the completion. Providing an error indicates that the operation was unsuccessful.
"""
if error:
logger.error("{}: completing with error {}".format(self.name, error))
else:
logger.debug("{}: completing without error".format(self.name))
if self.completed or self.completing:
logger.error("{}: has already been completed!".format(self.name))
e = pipeline_exceptions.OperationError(
"Attempting to complete an already-completed operation: {}".format(self.name)
)
# This could happen in a foreground or background thread, so err on the side of caution
# and send it to the background handler.
handle_exceptions.handle_background_exception(e)
else:
# Operation is now in the process of completing
self.completing = True
self.error = error
while self.callback_stack:
if not self.completing:
logger.debug("{}: Completion halted!".format(self.name))
break
if self.completed:
# This block should never be reached - this is an invalid state.
# If this block is reached, there is a bug in the code.
logger.error(
"{}: Invalid State! Operation completed while resolving completion".format(
self.name
)
)
e = pipeline_exceptions.OperationError(
"Operation reached fully completed state while still resolving completion: {}".format(
self.name
)
)
handle_exceptions.handle_background_exception(e)
break
callback = self.callback_stack.pop()
try:
callback(op=self, error=error)
except Exception as e:
logger.error(
"Unhandled error while triggering callback for {}".format(self.name)
)
logger.error(traceback.format_exc())
# This could happen in a foreground or background thread, so err on the side of caution
# and send it to the background handler.
handle_exceptions.handle_background_exception(e)
if self.completing:
# Operation is now completed, no longer in the process of completing
self.completing = False
self.completed = True
@pipeline_thread.runs_on_pipeline_thread
def halt_completion(self):
"""Halt the completion of an operation that is currently undergoing a completion process
as a result of a call to .complete().
Completion cannot be halted if there is no currently ongoing completion process. The only
way to successfully invoke this method is from within a callback on the Operation in
question.
This method will leave any yet-untriggered callbacks on the Operation to be triggered upon
a later completion.
This method will clear any error associated with the currently ongoing completion process
from the Operation.
"""
if not self.completing:
logger.error("{}: is not currently in the process of completion!".format(self.name))
e = pipeline_exceptions.OperationError(
"Attempting to halt completion of an operation not in the process of completion: {}".format(
self.name
)
)
handle_exceptions.handle_background_exception(e)
else:
logger.debug("{}: Halting completion...".format(self.name))
self.completing = False
self.error = None
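# Illustrative sketch (not part of the commit): a callback halting an in-progress completion,
# the way a retry stage might claim a failed op to re-run it later. Assumes it runs on the
# pipeline thread; ConnectOperation is just a concrete PipelineOperation for the example.
def claim_for_retry(op, error):
    if error:
        op.halt_completion()  # clears op.error and stops remaining callbacks from firing
        # ...a real stage would now schedule the op to be run again...

op = ConnectOperation(callback=lambda op, error: None)
op.add_callback(claim_for_retry)     # LIFO: runs before the originally supplied callback
op.complete(error=TimeoutError())    # completion is halted inside claim_for_retry
# op.completed is False, op.error is None, and the original callback has not fired yet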
@pipeline_thread.runs_on_pipeline_thread
def spawn_worker_op(self, worker_op_type, **kwargs):
"""Create and return a new operation, which, when completed, will complete the operation
it was spawned from.
:param worker_op_type: The type (class) of the new worker operation.
:param **kwargs: The arguments to instantiate the new worker operation with. Note that a
callback is not required, but if provided, will be triggered prior to completing the
operation that spawned the worker operation.
:returns: A new worker operation of the type specified in the worker_op_type parameter.
"""
logger.debug("{}: creating worker op of type {}".format(self.name, worker_op_type.__name__))
@pipeline_thread.runs_on_pipeline_thread
def on_worker_op_complete(op, error):
logger.debug("{}: Worker op ({}) has been completed".format(self.name, op.name))
self.complete(error=error)
if "callback" in kwargs:
provided_callback = kwargs["callback"]
kwargs["callback"] = on_worker_op_complete
worker_op = worker_op_type(**kwargs)
worker_op.add_callback(provided_callback)
else:
kwargs["callback"] = on_worker_op_complete
worker_op = worker_op_type(**kwargs)
return worker_op
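# Illustrative sketch (not part of the commit): spawning a worker op from a generic op.
# Written as it might appear inside a stage's ._run_op() on the pipeline thread, where
# `op` is e.g. an EnableFeatureOperation being handled and `self` is the stage; the MQTT
# topic below is only an example.
worker = op.spawn_worker_op(
    worker_op_type=pipeline_ops_mqtt.MQTTSubscribeOperation,
    topic="$iothub/twin/res/#",
)
self.send_op_down(worker)  # completing `worker` (on SUBACK) completes `op` as well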
class ConnectOperation(PipelineOperation):
"""
@ -57,17 +226,19 @@ class ConnectOperation(PipelineOperation):
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
def __init__(self, callback):
self.retry_timer = None
super(ConnectOperation, self).__init__(callback)
class ReauthorizeConnectionOperation(PipelineOperation):
"""
A PipelineOperation object which tells the pipeline to reauthorize the connection to whatever service it is connected to.
Clients will most-likely submit a ReauthorizeConnectionOperation when some credential (such as a sas token) has changed and the protocol client
needs to re-establish the connection to refresh the credentials
This operation is in the group of base operations because reauthorizing is a common operation that many clients might need to do.
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
@ -101,7 +272,7 @@ class EnableFeatureOperation(PipelineOperation):
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
def __init__(self, feature_name, callback):
"""
Initializer for EnableFeatureOperation objects.
@ -129,7 +300,7 @@ class DisableFeatureOperation(PipelineOperation):
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
def __init__(self, feature_name, callback):
"""
Initializer for DisableFeatureOperation objects.
@ -154,7 +325,7 @@ class UpdateSasTokenOperation(PipelineOperation):
(such as IoTHub or MQTT stages).
"""
def __init__(self, sas_token, callback):
"""
Initializer for UpdateSasTokenOperation objects.
@ -168,7 +339,7 @@ class UpdateSasTokenOperation(PipelineOperation):
self.sas_token = sas_token
class RequestAndResponseOperation(PipelineOperation):
"""
A PipelineOperation object which wraps the common operation of sending a request to iothub with a request_id ($rid)
value and waiting for a response with the same $rid value. This convention is used by both Twin and Provisioning
@ -185,11 +356,15 @@ class SendIotRequestAndWaitForResponseOperation(PipelineOperation):
:type status_code: int
:ivar response_body: The body of the response.
:type response_body: Undefined
:ivar query_params: Any query parameters that need to be sent with the request.
Example is the id of the operation as returned by the initial provisioning request.
"""
def __init__(
self, request_type, method, resource_location, request_body, callback, query_params=None
):
"""
Initializer for RequestAndResponseOperation objects
:param str request_type: The type of request. This is a string which is used by protocol-specific stages to
generate the actual request. For example, if request_type is "twin", then the iothub_mqtt stage will convert
@ -204,29 +379,37 @@ class SendIotRequestAndWaitForResponseOperation(PipelineOperation):
failed. The callback function must accept a PipelineOperation object which indicates
the specific operation which has completed or failed.
"""
super(RequestAndResponseOperation, self).__init__(callback=callback)
self.request_type = request_type
self.method = method
self.resource_location = resource_location
self.request_body = request_body
self.status_code = None
self.response_body = None
self.query_params = query_params
class RequestOperation(PipelineOperation):
"""
A PipelineOperation object which is the first part of a RequestAndResponseOperation operation (the request). The second
part of the RequestAndResponseOperation operation (the response) is returned via a ResponseEvent event.
Even though this is a base operation, it will most likely be generated and also handled by more specific stages
(such as IoTHub or MQTT stages).
"""
def __init__(
self,
request_type,
method,
resource_location,
request_body,
request_id,
callback,
query_params=None,
):
"""
Initializer for RequestOperation objects
:param str request_type: The type of request. This is a string which is used by protocol-specific stages to
generate the actual request. For example, if request_type is "twin", then the iothub_mqtt stage will convert
@ -240,10 +423,13 @@ class SendIotRequestOperation(PipelineOperation):
:param Function callback: The function that gets called when this operation is complete or has
failed. The callback function must accept a PipelineOperation object which indicates
the specific operation which has completed or failed.
:type query_params: Any query parameters that need to be sent with the request.
Example is the id of the operation as returned by the initial provisioning request.
"""
super(RequestOperation, self).__init__(callback=callback)
self.method = method
self.resource_location = resource_location
self.request_type = request_type
self.request_body = request_body
self.request_id = request_id
self.query_params = query_params


@ -0,0 +1,65 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from . import PipelineOperation
class SetHTTPConnectionArgsOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to connect to a server using the HTTP protocol.
This operation is in the group of HTTP operations because its attributes are very specific to the HTTP protocol.
"""
def __init__(
self, hostname, callback, server_verification_cert=None, client_cert=None, sas_token=None
):
"""
Initializer for SetHTTPConnectionArgsOperation objects.
:param str hostname: The hostname of the HTTP server we will eventually connect to
:param str server_verification_cert: (Optional) The server verification certificate to use
if the HTTP server that we're going to connect to uses server-side TLS
:param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect
to the HTTP service
:param str sas_token: The token string which will be used to authenticate with the service
:param Function callback: The function that gets called when this operation is complete or has failed.
The callback function must accept A PipelineOperation object which indicates the specific operation which
has completed or failed.
"""
super(SetHTTPConnectionArgsOperation, self).__init__(callback=callback)
self.hostname = hostname
self.server_verification_cert = server_verification_cert
self.client_cert = client_cert
self.sas_token = sas_token
class HTTPRequestAndResponseOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to connect to a server using the HTTP protocol.
This operation is in the group of HTTP operations because its attributes are very specific to the HTTP protocol.
"""
def __init__(self, method, path, headers, body, query_params, callback):
"""
Initializer for HTTPRequestAndResponseOperation objects.
:param str method: The HTTP method used in the request
:param str path: The path to be used in the request url
:param dict headers: The headers to be used in the HTTP request
:param str body: The body to be provided with the HTTP request
:param str query_params: The query parameters to be used in the request url
:param Function callback: The function that gets called when this operation is complete or has failed.
The callback function must accept A PipelineOperation object which indicates the specific operation which
has completed or failed.
"""
super(HTTPRequestAndResponseOperation, self).__init__(callback=callback)
self.method = method
self.path = path
self.headers = headers
self.body = body
self.query_params = query_params
self.status_code = None
self.response_body = None
self.reason = None


@ -18,10 +18,10 @@ class SetMQTTConnectionArgsOperation(PipelineOperation):
client_id,
hostname,
username,
callback,
server_verification_cert=None,
client_cert=None,
sas_token=None,
):
"""
Initializer for SetMQTTConnectionArgsOperation objects.
@ -29,8 +29,8 @@ class SetMQTTConnectionArgsOperation(PipelineOperation):
:param str client_id: The client identifier to use when connecting to the MQTT server
:param str hostname: The hostname of the MQTT server we will eventually connect to
:param str username: The username to use when connecting to the MQTT server
:param str server_verification_cert: (Optional) The server verification certificate to use
if the MQTT server that we're going to connect to uses server-side TLS
:param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect
to the MQTT service
:param str sas_token: The token string which will be used to authenticate with the service
@ -42,7 +42,7 @@ class SetMQTTConnectionArgsOperation(PipelineOperation):
self.client_id = client_id
self.hostname = hostname
self.username = username
self.server_verification_cert = server_verification_cert
self.client_cert = client_cert
self.sas_token = sas_token
@ -54,7 +54,7 @@ class MQTTPublishOperation(PipelineOperation):
This operation is in the group of MQTT operations because its attributes are very specific to the MQTT protocol.
"""
def __init__(self, topic, payload, callback):
"""
Initializer for MQTTPublishOperation objects.
@ -68,6 +68,7 @@ class MQTTPublishOperation(PipelineOperation):
self.topic = topic
self.payload = payload
self.needs_connection = True
self.retry_timer = None
class MQTTSubscribeOperation(PipelineOperation):
@ -77,7 +78,7 @@ class MQTTSubscribeOperation(PipelineOperation):
This operation is in the group of MQTT operations because its attributes are very specific to the MQTT protocol.
"""
def __init__(self, topic, callback):
"""
Initializer for MQTTSubscribeOperation objects.
@ -89,6 +90,8 @@ class MQTTSubscribeOperation(PipelineOperation):
super(MQTTSubscribeOperation, self).__init__(callback=callback)
self.topic = topic
self.needs_connection = True
self.timeout_timer = None
self.retry_timer = None
class MQTTUnsubscribeOperation(PipelineOperation):
@ -98,7 +101,7 @@ class MQTTUnsubscribeOperation(PipelineOperation):
This operation is in the group of MQTT operations because its attributes are very specific to the MQTT protocol.
"""
def __init__(self, topic, callback):
"""
Initializer for MQTTUnsubscribeOperation objects.
@ -110,3 +113,5 @@ class MQTTUnsubscribeOperation(PipelineOperation):
super(MQTTUnsubscribeOperation, self).__init__(callback=callback)
self.topic = topic
self.needs_connection = True
self.timeout_timer = None
self.retry_timer = None


@ -7,13 +7,19 @@
import logging
import abc
import six
import sys
import time
import traceback
import uuid
import weakref
from six.moves import queue
import threading
from . import pipeline_events_base
from . import pipeline_ops_base, pipeline_ops_mqtt
from . import pipeline_thread
from . import pipeline_exceptions
from azure.iot.device.common import handle_exceptions, transport_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
logger = logging.getLogger(__name__)
@ -43,11 +49,11 @@ class PipelineStage(object):
(use an auth provider) and converts it into something more generic (here is your device_id, etc, and use
this SAS token when connecting).
An example of a generic-to-specific stage is IoTHubMQTTTranslationStage which converts IoTHub operations
(such as SendD2CMessageOperation) to MQTT operations (such as Publish).
Each stage should also work in the broadest domain possible. For example a generic stage (say
"AutoConnectStage") that initiates a connection if any arbitrary operation needs a connection is more useful
than having some MQTT-specific code that re-connects to the MQTT broker if the user calls Publish and
there's no connection.
@ -81,7 +87,7 @@ class PipelineStage(object):
def run_op(self, op):
"""
Run the given operation. This is the public function that outside callers would call to run an
operation. Derived classes should override the private _run_op function to implement
stage-specific behavior. When run_op returns, that doesn't mean that the operation has executed
to completion. Rather, it means that the pipeline has done something that will cause the
operation to eventually execute to completion. That might mean that something was sent over
@ -92,29 +98,29 @@ class PipelineStage(object):
:param PipelineOperation op: The operation to run.
"""
logger.debug("{}({}): running".format(self.name, op.name))
try:
self._run_op(op)
except Exception as e:
# This path is ONLY for unexpected errors. Expected errors should cause a fail completion
# within ._run_op()
# Do not use exc_info parameter on logger.error. This causes pytest to save the traceback which saves stack frames which shows up as a leak
logger.error(msg="Unexpected error in {}._run_op() call".format(self))
logger.error(traceback.format_exc())
op.complete(error=e)
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
"""
Implementation of the stage-specific function of .run_op(). Override this method instead of
.run_op() in child classes in order to change how a stage behaves when running an operation.
See the description of the .run_op() method for more discussion on what it means to "run"
an operation.
:param PipelineOperation op: The operation to run.
"""
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def handle_pipeline_event(self, event):
@ -129,10 +135,10 @@ class PipelineStage(object):
try:
self._handle_pipeline_event(event)
except Exception as e:
# Do not use exc_info parameter on logger.error. This causes pytest to save the traceback which saves stack frames which shows up as a leak
logger.error(msg="Unexpected error in {}._handle_pipeline_event() call".format(self))
logger.error(traceback.format_exc())
handle_exceptions.handle_background_exception(e)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
@ -143,23 +149,42 @@ class PipelineStage(object):
:param PipelineEvent event: The event that is being passed back up the pipeline
"""
self.send_event_up(event)
@pipeline_thread.runs_on_pipeline_thread
def send_op_down(self, op):
"""
Helper function to continue a given operation by passing it to the next stage
in the pipeline. If there is no next stage in the pipeline, this function
will fail the operation and call complete_op to return the failure back up the
pipeline.
:param PipelineOperation op: Operation which is being passed on
"""
if not self.next:
logger.error("{}({}): no next stage. completing with error".format(self.name, op.name))
error = pipeline_exceptions.PipelineError(
"{} not handled after {} stage with no next stage".format(op.name, self.name)
)
op.complete(error=error)
else:
self.next.run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def send_event_up(self, event):
"""
Helper function to pass an event to the previous stage of the pipeline. This is the default
behavior of events while traveling through the pipeline. They start somewhere (maybe the
bottom) and move up the pipeline until they're handled or until they error out.
"""
if self.previous:
self.previous.handle_pipeline_event(event)
else:
logger.error("{}({}): Error: unhandled event".format(self.name, event.name))
error = pipeline_exceptions.PipelineError(
"{} unhandled at {} stage with no previous stage".format(event.name, self.name)
)
handle_exceptions.handle_background_exception(error)
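
For orientation, a minimal sketch of a custom stage built on these helpers follows. The stage itself (`ExampleLoggingStage`) is hypothetical and not part of the SDK; it only illustrates how subclasses are expected to override `_run_op()` / `_handle_pipeline_event()` and fall back to `send_op_down()` / `send_event_up()`.

```
# Hypothetical stage, for illustration only -- not part of the SDK.
class ExampleLoggingStage(PipelineStage):
    @pipeline_thread.runs_on_pipeline_thread
    def _run_op(self, op):
        # A stage special-cases only the operations it cares about and
        # forwards everything else down the pipeline unchanged.
        logger.debug("{}({}): passing op down".format(self.name, op.name))
        self.send_op_down(op)

    @pipeline_thread.runs_on_pipeline_thread
    def _handle_pipeline_event(self, event):
        # Events travel the opposite direction: from the transport at the
        # bottom of the pipeline up toward the client.
        logger.debug("{}({}): passing event up".format(self.name, event.name))
        self.send_event_up(event)
```
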
class PipelineRootStage(PipelineStage): class PipelineRootStage(PipelineStage):
@ -181,42 +206,36 @@ class PipelineRootStage(PipelineStage):
:type on_disconnected_handler: Function :type on_disconnected_handler: Function
""" """
def __init__(self): def __init__(self, pipeline_configuration):
super(PipelineRootStage, self).__init__() super(PipelineRootStage, self).__init__()
self.on_pipeline_event_handler = None self.on_pipeline_event_handler = None
self.on_connected_handler = None self.on_connected_handler = None
self.on_disconnected_handler = None self.on_disconnected_handler = None
self.connected = False self.connected = False
self.pipeline_configuration = pipeline_configuration
def run_op(self, op): def run_op(self, op):
op.callback = pipeline_thread.invoke_on_callback_thread_nowait(op.callback) # CT-TODO: make this more elegant
op.callback_stack[0] = pipeline_thread.invoke_on_callback_thread_nowait(
op.callback_stack[0]
)
pipeline_thread.invoke_on_pipeline_thread(super(PipelineRootStage, self).run_op)(op) pipeline_thread.invoke_on_pipeline_thread(super(PipelineRootStage, self).run_op)(op)
@pipeline_thread.runs_on_pipeline_thread def append_stage(self, new_stage):
def _execute_op(self, op):
"""
run the operation. At the root, the only thing to do is to pass the operation
to the next stage.
:param PipelineOperation op: Operation to run.
"""
operation_flow.pass_op_to_next_stage(self, op)
def append_stage(self, new_next_stage):
""" """
Add the next stage to the end of the pipeline. This is the function that callers Add the next stage to the end of the pipeline. This is the function that callers
use to build the pipeline by appending stages. This function returns the root of use to build the pipeline by appending stages. This function returns the root of
the pipeline so that calls to this function can be chained together. the pipeline so that calls to this function can be chained together.
:param PipelineStage new_next_stage: Stage to add to the end of the pipeline :param PipelineStage new_stage: Stage to add to the end of the pipeline
:returns: The root of the pipeline. :returns: The root of the pipeline.
""" """
old_tail = self old_tail = self
while old_tail.next: while old_tail.next:
old_tail = old_tail.next old_tail = old_tail.next
old_tail.next = new_next_stage old_tail.next = new_stage
new_next_stage.previous = old_tail new_stage.previous = old_tail
new_next_stage.pipeline_root = self new_stage.pipeline_root = self
return self return self
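
Because `append_stage()` returns the pipeline root, construction can be chained. A rough sketch follows; the particular stages and their order are illustrative, not the SDK's actual pipeline definition.

```
# Illustrative only: the real stage list and ordering live elsewhere in the SDK.
pipeline = (
    PipelineRootStage(pipeline_configuration)
    .append_stage(AutoConnectStage())
    .append_stage(ConnectionLockStage())
    .append_stage(CoordinateRequestAndResponseStage())
)
```
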
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
@ -229,42 +248,39 @@ class PipelineRootStage(PipelineStage):
:param PipelineEvent event: Event to be handled, i.e. returned to the caller :param PipelineEvent event: Event to be handled, i.e. returned to the caller
through the handle_pipeline_event (if provided). through the handle_pipeline_event (if provided).
""" """
if self.on_pipeline_event_handler: if isinstance(event, pipeline_events_base.ConnectedEvent):
pipeline_thread.invoke_on_callback_thread_nowait(self.on_pipeline_event_handler)(event)
else:
logger.warning("incoming pipeline event with no handler. dropping.")
@pipeline_thread.runs_on_pipeline_thread
def on_connected(self):
logger.debug( logger.debug(
"{}: on_connected. on_connected_handler={}".format( "{}: ConnectedEvent received. Calling on_connected_handler".format(self.name)
self.name, self.on_connected_handler
)
) )
self.connected = True self.connected = True
if self.on_connected_handler: if self.on_connected_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_connected_handler)() pipeline_thread.invoke_on_callback_thread_nowait(self.on_connected_handler)()
@pipeline_thread.runs_on_pipeline_thread elif isinstance(event, pipeline_events_base.DisconnectedEvent):
def on_disconnected(self):
logger.debug( logger.debug(
"{}: on_disconnected. on_disconnected_handler={}".format( "{}: DisconnectedEvent received. Calling on_disconnected_handler".format(self.name)
self.name, self.on_disconnected_handler
)
) )
self.connected = False self.connected = False
if self.on_disconnected_handler: if self.on_disconnected_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_disconnected_handler)() pipeline_thread.invoke_on_callback_thread_nowait(self.on_disconnected_handler)()
else:
if self.on_pipeline_event_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_pipeline_event_handler)(
event
)
else:
logger.warning("incoming pipeline event with no handler. dropping.")
class EnsureConnectionStage(PipelineStage):
class AutoConnectStage(PipelineStage):
""" """
This stage is responsible for ensuring that the protocol is connected when This stage is responsible for ensuring that the protocol is connected when
it needs to be connected. it needs to be connected.
""" """
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
# Any operation that requires a connection can trigger a connection if # Any operation that requires a connection can trigger a connection if
# we're not connected. # we're not connected.
if op.needs_connection and not self.pipeline_root.connected: if op.needs_connection and not self.pipeline_root.connected:
@ -278,90 +294,95 @@ class EnsureConnectionStage(PipelineStage):
# Finally, if this stage doesn't need to do anything else with this operation, # Finally, if this stage doesn't need to do anything else with this operation,
# it just passes it down. # it just passes it down.
else: else:
operation_flow.pass_op_to_next_stage(self, op) self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _do_connect(self, op): def _do_connect(self, op):
""" """
Start connecting the transport in response to some operation Start connecting the transport in response to some operation
""" """
# Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_needs_complete = op
# function that gets called after we're connected. # function that gets called after we're connected.
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def on_connect_op_complete(op_connect): def on_connect_op_complete(op, error):
if op_connect.error: if error:
logger.error( logger.error(
"{}({}): Connection failed. Completing with failure because of connection failure: {}".format( "{}({}): Connection failed. Completing with failure because of connection failure: {}".format(
self.name, op.name, op_connect.error self.name, op_needs_complete.name, error
) )
) )
op.error = op_connect.error op_needs_complete.complete(error=error)
operation_flow.complete_op(stage=self, op=op)
else: else:
logger.debug( logger.debug(
"{}({}): connection is complete. Continuing with op".format(self.name, op.name) "{}({}): connection is complete. Continuing with op".format(
self.name, op_needs_complete.name
) )
operation_flow.pass_op_to_next_stage(stage=self, op=op) )
self.send_op_down(op_needs_complete)
# call down to the next stage to connect. # call down to the next stage to connect.
logger.debug("{}({}): calling down with Connect operation".format(self.name, op.name)) logger.debug("{}({}): calling down with Connect operation".format(self.name, op.name))
operation_flow.pass_op_to_next_stage( self.send_op_down(pipeline_ops_base.ConnectOperation(callback=on_connect_op_complete))
self, pipeline_ops_base.ConnectOperation(callback=on_connect_op_complete)
)
class SerializeConnectOpsStage(PipelineStage): class ConnectionLockStage(PipelineStage):
""" """
This stage is responsible for serializing connect, disconnect, and reconnect ops on This stage is responsible for serializing connect, disconnect, and reauthorize ops on
the pipeline, such that only a single one of these ops can go past this stage at a the pipeline, such that only a single one of these ops can go past this stage at a
time. This way, we don't have to worry about cases like "what happens if we try to time. This way, we don't have to worry about cases like "what happens if we try to
disconnect if we're in the middle of reconnecting." This stage will wait for the disconnect if we're in the middle of reauthorizing." This stage will wait for the
reconnect to complete before letting the disconnect past. reauthorize to complete before letting the disconnect past.
""" """
def __init__(self): def __init__(self):
super(SerializeConnectOpsStage, self).__init__() super(ConnectionLockStage, self).__init__()
self.queue = queue.Queue() self.queue = queue.Queue()
self.blocked = False self.blocked = False
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
# If this stage is currently blocked (because we're waiting for a connection, etc, # If this stage is currently blocked (because we're waiting for a connection, etc,
# to complete), we queue up all operations until after the connect completes. # to complete), we queue up all operations until after the connect completes.
if self.blocked: if self.blocked:
logger.info( logger.info(
"{}({}): pipeline is blocked waiting for a prior connect/disconnect/reconnect to complete. queueing.".format( "{}({}): pipeline is blocked waiting for a prior connect/disconnect/reauthorize to complete. queueing.".format(
self.name, op.name self.name, op.name
) )
) )
self.queue.put_nowait(op) self.queue.put_nowait(op)
elif isinstance(op, pipeline_ops_base.ConnectOperation) and self.pipeline_root.connected: elif isinstance(op, pipeline_ops_base.ConnectOperation) and self.pipeline_root.connected:
logger.info("{}({}): Transport is connected. Completing.".format(self.name, op.name)) logger.info(
operation_flow.complete_op(stage=self, op=op) "{}({}): Transport is already connected. Completing.".format(self.name, op.name)
)
op.complete()
elif ( elif (
isinstance(op, pipeline_ops_base.DisconnectOperation) isinstance(op, pipeline_ops_base.DisconnectOperation)
and not self.pipeline_root.connected and not self.pipeline_root.connected
): ):
logger.info( logger.info(
"{}({}): Transport is disconnected. Completing.".format(self.name, op.name) "{}({}): Transport is already disconnected. Completing.".format(self.name, op.name)
) )
operation_flow.complete_op(stage=self, op=op) op.complete()
elif ( elif (
isinstance(op, pipeline_ops_base.DisconnectOperation) isinstance(op, pipeline_ops_base.DisconnectOperation)
or isinstance(op, pipeline_ops_base.ConnectOperation) or isinstance(op, pipeline_ops_base.ConnectOperation)
or isinstance(op, pipeline_ops_base.ReconnectOperation) or isinstance(op, pipeline_ops_base.ReauthorizeConnectionOperation)
): ):
self._block(op) self._block(op)
old_callback = op.callback
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def on_operation_complete(op): def on_operation_complete(op, error):
if op.error: if error:
logger.error( logger.error(
"{}({}): op failed. Unblocking queue with error: {}".format( "{}({}): op failed. Unblocking queue with error: {}".format(
self.name, op.name, op.error self.name, op.name, error
) )
) )
else: else:
@ -369,25 +390,18 @@ class SerializeConnectOpsStage(PipelineStage):
"{}({}): op succeeded. Unblocking queue".format(self.name, op.name) "{}({}): op succeeded. Unblocking queue".format(self.name, op.name)
) )
op.callback = old_callback self._unblock(op, error)
self._unblock(op, op.error)
logger.debug(
"{}({}): unblock is complete. completing op that caused unblock".format(
self.name, op.name
)
)
operation_flow.complete_op(stage=self, op=op)
op.callback = on_operation_complete op.add_callback(on_operation_complete)
operation_flow.pass_op_to_next_stage(stage=self, op=op) self.send_op_down(op)
else: else:
operation_flow.pass_op_to_next_stage(stage=self, op=op) self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _block(self, op): def _block(self, op):
""" """
block this stage while we're waiting for the connect/disconnect/reconnect operation to complete. block this stage while we're waiting for the connect/disconnect/reauthorize operation to complete.
""" """
logger.debug("{}({}): blocking".format(self.name, op.name)) logger.debug("{}({}): blocking".format(self.name, op.name))
self.blocked = True self.blocked = True
@ -395,7 +409,7 @@ class SerializeConnectOpsStage(PipelineStage):
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _unblock(self, op, error): def _unblock(self, op, error):
""" """
Unblock this stage after the connect/disconnect/reconnect operation is complete. This also means Unblock this stage after the connect/disconnect/reauthorize operation is complete. This also means
releasing all the operations that were queued up. releasing all the operations that were queued up.
""" """
logger.debug("{}({}): unblocking and releasing queued ops.".format(self.name, op.name)) logger.debug("{}({}): unblocking and releasing queued ops.".format(self.name, op.name))
@ -418,21 +432,20 @@ class SerializeConnectOpsStage(PipelineStage):
self.name, op.name, op_to_release.name self.name, op.name, op_to_release.name
) )
) )
op_to_release.error = error op_to_release.complete(error=error)
operation_flow.complete_op(self, op_to_release)
else: else:
logger.debug( logger.debug(
"{}({}): releasing {} op.".format(self.name, op.name, op_to_release.name) "{}({}): releasing {} op.".format(self.name, op.name, op_to_release.name)
) )
# call run_op directly here so operations go through this stage again (especiall connect/disconnect ops) # call run_op directly here so operations go through this stage again (especially connect/disconnect ops)
self.run_op(op_to_release) self.run_op(op_to_release)
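
The blocking behavior of ConnectionLockStage can be sketched in isolation. The class below is a toy model, not SDK code: while a connect/disconnect-style operation is in flight, everything else waits in a queue and is replayed once the gate opens.

```
import queue

class SerializedGate(object):
    """Toy model of ConnectionLockStage's block/queue/unblock behavior."""

    def __init__(self, send_down):
        self.blocked = False
        self.pending = queue.Queue()
        self.send_down = send_down  # callable that forwards work downward

    def run(self, op, is_connection_op=False):
        if self.blocked:
            self.pending.put_nowait((op, is_connection_op))
        elif is_connection_op:
            self.blocked = True          # gate stays closed until unblock()
            self.send_down(op)
        else:
            self.send_down(op)

    def unblock(self):
        # Called when the in-flight connection op completes; replay the queue
        # through run() so queued connection ops re-close the gate.
        self.blocked = False
        while not self.pending.empty():
            queued_op, is_conn = self.pending.get_nowait()
            self.run(queued_op, is_conn)
```
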
class CoordinateRequestAndResponseStage(PipelineStage): class CoordinateRequestAndResponseStage(PipelineStage):
""" """
Pipeline stage which is responsible for coordinating SendIotRequestAndWaitForResponseOperation operations. For each Pipeline stage which is responsible for coordinating RequestAndResponseOperation operations. For each
SendIotRequestAndWaitForResponseOperation operation, this stage passes down a SendIotRequestOperation operation and waits for RequestAndResponseOperation operation, this stage passes down a RequestOperation operation and waits for
an IotResponseEvent event. All other events are passed down unmodified. a ResponseEvent event. All other events are passed down unmodified.
""" """
def __init__(self): def __init__(self):
@ -440,31 +453,38 @@ class CoordinateRequestAndResponseStage(PipelineStage):
self.pending_responses = {} self.pending_responses = {}
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
if isinstance(op, pipeline_ops_base.SendIotRequestAndWaitForResponseOperation): if isinstance(op, pipeline_ops_base.RequestAndResponseOperation):
# Convert SendIotRequestAndWaitForResponseOperation operation into a SendIotRequestOperation operation # Convert RequestAndResponseOperation operation into a RequestOperation operation
# and send it down. A lower level will convert the SendIotRequestOperation into an # and send it down. A lower level will convert the RequestOperation into an
# actual protocol client operation. The SendIotRequestAndWaitForResponseOperation operation will be # actual protocol client operation. The RequestAndResponseOperation operation will be
# completed when the corresponding IotResponse event is received in this stage. # completed when the corresponding IotResponse event is received in this stage.
request_id = str(uuid.uuid4()) request_id = str(uuid.uuid4())
# Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_waiting_for_response = op
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def on_send_request_done(send_request_op): def on_send_request_done(op, error):
logger.debug( logger.debug(
"{}({}): Finished sending {} request to {} resource {}".format( "{}({}): Finished sending {} request to {} resource {}".format(
self.name, op.name, op.request_type, op.method, op.resource_location self.name,
op_waiting_for_response.name,
op_waiting_for_response.request_type,
op_waiting_for_response.method,
op_waiting_for_response.resource_location,
) )
) )
if send_request_op.error: if error:
op.error = send_request_op.error
logger.debug( logger.debug(
"{}({}): removing request {} from pending list".format( "{}({}): removing request {} from pending list".format(
self.name, op.name, request_id self.name, op_waiting_for_response.name, request_id
) )
) )
del (self.pending_responses[request_id]) del (self.pending_responses[request_id])
operation_flow.complete_op(self, op) op_waiting_for_response.complete(error=error)
else: else:
# request sent. Nothing to do except wait for the response # request sent. Nothing to do except wait for the response
pass pass
@ -480,23 +500,24 @@ class CoordinateRequestAndResponseStage(PipelineStage):
) )
self.pending_responses[request_id] = op self.pending_responses[request_id] = op
new_op = pipeline_ops_base.SendIotRequestOperation( new_op = pipeline_ops_base.RequestOperation(
method=op.method, method=op.method,
resource_location=op.resource_location, resource_location=op.resource_location,
request_body=op.request_body, request_body=op.request_body,
request_id=request_id, request_id=request_id,
request_type=op.request_type, request_type=op.request_type,
callback=on_send_request_done, callback=on_send_request_done,
query_params=op.query_params,
) )
operation_flow.pass_op_to_next_stage(self, new_op) self.send_op_down(new_op)
else: else:
operation_flow.pass_op_to_next_stage(self, op) self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event): def _handle_pipeline_event(self, event):
if isinstance(event, pipeline_events_base.IotResponseEvent): if isinstance(event, pipeline_events_base.ResponseEvent):
# match IotResponseEvent events to the saved dictionary of SendIotRequestAndWaitForResponseOperation # match ResponseEvent events to the saved dictionary of RequestAndResponseOperation
# operations which have not received responses yet. If the operation is found, # operations which have not received responses yet. If the operation is found,
# complete it. # complete it.
@ -510,6 +531,7 @@ class CoordinateRequestAndResponseStage(PipelineStage):
del (self.pending_responses[event.request_id]) del (self.pending_responses[event.request_id])
op.status_code = event.status_code op.status_code = event.status_code
op.response_body = event.response_body op.response_body = event.response_body
op.retry_after = event.retry_after
logger.debug( logger.debug(
"{}({}): Completing {} request to {} resource {} with status {}".format( "{}({}): Completing {} request to {} resource {} with status {}".format(
self.name, self.name,
@ -520,7 +542,7 @@ class CoordinateRequestAndResponseStage(PipelineStage):
op.status_code, op.status_code,
) )
) )
operation_flow.complete_op(self, op) op.complete()
else: else:
logger.warning( logger.warning(
"{}({}): request_id {} not found in pending list. Nothing to do. Dropping".format( "{}({}): request_id {} not found in pending list. Nothing to do. Dropping".format(
@ -528,4 +550,399 @@ class CoordinateRequestAndResponseStage(PipelineStage):
) )
) )
else: else:
operation_flow.pass_event_to_previous_stage(self, event) self.send_event_up(event)
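
The request/response coordination above reduces to a dictionary of pending operations keyed by a generated request id. Below is a standalone sketch of that bookkeeping, independent of the pipeline types.

```
import uuid

class ResponseMatcher(object):
    """Toy model of CoordinateRequestAndResponseStage's pending_responses dict."""

    def __init__(self, transmit):
        self.pending = {}
        self.transmit = transmit  # callable that actually sends the request

    def send_request(self, request, on_response):
        request_id = str(uuid.uuid4())
        self.pending[request_id] = on_response
        self.transmit(request_id, request)
        return request_id

    def handle_response(self, request_id, status_code, body):
        on_response = self.pending.pop(request_id, None)
        if on_response:
            on_response(status_code, body)
        # else: unknown or stale request_id -- dropped, as in the stage above
```
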
class OpTimeoutStage(PipelineStage):
"""
The purpose of the timeout stage is to add timeout errors to select operations
The timeout_intervals attribute contains a list of operations to track along with
their timeout values. Right now this list is hard-coded but the operations and
intervals will eventually become a parameter.
For each operation that needs a timeout check, this stage will add a timer to
the operation. If the timer elapses, this stage will fail the operation with
a PipelineTimeoutError. The intention is that a higher stage will know what to
do with that error and act accordingly (either return the error to the user or
retry).
This stage currently assumes that all timed-out operations are just "lost".
It does not attempt to cancel the operation, as Paho doesn't have a way to
cancel an operation, and with QOS=1, sending a pub or sub twice is not
catastrophic.
Also, as a long-term plan, the operations that need to be watched for timeout
will become an initialization parameter for this stage so that different
instances of this stage can watch for timeouts on different operations.
This will be done because we want a lower-level timeout stage which can watch
for timeouts at the MQTT level, and we want a higher-level timeout stage which
can watch for timeouts at the iothub level. In this way, an MQTT operation that
times out can be retried as an MQTT operation and a higher-level IoTHub operation
which times out can be retried as an IoTHub operation (which might necessitate
redoing multiple MQTT operations).
"""
def __init__(self):
super(OpTimeoutStage, self).__init__()
# use a fixed list and fixed intervals for now. Later, this info will come in
# as an init param or a retry policy
self.timeout_intervals = {
pipeline_ops_mqtt.MQTTSubscribeOperation: 10,
pipeline_ops_mqtt.MQTTUnsubscribeOperation: 10,
}
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if type(op) in self.timeout_intervals:
# Create a timer to watch for operation timeout on this op and attach it
# to the op.
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_timeout():
this = self_weakref()
logger.info("{}({}): returning timeout error".format(this.name, op.name))
op.complete(
error=pipeline_exceptions.PipelineTimeoutError(
"operation timed out before protocol client could respond"
)
)
logger.debug("{}({}): Creating timer".format(self.name, op.name))
op.timeout_timer = threading.Timer(self.timeout_intervals[type(op)], on_timeout)
op.timeout_timer.start()
# Send the op down, but intercept the return of the op so we can
# remove the timer when the op is done
op.add_callback(self._clear_timer)
logger.debug("{}({}): Sending down".format(self.name, op.name))
self.send_op_down(op)
else:
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _clear_timer(self, op, error):
# When an op comes back, delete the timer and pass it right up.
if op.timeout_timer:
logger.debug("{}({}): Cancelling timer".format(self.name, op.name))
op.timeout_timer.cancel()
op.timeout_timer = None
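
The timer mechanics used here can be shown in a standalone sketch (toy code, not the SDK implementation): arm a threading.Timer when the work starts, fail the operation if the timer fires, and cancel the timer when the completion callback runs first.

```
import threading

def run_with_timeout(start_work, on_complete, timeout=10):
    """Toy model of OpTimeoutStage: whichever of {timer, completion} happens
    first wins; the loser is suppressed."""
    done = {"flag": False}

    def on_timeout():
        if not done["flag"]:
            done["flag"] = True
            on_complete(error=Exception("operation timed out"))

    timer = threading.Timer(timeout, on_timeout)
    timer.start()

    def complete(error=None):
        if not done["flag"]:
            done["flag"] = True
            timer.cancel()
            on_complete(error=error)

    start_work(complete)   # the work calls complete() when it finishes
```
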
class RetryStage(PipelineStage):
"""
The purpose of the retry stage is to watch specific operations for specific
errors and retry the operations as appropriate.
Unlike the OpTimeoutStage, this stage will never need to worry about cancelling
failed operations. When an operation is retried at this stage, it is already
considered "failed", so no cancellation needs to be done.
"""
def __init__(self):
super(RetryStage, self).__init__()
# Retry intervals are hardcoded for now. Later, they come in as an
# init param, probably via retry policy.
self.retry_intervals = {
pipeline_ops_mqtt.MQTTSubscribeOperation: 20,
pipeline_ops_mqtt.MQTTUnsubscribeOperation: 20,
pipeline_ops_mqtt.MQTTPublishOperation: 20,
}
self.ops_waiting_to_retry = []
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
"""
Send all ops down and intercept their return to "watch for retry"
"""
if self._should_watch_for_retry(op):
op.add_callback(self._do_retry_if_necessary)
self.send_op_down(op)
else:
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _should_watch_for_retry(self, op):
"""
Return True if this op needs to be watched for retry. This can be
called before the op runs.
"""
return type(op) in self.retry_intervals
@pipeline_thread.runs_on_pipeline_thread
def _should_retry(self, op, error):
"""
Return True if this op needs to be retried. This must be called after
the op completes.
"""
if error:
if self._should_watch_for_retry(op):
if isinstance(error, pipeline_exceptions.PipelineTimeoutError):
return True
return False
@pipeline_thread.runs_on_pipeline_thread
def _do_retry_if_necessary(self, op, error):
"""
Handler which gets called when operations are complete. This function
is where we check to see if a retry is necessary and set a "retry timer"
which can be used to send the op down again.
"""
if self._should_retry(op, error):
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_retry():
this = self_weakref()
logger.info("{}({}): retrying".format(this.name, op.name))
op.retry_timer.cancel()
op.retry_timer = None
this.ops_waiting_to_retry.remove(op)
# Don't just send it down directly. Instead, go through run_op so we get
# retry functionality this time too
this.run_op(op)
interval = self.retry_intervals[type(op)]
logger.warning(
"{}({}): Op needs retry with interval {} because of {}. Setting timer.".format(
self.name, op.name, interval, error
)
)
# if we don't keep track of this op, it might get collected.
op.halt_completion()
self.ops_waiting_to_retry.append(op)
op.retry_timer = threading.Timer(self.retry_intervals[type(op)], do_retry)
op.retry_timer.start()
else:
if op.retry_timer:
op.retry_timer.cancel()
op.retry_timer = None
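
The retry decision can also be sketched on its own. This toy helper (not SDK code) re-runs a failed piece of work after a fixed delay when the failure looks transient, mirroring the timeout-driven retries above.

```
import threading

def run_with_retry(start_work, retry_interval=20,
                   is_transient=lambda e: isinstance(e, TimeoutError)):
    """Toy model of RetryStage: on a transient failure, wait and run again."""

    def on_complete(error=None):
        if error and is_transient(error):
            print("transient failure ({}); retrying in {}s".format(error, retry_interval))
            threading.Timer(retry_interval, lambda: start_work(on_complete)).start()
        elif error:
            print("permanent failure: {}".format(error))
        else:
            print("succeeded")

    start_work(on_complete)
```
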
transient_connect_errors = [
pipeline_exceptions.OperationCancelled,
pipeline_exceptions.PipelineTimeoutError,
pipeline_exceptions.OperationError,
transport_exceptions.ConnectionFailedError,
transport_exceptions.ConnectionDroppedError,
]
class ReconnectState(object):
"""
Class which holds reconnect states as class variables. Created to make code that reads like an enum without using an enum.
NEVER_CONNECTED: Transport has never been connected. This state is necessary because some errors might be fatal or transient,
depending on whether the transport has been connected. For example, a failed connection is a transient error if we've connected
before, but it's fatal if we've never connected.
WAITING_TO_RECONNECT: This stage is in a waiting period before reconnecting.
CONNECTED_OR_DISCONNECTED: The transport is either connected or disconnected. This stage doesn't really care which one, so
it doesn't keep track.
"""
NEVER_CONNECTED = "NEVER_CONNECTED"
WAITING_TO_RECONNECT = "WAITING_TO_RECONNECT"
CONNECTED_OR_DISCONNECTED = "CONNECTED_OR_DISCONNECTED"
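
Because the states are plain string class attributes rather than enum members, usage is just attribute access and equality comparison, and the values read naturally in log lines, e.g.:

```
# Illustrative usage of the string-constant "enum" above.
state = ReconnectState.NEVER_CONNECTED
if state == ReconnectState.NEVER_CONNECTED:
    # a connection failure in this state is treated as fatal, per the docstring
    pass
state = ReconnectState.WAITING_TO_RECONNECT
print("reconnect state is now {}".format(state))   # prints the readable string
```
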
class ReconnectStage(PipelineStage):
def __init__(self):
super(ReconnectStage, self).__init__()
self.reconnect_timer = None
self.state = ReconnectState.NEVER_CONNECTED
# connect delay is hardcoded for now. Later, this comes from a retry policy
self.reconnect_delay = 10
self.waiting_connect_ops = []
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_base.ConnectOperation):
if self.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}({}): State is {}. Adding to wait list".format(
self.name, op.name, self.state
)
)
self.waiting_connect_ops.append(op)
else:
logger.info(
"{}({}): State is {}. Adding to wait list and sending new connect op down".format(
self.name, op.name, self.state
)
)
self.waiting_connect_ops.append(op)
self._send_new_connect_op_down()
elif isinstance(op, pipeline_ops_base.DisconnectOperation):
if self.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}({}): State is {}. Canceling waiting ops and sending disconnect down.".format(
self.name, op.name, self.state
)
)
self._clear_reconnect_timer()
self._complete_waiting_connect_ops(
pipeline_exceptions.OperationCancelled("Explicit disconnect invoked")
)
self.state = ReconnectState.CONNECTED_OR_DISCONNECTED
op.complete()
else:
logger.info(
"{}({}): State is {}. Sending op down.".format(self.name, op.name, self.state)
)
self.send_op_down(op)
else:
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
if isinstance(event, pipeline_events_base.DisconnectedEvent):
if self.pipeline_root.connected:
logger.info(
"{}({}): State is {}. Triggering reconnect timer".format(
self.name, event.name, self.state
)
)
self.state = ReconnectState.WAITING_TO_RECONNECT
self._start_reconnect_timer()
else:
logger.info(
"{}({}): State is {}. Doing nothing".format(self.name, event.name, self.state)
)
self.send_event_up(event)
else:
self.send_event_up(event)
@pipeline_thread.runs_on_pipeline_thread
def _send_new_connect_op_down(self):
self_weakref = weakref.ref(self)
@pipeline_thread.runs_on_pipeline_thread
def on_connect_complete(op, error):
this = self_weakref()
if this:
if error:
if this.state == ReconnectState.NEVER_CONNECTED:
logger.info(
"{}({}): error on first connection. Not triggering reconnection".format(
this.name, op.name
)
)
this._complete_waiting_connect_ops(error)
elif type(error) in transient_connect_errors:
logger.info(
"{}({}): State is {}. Connect failed with transient error. Triggering reconnect timer".format(
self.name, op.name, self.state
)
)
self.state = ReconnectState.WAITING_TO_RECONNECT
self._start_reconnect_timer()
elif this.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}({}): non-tranient error. Failing all waiting ops.n".format(
this.name, op.name
)
)
self.state = ReconnectState.CONNECTED_OR_DISCONNECTED
self._clear_reconnect_timer()
this._complete_waiting_connect_ops(error)
else:
logger.info(
"{}({}): State is {}. Connection failed. Not triggering reconnection".format(
this.name, op.name, this.state
)
)
this._complete_waiting_connect_ops(error)
else:
logger.info(
"{}({}): State is {}. Connection succeeded".format(
this.name, op.name, this.state
)
)
self.state = ReconnectState.CONNECTED_OR_DISCONNECTED
self._clear_reconnect_timer()
self._complete_waiting_connect_ops()
logger.info("{}: sending new connect op down".format(self.name))
op = pipeline_ops_base.ConnectOperation(callback=on_connect_complete)
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _start_reconnect_timer(self):
"""
Set a timer to reconnect after some period of time
"""
logger.info("{}: State is {}. Starting reconnect timer".format(self.name, self.state))
self._clear_reconnect_timer()
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_reconnect_timer_expired():
this = self_weakref()
this.reconnect_timer = None
if this.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}: State is {}. Reconnect timer expired. Sending connect op down".format(
this.name, this.state
)
)
this.state = ReconnectState.CONNECTED_OR_DISCONNECTED
this._send_new_connect_op_down()
else:
logger.info(
"{}: State is {}. Reconnect timer expired. Doing nothing".format(
this.name, this.state
)
)
self.reconnect_timer = threading.Timer(self.reconnect_delay, on_reconnect_timer_expired)
self.reconnect_timer.start()
@pipeline_thread.runs_on_pipeline_thread
def _clear_reconnect_timer(self):
"""
Clear any previous reconnect timer
"""
if self.reconnect_timer:
logger.info("{}: clearing reconnect timer".format(self.name))
self.reconnect_timer.cancel()
self.reconnect_timer = None
@pipeline_thread.runs_on_pipeline_thread
def _complete_waiting_connect_ops(self, error=None):
"""
A note of explanation: when we are waiting to reconnect, we need to keep a list of
all connect ops that come through here. We do this for 2 reasons:
1. We don't want to pass them down immediately because we want to honor the waiting
period. If we passed them down immediately, we'd try to reconnect immediately
instead of waiting until reconnect_timer fires.
2. When we're retrying, there are new ConnectOperation ops sent down regularly.
Any of the ops could be the one that succeeds. When that happens, we need a
way to complete all of the ops that are patiently waiting for the connection.
Right now, we only need to do this with ConnectOperation ops because these are the
only ops that need to wait because these are the only ops that cause a connection
to be established. Other ops pass through this stage, and might fail in later
stages, but that's OK. If they needed a connection, the AutoConnectStage before
this stage should be taking care of that.
"""
logger.info("{}: completing waiting ops with error={}".format(self.name, error))
list_copy = self.waiting_connect_ops
self.waiting_connect_ops = []
for op in list_copy:
op.complete(error)
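
The waiting-list behavior described in that comment can be modeled in a few lines (toy code, not the SDK): connect requests that arrive during the backoff window are parked, and all of them are completed together with the outcome of whichever connect attempt finishes.

```
class WaitingConnects(object):
    """Toy model of ReconnectStage.waiting_connect_ops."""

    def __init__(self, start_connect):
        self.waiting = []
        self.start_connect = start_connect  # callable that begins a real connect

    def request_connect(self, callback, waiting_to_reconnect):
        self.waiting.append(callback)
        if not waiting_to_reconnect:
            # only start a new attempt if we're not already in a backoff window
            self.start_connect(self.connect_finished)

    def connect_finished(self, error=None):
        # complete every parked request with the shared outcome
        waiting, self.waiting = self.waiting, []
        for callback in waiting:
            callback(error)
```
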

@ -0,0 +1,102 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import six
import traceback
import copy
from . import (
pipeline_ops_base,
PipelineStage,
pipeline_ops_http,
pipeline_thread,
pipeline_exceptions,
)
from azure.iot.device.common.http_transport import HTTPTransport
from azure.iot.device.common import handle_exceptions, transport_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
logger = logging.getLogger(__name__)
class HTTPTransportStage(PipelineStage):
"""
PipelineStage object which is responsible for interfacing with the HTTP protocol wrapper object.
This stage handles all HTTP operations that are not specific to IoT Hub.
"""
def __init__(self):
super(HTTPTransportStage, self).__init__()
# The sas_token will be set when Connection Args are received
self.sas_token = None
# The transport will be instantiated when Connection Args are received
self.transport = None
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_http.SetHTTPConnectionArgsOperation):
# pipeline_ops_http.SetHTTPConnectionArgsOperation is used to create the HTTPTransport object and set all of its properties.
logger.debug("{}({}): got connection args".format(self.name, op.name))
self.sas_token = op.sas_token
self.transport = HTTPTransport(
hostname=op.hostname,
server_verification_cert=op.server_verification_cert,
x509_cert=op.client_cert,
)
self.pipeline_root.transport = self.transport
op.complete()
elif isinstance(op, pipeline_ops_base.UpdateSasTokenOperation):
logger.debug("{}({}): saving sas token and completing".format(self.name, op.name))
self.sas_token = op.sas_token
op.complete()
elif isinstance(op, pipeline_ops_http.HTTPRequestAndResponseOperation):
# This will call down to the HTTP Transport with a request and also create a request callback. Because the HTTP Transport will run on the http transport thread, this call should be non-blocking to the pipeline thread.
logger.debug(
"{}({}): Generating HTTP request and setting callback before completing.".format(
self.name, op.name
)
)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_request_completed(error=None, response=None):
if error:
logger.error(
"{}({}): Error passed to on_request_completed. Error={}".format(
self.name, op.name, error
)
)
op.complete(error=error)
else:
logger.debug(
"{}({}): Request completed. Completing op.".format(self.name, op.name)
)
logger.debug("HTTP Response Status: {}".format(response["status_code"]))
logger.debug("HTTP Response: {}".format(response["resp"].decode("utf-8")))
op.response_body = response["resp"]
op.status_code = response["status_code"]
op.reason = response["reason"]
op.complete()
# A deepcopy is necessary here since otherwise the manipulation happening to http_headers will affect the op.headers, which would be an unintended side effect and not a good practice.
http_headers = copy.deepcopy(op.headers)
if self.sas_token:
http_headers["Authorization"] = self.sas_token
self.transport.request(
method=op.method,
path=op.path,
headers=http_headers,
query_params=op.query_params,
body=op.body,
callback=on_request_completed,
)
else:
self.send_op_down(op)
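
One detail worth calling out from the request path above: the stage deep-copies op.headers before injecting the SAS token, so the caller's operation is never mutated as a side effect. A condensed sketch of just that step, with placeholder values:

```
import copy

def build_request_headers(op_headers, sas_token=None):
    # Deep copy so the original op.headers is never changed as a side effect,
    # mirroring HTTPTransportStage._run_op above.
    headers = copy.deepcopy(op_headers)
    if sas_token:
        headers["Authorization"] = sas_token
    return headers

# hypothetical usage with placeholder values
headers = build_request_headers(
    {"Content-Type": "application/json"}, sas_token="SharedAccessSignature sr=..."
)
```
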

@ -6,16 +6,19 @@
import logging import logging
import six import six
import traceback
from . import ( from . import (
pipeline_ops_base, pipeline_ops_base,
PipelineStage, PipelineStage,
pipeline_ops_mqtt, pipeline_ops_mqtt,
pipeline_events_mqtt, pipeline_events_mqtt,
operation_flow,
pipeline_thread, pipeline_thread,
pipeline_exceptions,
pipeline_events_base,
) )
from azure.iot.device.common.mqtt_transport import MQTTTransport from azure.iot.device.common.mqtt_transport import MQTTTransport
from azure.iot.device.common import unhandled_exceptions, errors from azure.iot.device.common import handle_exceptions, transport_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -27,66 +30,83 @@ class MQTTTransportStage(PipelineStage):
is not in the MQTT group of operations, but can only be run at the protocol level. is not in the MQTT group of operations, but can only be run at the protocol level.
""" """
def __init__(self):
super(MQTTTransportStage, self).__init__()
# The sas_token will be set when Connection Args are received
self.sas_token = None
# The transport will be instantiated when Connection Args are received
self.transport = None
self._pending_connection_op = None
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _cancel_pending_connection_op(self): def _cancel_pending_connection_op(self):
""" """
Cancel any running connect, disconnect or reconnect op. Since our ability to "cancel" is fairly limited, Cancel any running connect, disconnect or reauthorize_connection op. Since our ability to "cancel" is fairly limited,
all this does (for now) is to fail the operation all this does (for now) is to fail the operation
""" """
op = self._pending_connection_op op = self._pending_connection_op
if op: if op:
# TODO: should this actually run a cancel call on the op? # NOTE: This code path should NOT execute in normal flow. There should never already be a pending
op.error = errors.PipelineError( # connection op when another is added, due to the SerializeConnectOps stage.
"Cancelling because new ConnectOperation, DisconnectOperation, or ReconnectOperation was issued" # If this block does execute, there is a bug in the codebase.
) error = pipeline_exceptions.OperationCancelled(
operation_flow.complete_op(stage=self, op=op) "Cancelling because new ConnectOperation, DisconnectOperation, or ReauthorizeConnectionOperation was issued"
) # TODO: should this actually somehow cancel the operation?
op.complete(error=error)
self._pending_connection_op = None self._pending_connection_op = None
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
if isinstance(op, pipeline_ops_mqtt.SetMQTTConnectionArgsOperation): if isinstance(op, pipeline_ops_mqtt.SetMQTTConnectionArgsOperation):
# pipeline_ops_mqtt.SetMQTTConnectionArgsOperation is where we create our MQTTTransport object and set # pipeline_ops_mqtt.SetMQTTConnectionArgsOperation is where we create our MQTTTransport object and set
# all of its properties. # all of its properties.
logger.debug("{}({}): got connection args".format(self.name, op.name)) logger.debug("{}({}): got connection args".format(self.name, op.name))
self.hostname = op.hostname
self.username = op.username
self.client_id = op.client_id
self.ca_cert = op.ca_cert
self.sas_token = op.sas_token self.sas_token = op.sas_token
self.client_cert = op.client_cert
self.transport = MQTTTransport( self.transport = MQTTTransport(
client_id=self.client_id, client_id=op.client_id,
hostname=self.hostname, hostname=op.hostname,
username=self.username, username=op.username,
ca_cert=self.ca_cert, server_verification_cert=op.server_verification_cert,
x509_cert=self.client_cert, x509_cert=op.client_cert,
websockets=self.pipeline_root.pipeline_configuration.websockets,
cipher=self.pipeline_root.pipeline_configuration.cipher,
proxy_options=self.pipeline_root.pipeline_configuration.proxy_options,
)
self.transport.on_mqtt_connected_handler = CallableWeakMethod(
self, "_on_mqtt_connected"
)
self.transport.on_mqtt_connection_failure_handler = CallableWeakMethod(
self, "_on_mqtt_connection_failure"
)
self.transport.on_mqtt_disconnected_handler = CallableWeakMethod(
self, "_on_mqtt_disconnected"
)
self.transport.on_mqtt_message_received_handler = CallableWeakMethod(
self, "_on_mqtt_message_received"
) )
self.transport.on_mqtt_connected_handler = self._on_mqtt_connected
self.transport.on_mqtt_connection_failure_handler = self._on_mqtt_connection_failure
self.transport.on_mqtt_disconnected_handler = self._on_mqtt_disconnected
self.transport.on_mqtt_message_received_handler = self._on_mqtt_message_received
# There can only be one pending connection operation (Connect, Reconnect, Disconnect) # There can only be one pending connection operation (Connect, ReauthorizeConnection, Disconnect)
# at a time. The existing one must be completed or canceled before a new one is set. # at a time. The existing one must be completed or canceled before a new one is set.
# Currently, this means that if, say, a connect operation is the pending op and is executed # Currently, this means that if, say, a connect operation is the pending op and is executed
# but another connection op begins by the time the CONACK is received, the original # but another connection op begins by the time the CONNACK is received, the original
# operation will be cancelled, but the CONACK for it will still be received, and complete the # operation will be cancelled, but the CONNACK for it will still be received, and complete the
# NEW operation. This is not desirable, but it is how things currently work. # NEW operation. This is not desirable, but it is how things currently work.
# We are however, checking the type, so the CONACK from a cancelled Connect, cannot successfully # We are however, checking the type, so the CONNACK from a cancelled Connect, cannot successfully
# complete a Disconnect operation. # complete a Disconnect operation.
self._pending_connection_op = None self._pending_connection_op = None
self.pipeline_root.transport = self.transport op.complete()
operation_flow.complete_op(self, op)
elif isinstance(op, pipeline_ops_base.UpdateSasTokenOperation): elif isinstance(op, pipeline_ops_base.UpdateSasTokenOperation):
logger.debug("{}({}): saving sas token and completing".format(self.name, op.name)) logger.debug("{}({}): saving sas token and completing".format(self.name, op.name))
self.sas_token = op.sas_token self.sas_token = op.sas_token
operation_flow.complete_op(self, op) op.complete()
elif isinstance(op, pipeline_ops_base.ConnectOperation): elif isinstance(op, pipeline_ops_base.ConnectOperation):
logger.info("{}({}): connecting".format(self.name, op.name)) logger.info("{}({}): connecting".format(self.name, op.name))
@ -96,24 +116,24 @@ class MQTTTransportStage(PipelineStage):
try: try:
self.transport.connect(password=self.sas_token) self.transport.connect(password=self.sas_token)
except Exception as e: except Exception as e:
logger.error("transport.connect raised error", exc_info=True) logger.error("transport.connect raised error")
logger.error(traceback.format_exc())
self._pending_connection_op = None self._pending_connection_op = None
op.error = e op.complete(error=e)
operation_flow.complete_op(self, op)
elif isinstance(op, pipeline_ops_base.ReconnectOperation): elif isinstance(op, pipeline_ops_base.ReauthorizeConnectionOperation):
logger.info("{}({}): reconnecting".format(self.name, op.name)) logger.info("{}({}): reauthorizing".format(self.name, op.name))
# We set _active_connect_op here because a reconnect is the same as a connect for "active operation" tracking purposes. # We set _active_connect_op here because reauthorizing the connection is the same as a connect for "active operation" tracking purposes.
self._cancel_pending_connection_op() self._cancel_pending_connection_op()
self._pending_connection_op = op self._pending_connection_op = op
try: try:
self.transport.reconnect(password=self.sas_token) self.transport.reauthorize_connection(password=self.sas_token)
except Exception as e: except Exception as e:
logger.error("transport.reconnect raised error", exc_info=True) logger.error("transport.reauthorize_connection raised error")
logger.error(traceback.format_exc())
self._pending_connection_op = None self._pending_connection_op = None
op.error = e op.complete(error=e)
operation_flow.complete_op(self, op)
elif isinstance(op, pipeline_ops_base.DisconnectOperation): elif isinstance(op, pipeline_ops_base.DisconnectOperation):
logger.info("{}({}): disconnecting".format(self.name, op.name)) logger.info("{}({}): disconnecting".format(self.name, op.name))
@ -123,10 +143,10 @@ class MQTTTransportStage(PipelineStage):
try: try:
self.transport.disconnect() self.transport.disconnect()
except Exception as e: except Exception as e:
logger.error("transport.disconnect raised error", exc_info=True) logger.error("transport.disconnect raised error")
logger.error(traceback.format_exc())
self._pending_connection_op = None self._pending_connection_op = None
op.error = e op.complete(error=e)
operation_flow.complete_op(self, op)
elif isinstance(op, pipeline_ops_mqtt.MQTTPublishOperation): elif isinstance(op, pipeline_ops_mqtt.MQTTPublishOperation):
logger.info("{}({}): publishing on {}".format(self.name, op.name, op.topic)) logger.info("{}({}): publishing on {}".format(self.name, op.name, op.topic))
@ -134,7 +154,7 @@ class MQTTTransportStage(PipelineStage):
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
def on_published(): def on_published():
logger.debug("{}({}): PUBACK received. completing op.".format(self.name, op.name)) logger.debug("{}({}): PUBACK received. completing op.".format(self.name, op.name))
operation_flow.complete_op(self, op) op.complete()
self.transport.publish(topic=op.topic, payload=op.payload, callback=on_published) self.transport.publish(topic=op.topic, payload=op.payload, callback=on_published)
@ -144,7 +164,7 @@ class MQTTTransportStage(PipelineStage):
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
def on_subscribed(): def on_subscribed():
logger.debug("{}({}): SUBACK received. completing op.".format(self.name, op.name)) logger.debug("{}({}): SUBACK received. completing op.".format(self.name, op.name))
operation_flow.complete_op(self, op) op.complete()
self.transport.subscribe(topic=op.topic, callback=on_subscribed) self.transport.subscribe(topic=op.topic, callback=on_subscribed)
@ -156,12 +176,14 @@ class MQTTTransportStage(PipelineStage):
logger.debug( logger.debug(
"{}({}): UNSUBACK received. completing op.".format(self.name, op.name) "{}({}): UNSUBACK received. completing op.".format(self.name, op.name)
) )
operation_flow.complete_op(self, op) op.complete()
self.transport.unsubscribe(topic=op.topic, callback=on_unsubscribed) self.transport.unsubscribe(topic=op.topic, callback=on_unsubscribed)
else: else:
operation_flow.pass_op_to_next_stage(self, op) # This code block should not be reached in correct program flow.
# This will raise an error when executed.
self.send_op_down(op)
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
def _on_mqtt_message_received(self, topic, payload): def _on_mqtt_message_received(self, topic, payload):
@ -169,9 +191,8 @@ class MQTTTransportStage(PipelineStage):
Handler that gets called by the protocol library when an incoming message arrives. Handler that gets called by the protocol library when an incoming message arrives.
Convert that message into a pipeline event and pass it up for someone to handle. Convert that message into a pipeline event and pass it up for someone to handle.
""" """
operation_flow.pass_event_to_previous_stage( self.send_event_up(
stage=self, pipeline_events_mqtt.IncomingMQTTMessageEvent(topic=topic, payload=payload)
event=pipeline_events_mqtt.IncomingMQTTMessageEvent(topic=topic, payload=payload),
) )
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
@ -180,22 +201,24 @@ class MQTTTransportStage(PipelineStage):
Handler that gets called by the transport when it connects. Handler that gets called by the transport when it connects.
""" """
logger.info("_on_mqtt_connected called") logger.info("_on_mqtt_connected called")
# self.on_connected() tells other pipeline stages that we're connected. Do this before # Send an event to tell other pipeline stages that we're connected. Do this before
# we do anything else (in case upper stages have any "are we connected" logic. # we do anything else (in case upper stages have any "are we connected" logic.
self.on_connected() self.send_event_up(pipeline_events_base.ConnectedEvent())
if isinstance( if isinstance(
self._pending_connection_op, pipeline_ops_base.ConnectOperation self._pending_connection_op, pipeline_ops_base.ConnectOperation
) or isinstance(self._pending_connection_op, pipeline_ops_base.ReconnectOperation): ) or isinstance(
self._pending_connection_op, pipeline_ops_base.ReauthorizeConnectionOperation
):
logger.debug("completing connect op") logger.debug("completing connect op")
op = self._pending_connection_op op = self._pending_connection_op
self._pending_connection_op = None self._pending_connection_op = None
operation_flow.complete_op(stage=self, op=op) op.complete()
else: else:
# This should indicate something odd is going on. # This should indicate something odd is going on.
# If this occurs, either a connect was completed while there was no pending op, # If this occurs, either a connect was completed while there was no pending op,
# OR that a connect was completed while a disconnect op was pending # OR that a connect was completed while a disconnect op was pending
logger.warning("Connection was unexpected") logger.info("Connection was unexpected")
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
def _on_mqtt_connection_failure(self, cause): def _on_mqtt_connection_failure(self, cause):
@ -205,19 +228,22 @@ class MQTTTransportStage(PipelineStage):
:param Exception cause: The Exception that caused the connection failure. :param Exception cause: The Exception that caused the connection failure.
""" """
logger.error("{}: _on_mqtt_connection_failure called: {}".format(self.name, cause)) logger.info("{}: _on_mqtt_connection_failure called: {}".format(self.name, cause))
if isinstance( if isinstance(
self._pending_connection_op, pipeline_ops_base.ConnectOperation self._pending_connection_op, pipeline_ops_base.ConnectOperation
) or isinstance(self._pending_connection_op, pipeline_ops_base.ReconnectOperation): ) or isinstance(
self._pending_connection_op, pipeline_ops_base.ReauthorizeConnectionOperation
):
logger.debug("{}: failing connect op".format(self.name)) logger.debug("{}: failing connect op".format(self.name))
op = self._pending_connection_op op = self._pending_connection_op
self._pending_connection_op = None self._pending_connection_op = None
op.error = cause op.complete(error=cause)
operation_flow.complete_op(stage=self, op=op)
else: else:
logger.warning("{}: Connection failure was unexpected".format(self.name)) logger.info("{}: Connection failure was unexpected".format(self.name))
unhandled_exceptions.exception_caught_in_background_thread(cause) handle_exceptions.swallow_unraised_exception(
cause, log_msg="Unexpected connection failure. Safe to ignore.", log_lvl="info"
)
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
def _on_mqtt_disconnected(self, cause=None): def _on_mqtt_disconnected(self, cause=None):
@ -227,31 +253,47 @@ class MQTTTransportStage(PipelineStage):
:param Exception cause: The Exception that caused the disconnection, if any (optional) :param Exception cause: The Exception that caused the disconnection, if any (optional)
""" """
if cause: if cause:
logger.error("{}: _on_mqtt_disconnect called: {}".format(self.name, cause)) logger.info("{}: _on_mqtt_disconnect called: {}".format(self.name, cause))
else: else:
logger.info("{}: _on_mqtt_disconnect called".format(self.name)) logger.info("{}: _on_mqtt_disconnect called".format(self.name))
# self.on_disconnected() tells other pipeline stages that we're disconnected. Do this before # Send an event to tell other pipeline stages that we're disconnected. Do this before
# we do anything else (in case upper stages have any "are we connected" logic. # we do anything else (in case upper stages have any "are we connected" logic.)
self.on_disconnected() self.send_event_up(pipeline_events_base.DisconnectedEvent())
if isinstance(self._pending_connection_op, pipeline_ops_base.DisconnectOperation): if self._pending_connection_op:
logger.debug("{}: completing disconnect op".format(self.name)) # on_mqtt_disconnected will cause any pending connect op to complete. This is how Paho
# behaves when there is a connection error, and it also makes sense that on_mqtt_disconnected
# would cause a pending connection op to fail.
logger.debug(
"{}: completing pending {} op".format(self.name, self._pending_connection_op.name)
)
op = self._pending_connection_op op = self._pending_connection_op
self._pending_connection_op = None self._pending_connection_op = None
if isinstance(op, pipeline_ops_base.DisconnectOperation):
# Swallow any errors if we intended to disconnect - even if something went wrong, we
# got to the state we wanted to be in!
if cause: if cause:
# Only create a ConnnectionDroppedError if there is a cause, handle_exceptions.swallow_unraised_exception(
# i.e. unexpected disconnect. cause,
try: log_msg="Unexpected disconnect with error while disconnecting - swallowing error",
six.raise_from(errors.ConnectionDroppedError, cause) )
except errors.ConnectionDroppedError as e: op.complete()
op.error = e
operation_flow.complete_op(stage=self, op=op)
else: else:
logger.warning("{}: disconnection was unexpected".format(self.name)) if cause:
# Regardless of cause, it is now a ConnectionDroppedError op.complete(error=cause)
try: else:
six.raise_from(errors.ConnectionDroppedError, cause) op.complete(
except errors.ConnectionDroppedError as e: error=transport_exceptions.ConnectionDroppedError("transport disconnected")
unhandled_exceptions.exception_caught_in_background_thread(e) )
else:
logger.info("{}: disconnection was unexpected".format(self.name))
# Regardless of cause, it is now a ConnectionDroppedError. log it and swallow it.
# Higher layers will see that we're disconnected and reconnect as necessary.
e = transport_exceptions.ConnectionDroppedError(cause=cause)
handle_exceptions.swallow_unraised_exception(
e,
log_msg="Unexpected disconnection. Safe to ignore since other stages will reconnect.",
log_lvl="info",
)

@ -9,7 +9,7 @@ import threading
import traceback import traceback
from multiprocessing.pool import ThreadPool from multiprocessing.pool import ThreadPool
from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ThreadPoolExecutor
from azure.iot.device.common import unhandled_exceptions from azure.iot.device.common import handle_exceptions
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@ -113,7 +113,7 @@ def _invoke_on_executor_thread(func, thread_name, block=True):
return func(*args, **kwargs) return func(*args, **kwargs)
except Exception as e: except Exception as e:
if not block: if not block:
unhandled_exceptions.exception_caught_in_background_thread(e) handle_exceptions.handle_background_exception(e)
else: else:
raise raise
except BaseException: except BaseException:
@ -166,6 +166,15 @@ def invoke_on_callback_thread_nowait(func):
return _invoke_on_executor_thread(func=func, thread_name="callback", block=False) return _invoke_on_executor_thread(func=func, thread_name="callback", block=False)
def invoke_on_http_thread_nowait(func):
"""
Run the decorated function on the http thread, but don't wait for it to complete
"""
# TODO: Refactor this since this is not in the pipeline thread anymore, so we need to pull this into common.
# Also, the max workers eventually needs to be a bigger number, so that needs to be fixed to allow for more than one HTTP Request at a time.
return _invoke_on_executor_thread(func=func, thread_name="azure_iot_http", block=False)
def _assert_executor_thread(func, thread_name):
"""
Decorator which asserts that the given function only gets called inside the given
@ -196,3 +205,10 @@ def runs_on_pipeline_thread(func):
Decorator which marks a function as only running inside the pipeline thread.
"""
return _assert_executor_thread(func=func, thread_name="pipeline")
def runs_on_http_thread(func):
"""
Decorator which marks a function as only running inside the http thread.
"""
return _assert_executor_thread(func=func, thread_name="azure_iot_http")
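For orientation, the decorators in this module all funnel through _invoke_on_executor_thread, which submits the wrapped call to a dedicated, named single-worker executor. A rough standalone sketch of the fire-and-forget variant follows; the names (_executor, run_on_http_thread_nowait, _log_background_exception) are illustrative stand-ins, not SDK API.

import functools
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=1, thread_name_prefix="azure_iot_http")

def _log_background_exception(future):
    # Stand-in for handle_exceptions.handle_background_exception(e)
    e = future.exception()
    if e:
        print("background exception:", e)

def run_on_http_thread_nowait(func):
    # Submit the call to the named worker thread and return immediately with a Future.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        future = _executor.submit(func, *args, **kwargs)
        future.add_done_callback(_log_background_exception)
        return future
    return wrapper

@run_on_http_thread_nowait
def do_http_work(url):
    return "GET " + url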


@ -0,0 +1,58 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines errors that may be raised from a transport"""
from .chainable_exception import ChainableException
class ConnectionFailedError(ChainableException):
"""
Connection failed to be established
"""
pass
class ConnectionDroppedError(ChainableException):
"""
Previously established connection was dropped
"""
pass
class UnauthorizedError(ChainableException):
"""
Authorization was rejected
"""
pass
class ProtocolClientError(ChainableException):
"""
Error returned from protocol client library
"""
pass
class TlsExchangeAuthError(ChainableException):
"""
Error returned when transport layer exchanges
result in an SSLCertVerification error.
"""
pass
class ProtocolProxyError(ChainableException):
"""
All proxy-related errors.
TODO : Not sure what to name it here. There is a class called Proxy Error already in Pysocks
"""
pass
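The ChainableException base (defined in chainable_exception.py, not shown in this diff) lets each of these errors carry the lower-level error that produced it. A simplified stand-in illustrating the message/cause pattern used throughout this file:

class ChainableExceptionSketch(Exception):
    # Simplified stand-in for azure.iot.device.common.chainable_exception.ChainableException
    def __init__(self, message=None, cause=None):
        super(ChainableExceptionSketch, self).__init__(message)
        self.__cause__ = cause

try:
    raise OSError("socket closed by peer")
except OSError as original:
    wrapped = ChainableExceptionSketch(message="transport disconnected", cause=original)
    # Higher pipeline stages can walk wrapped.__cause__ to find the original error.
    assert wrapped.__cause__ is original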


@ -6,8 +6,10 @@
"""This module defines constants for use across the azure-iot-device package """This module defines constants for use across the azure-iot-device package
""" """
VERSION = "2.0.0-preview.10" VERSION = "2.1.0"
USER_AGENT = "py-azure-iot-device/{version}".format(version=VERSION) IOTHUB_IDENTIFIER = "azure-iot-device-iothub-py"
PROVISIONING_IDENTIFIER = "azure-iot-device-provisioning-py"
IOTHUB_API_VERSION = "2018-06-30" IOTHUB_API_VERSION = "2018-06-30"
PROVISIONING_API_VERSION = "2019-03-31" PROVISIONING_API_VERSION = "2019-03-31"
SECURITY_MESSAGE_INTERFACE_ID = "urn:azureiot:Security:SecurityAgent:1" SECURITY_MESSAGE_INTERFACE_ID = "urn:azureiot:Security:SecurityAgent:1"
TELEMETRY_MESSAGE_SIZE_LIMIT = 262144


@ -0,0 +1,175 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines an exception surface, exposed as part of the azure.iot.device library API"""
from azure.iot.device.common.chainable_exception import ChainableException
# Currently, we are redefining many lower level exceptions in this file, in order to present an API
# surface that will be consistent and unchanging (even though lower level exceptions may change).
# Potentially, this could be somewhat relaxed in the future as the design solidifies.
# ~~~ EXCEPTIONS ~~~
class OperationCancelled(ChainableException):
"""An operation was cancelled"""
pass
# ~~~ CLIENT ERRORS ~~~
class ClientError(ChainableException):
"""Generic error for a client"""
pass
class ConnectionFailedError(ClientError):
"""Failed to establish a connection"""
pass
class ConnectionDroppedError(ClientError):
"""Lost connection while executing operation"""
pass
class CredentialError(ClientError):
"""Could not connect client using given credentials"""
pass
# ~~~ SERVICE ERRORS ~~~
class ServiceError(ChainableException):
"""Error received from an Azure IoT service"""
pass
# NOTE: These are not (yet) in use.
# Because of this they have been commented out to prevent confusion.
# class ArgumentError(ServiceError):
# """Service returned 400"""
# pass
# class UnauthorizedError(ServiceError):
# """Service returned 401"""
# pass
# class QuotaExceededError(ServiceError):
# """Service returned 403"""
# pass
# class NotFoundError(ServiceError):
# """Service returned 404"""
# pass
# class DeviceTimeoutError(ServiceError):
# """Service returned 408"""
# # TODO: is this a method call error? If so, do we retry?
# pass
# class DeviceAlreadyExistsError(ServiceError):
# """Service returned 409"""
# pass
# class InvalidEtagError(ServiceError):
# """Service returned 412"""
# pass
# class MessageTooLargeError(ServiceError):
# """Service returned 413"""
# pass
# class ThrottlingError(ServiceError):
# """Service returned 429"""
# pass
# class InternalServiceError(ServiceError):
# """Service returned 500"""
# pass
# class BadDeviceResponseError(ServiceError):
# """Service returned 502"""
# # TODO: is this a method invoke thing?
# pass
# class ServiceUnavailableError(ServiceError):
# """Service returned 503"""
# pass
# class ServiceTimeoutError(ServiceError):
# """Service returned 504"""
# pass
# class FailedStatusCodeError(ServiceError):
# """Service returned unknown status code"""
# pass
# status_code_to_error = {
# 400: ArgumentError,
# 401: UnauthorizedError,
# 403: QuotaExceededError,
# 404: NotFoundError,
# 408: DeviceTimeoutError,
# 409: DeviceAlreadyExistsError,
# 412: InvalidEtagError,
# 413: MessageTooLargeError,
# 429: ThrottlingError,
# 500: InternalServiceError,
# 502: BadDeviceResponseError,
# 503: ServiceUnavailableError,
# 504: ServiceTimeoutError,
# }
# def error_from_status_code(status_code, message=None):
# """
# Return an Error object from a failed status code
# :param int status_code: Status code returned from failed operation
# :returns: Error object
# """
# if status_code in status_code_to_error:
# return status_code_to_error[status_code](message)
# else:
# return FailedStatusCodeError(message)
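A hedged usage sketch of this exception surface from application code: callers are expected to catch these public errors (rather than the pipeline or transport errors they wrap) around client operations such as connect.

from azure.iot.device import exceptions

async def connect_with_handling(client):
    try:
        await client.connect()
    except exceptions.CredentialError:
        print("Credentials rejected - check the connection string or certificate")
        raise
    except (exceptions.ConnectionFailedError, exceptions.ConnectionDroppedError):
        print("Network-level failure - safe to retry later")
        raise
    except exceptions.ClientError:
        print("Unexpected client-side failure")
        raise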


@ -5,7 +5,6 @@ as a Device or Module.
""" """
from .sync_clients import IoTHubDeviceClient, IoTHubModuleClient from .sync_clients import IoTHubDeviceClient, IoTHubModuleClient
from .sync_inbox import InboxEmpty from .models import Message, MethodRequest, MethodResponse
from .models import Message, MethodResponse
__all__ = ["IoTHubDeviceClient", "IoTHubModuleClient", "Message", "InboxEmpty", "MethodResponse"] __all__ = ["IoTHubDeviceClient", "IoTHubModuleClient", "Message", "MethodRequest", "MethodResponse"]


@ -14,7 +14,6 @@ import io
from . import auth
from . import pipeline
logger = logging.getLogger(__name__)
# A note on implementation:
@ -23,57 +22,102 @@ logger = logging.getLogger(__name__)
# pipeline configuration to be specifically tailored to the method of instantiation.
# For instance, .create_from_connection_string and .create_from_edge_environment both can use
# SymmetricKeyAuthenticationProviders to instantiate pipeline(s), but only .create_from_edge_environment
# should use it to instantiate an HTTPPipeline. If the initializer accepted an auth provider, and then
# used it to create pipelines, this detail would be lost, as there would be no way to tell if a
# SymmetricKeyAuthenticationProvider was intended to be part of an Edge scenario or not.
def _validate_kwargs(**kwargs):
"""Helper function to validate user provided kwargs.
Raises TypeError if an invalid option has been provided"""
valid_kwargs = [
"product_info",
"websockets",
"cipher",
"server_verification_cert",
"proxy_options",
]
for kwarg in kwargs:
if kwarg not in valid_kwargs:
raise TypeError("Got an unexpected keyword argument '{}'".format(kwarg))
def _get_pipeline_config_kwargs(**kwargs):
"""Helper function to get a subset of user provided kwargs relevant to IoTHubPipelineConfig"""
new_kwargs = {}
if "product_info" in kwargs:
new_kwargs["product_info"] = kwargs["product_info"]
if "websockets" in kwargs:
new_kwargs["websockets"] = kwargs["websockets"]
if "cipher" in kwargs:
new_kwargs["cipher"] = kwargs["cipher"]
if "proxy_options" in kwargs:
new_kwargs["proxy_options"] = kwargs["proxy_options"]
return new_kwargs
@six.add_metaclass(abc.ABCMeta)
class AbstractIoTHubClient(object):
"""A superclass representing a generic IoTHub client.
This class needs to be extended for specific clients.
"""
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a generic client.
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
self._iothub_pipeline = iothub_pipeline
self._http_pipeline = http_pipeline
@classmethod
def create_from_connection_string(cls, connection_string, **kwargs):
"""
Instantiate the client from an IoTHub device or module connection string.
:param str connection_string: The connection string for the IoTHub you wish to connect to.
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:param proxy_options: Options for sending traffic through proxy servers.
:type proxy_options: :class:`azure.iot.device.common.proxy_options`
:raises: ValueError if given an invalid connection_string.
:raises: TypeError if given an unrecognized parameter.
:returns: An instance of an IoTHub client that uses a connection string for authentication.
"""
# TODO: Make this device/module specific and reject non-matching connection strings.
# This will require refactoring of the auth package to use common objects (e.g. ConnectionString)
# in order to differentiate types of connection strings.
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
if cls.__name__ == "IoTHubDeviceClient":
pipeline_configuration.blob_upload = True
# Auth Provider setup
authentication_provider = auth.SymmetricKeyAuthenticationProvider.parse(connection_string)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
@abc.abstractmethod
def connect(self):
@ -107,25 +151,109 @@ class AbstractIoTHubClient(object):
def receive_twin_desired_properties_patch(self):
pass
@property
def connected(self):
"""
Read-only property to indicate if the transport is connected or not.
"""
return self._iothub_pipeline.connected
@six.add_metaclass(abc.ABCMeta)
class AbstractIoTHubDeviceClient(AbstractIoTHubClient):
@classmethod
def create_from_x509_certificate(cls, x509, hostname, device_id, **kwargs):
"""
Instantiate a client using X509 certificate authentication.
:param str hostname: Host running the IotHub.
Can be found in the Azure portal in the Overview tab as the string hostname.
:param x509: The complete x509 certificate object.
To use the certificate the enrollment object needs to contain cert
(either the root certificate or one of the intermediate CA certificates).
If the cert comes from a CER file, it needs to be base64 encoded.
:type x509: :class:`azure.iot.device.X509`
:param str device_id: The ID used to uniquely identify a device in the IoTHub
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:param proxy_options: Options for sending traffic through proxy servers.
:type proxy_options: :class:`azure.iot.device.common.proxy_options`
:raises: TypeError if given an unrecognized parameter.
:returns: An instance of an IoTHub client that uses an X509 certificate for authentication.
"""
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
pipeline_configuration.blob_upload = True  # Blob Upload is a feature on Device Clients
# Auth Provider setup
authentication_provider = auth.X509AuthenticationProvider(
x509=x509, hostname=hostname, device_id=device_id
)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
@classmethod
def create_from_symmetric_key(cls, symmetric_key, hostname, device_id, **kwargs):
"""
Instantiate a client using symmetric key authentication.
:param symmetric_key: The symmetric key.
:param str hostname: Host running the IotHub.
Can be found in the Azure portal in the Overview tab as the string hostname.
:param device_id: The device ID
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:raises: TypeError if given an unrecognized parameter.
:return: An instance of an IoTHub client that uses a symmetric key for authentication.
"""
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
pipeline_configuration.blob_upload = True # Blob Upload is a feature on Device Clients
# Auth Provider setup
authentication_provider = auth.SymmetricKeyAuthenticationProvider(
hostname=hostname, device_id=device_id, module_id=None, shared_access_key=symmetric_key
)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
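A hedged usage sketch of the factory classmethods above, as called on the concrete device client; the hostname, device ID, key, and file paths are placeholders.

from azure.iot.device import IoTHubDeviceClient, X509

conn_str_client = IoTHubDeviceClient.create_from_connection_string(
    "HostName=myhub.azure-devices.net;DeviceId=mydevice;SharedAccessKey=<key>",
    websockets=True,  # optional Configuration Option kwarg, checked by _validate_kwargs
)

x509 = X509(cert_file="./device_cert.pem", key_file="./device_key.pem")
x509_client = IoTHubDeviceClient.create_from_x509_certificate(
    x509=x509, hostname="myhub.azure-devices.net", device_id="mydevice"
)

symkey_client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key="<base64 key>", hostname="myhub.azure-devices.net", device_id="mydevice"
)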
@abc.abstractmethod
def receive_message(self):
@ -134,28 +262,42 @@ class AbstractIoTHubDeviceClient(AbstractIoTHubClient):
@six.add_metaclass(abc.ABCMeta)
class AbstractIoTHubModuleClient(AbstractIoTHubClient):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a module client.
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super(AbstractIoTHubModuleClient, self).__init__(iothub_pipeline, http_pipeline)
@classmethod
def create_from_edge_environment(cls, **kwargs):
"""
Instantiate the client from the IoT Edge environment.
This method can only be run from inside an IoT Edge container, or in a debugging
environment configured for Edge development (e.g. Visual Studio, Visual Studio Code)
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:raises: OSError if the IoT Edge container is not configured correctly.
:raises: ValueError if debug variables are invalid.
:returns: An instance of an IoTHub client that uses the IoT Edge environment for
authentication.
""" """
_validate_kwargs(**kwargs)
if kwargs.get("server_verification_cert"):
raise TypeError(
"'server_verification_cert' is not supported by clients using an IoT Edge environment"
)
# First try the regular Edge container variables
try:
hostname = os.environ["IOTEDGE_IOTHUBHOSTNAME"]
@ -172,16 +314,16 @@ class AbstractIoTHubModuleClient(AbstractIoTHubClient):
try:
connection_string = os.environ["EdgeHubConnectionString"]
ca_cert_filepath = os.environ["EdgeModuleCACertificateFile"]
except KeyError as e:
new_err = OSError("IoT Edge environment not configured correctly")
new_err.__cause__ = e
raise new_err
# TODO: variant server_verification_cert file vs data object that would remove the need for this fopen
# Read the certificate file to pass it on as a string
try:
with io.open(ca_cert_filepath, mode="r") as ca_cert_file:
server_verification_cert = ca_cert_file.read()
except (OSError, IOError) as e:
# In Python 2, a non-existent file raises IOError, and an invalid file raises an IOError.
# In Python 3, a non-existent file raises FileNotFoundError, and an invalid file raises an OSError.
# However, FileNotFoundError inherits from OSError, and IOError has been turned into an alias for OSError,
@ -189,14 +331,20 @@ class AbstractIoTHubModuleClient(AbstractIoTHubClient):
# Unfortunately, we can't distinguish cause of error from error type, so the raised ValueError has a generic
# message. If, in the future, we want to add detail, this could be accomplished by inspecting the e.errno
# attribute
new_err = ValueError("Invalid CA certificate file")
new_err.__cause__ = e
raise new_err
# Use Symmetric Key authentication for local dev experience.
try:
authentication_provider = auth.SymmetricKeyAuthenticationProvider.parse(
connection_string
)
except ValueError:
raise
authentication_provider.server_verification_cert = server_verification_cert
else:
# Use an HSM for authentication in the general case
try:
authentication_provider = auth.IoTEdgeAuthenticationProvider(
hostname=hostname,
device_id=device_id,
@ -206,27 +354,70 @@ class AbstractIoTHubModuleClient(AbstractIoTHubClient):
workload_uri=workload_uri,
api_version=api_version,
)
except auth.IoTEdgeError as e:
new_err = OSError("Unexpected failure in IoTEdge")
new_err.__cause__ = e
raise new_err
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
pipeline_configuration.method_invoke = (
True
) # Method Invoke is allowed on modules created from edge environment
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
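A hedged usage sketch for a module running inside an IoT Edge container (or a debug environment configured as described above); no credentials are passed explicitly because they come from the environment.

from azure.iot.device.aio import IoTHubModuleClient

async def edge_module_main():
    module_client = IoTHubModuleClient.create_from_edge_environment()
    await module_client.connect()
    await module_client.send_message_to_output("telemetry from module", "output1")
    await module_client.disconnect()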
@classmethod
def create_from_x509_certificate(cls, x509, hostname, device_id, module_id, **kwargs):
"""
Instantiate a client using X509 certificate authentication.
:param str hostname: Host running the IotHub.
Can be found in the Azure portal in the Overview tab as the string hostname.
:param x509: The complete x509 certificate object.
To use the certificate the enrollment object needs to contain cert
(either the root certificate or one of the intermediate CA certificates).
If the cert comes from a CER file, it needs to be base64 encoded.
:type x509: :class:`azure.iot.device.X509`
:param str device_id: The ID used to uniquely identify a device in the IoTHub
:param str module_id: The ID used to uniquely identify a module on a device on the IoTHub.
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:raises: TypeError if given an unrecognized parameter.
:returns: An instance of an IoTHub client that uses an X509 certificate for authentication.
"""
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
# Auth Provider setup
authentication_provider = auth.X509AuthenticationProvider(
x509=x509, hostname=hostname, device_id=device_id, module_id=module_id
)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
@abc.abstractmethod
def send_message_to_output(self, message, output_name):


@ -16,12 +16,38 @@ from azure.iot.device.iothub.abstract_clients import (
)
from azure.iot.device.iothub.models import Message
from azure.iot.device.iothub.pipeline import constant
from azure.iot.device.iothub.pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
from azure.iot.device.iothub.inbox_manager import InboxManager
from .async_inbox import AsyncClientInbox
from azure.iot.device import constant as device_constant
logger = logging.getLogger(__name__)
async def handle_result(callback):
try:
return await callback.completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except pipeline_exceptions.TlsExchangeAuthError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client due to TLS exchanges.", cause=e
)
except pipeline_exceptions.ProtocolProxyError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client raised due to proxy connections.", cause=e
)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
class GenericIoTHubClient(AbstractIoTHubClient):
"""A super class representing a generic asynchronous client.
This class needs to be extended for specific clients.
@ -33,8 +59,10 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The IoTHubPipeline used for the client
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
:param http_pipeline: The HTTPPipeline used for the client
:type http_pipeline: :class:`azure.iot.device.iothub.pipeline.HTTPPipeline`
"""
# Depending on the subclass calling this __init__, there could be different arguments,
# and the super() call could call a different class, due to the different MROs
@ -62,25 +90,37 @@ class GenericIoTHubClient(AbstractIoTHubClient):
The destination is chosen based on the credentials passed via the auth_provider parameter
that was provided when this object was initialized.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Connecting to Hub...") logger.info("Connecting to Hub...")
connect_async = async_adapter.emulate_async(self._iothub_pipeline.connect) connect_async = async_adapter.emulate_async(self._iothub_pipeline.connect)
callback = async_adapter.AwaitableCallback() callback = async_adapter.AwaitableCallback()
await connect_async(callback=callback) await connect_async(callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully connected to Hub") logger.info("Successfully connected to Hub")
async def disconnect(self): async def disconnect(self):
"""Disconnect the client from the Azure IoT Hub or Azure IoT Edge Hub instance. """Disconnect the client from the Azure IoT Hub or Azure IoT Edge Hub instance.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Disconnecting from Hub...") logger.info("Disconnecting from Hub...")
disconnect_async = async_adapter.emulate_async(self._iothub_pipeline.disconnect) disconnect_async = async_adapter.emulate_async(self._iothub_pipeline.disconnect)
callback = async_adapter.AwaitableCallback() callback = async_adapter.AwaitableCallback()
await disconnect_async(callback=callback) await disconnect_async(callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully disconnected from Hub") logger.info("Successfully disconnected from Hub")
@ -92,16 +132,30 @@ class GenericIoTHubClient(AbstractIoTHubClient):
:param message: The actual message to send. Anything passed that is not an instance of the :param message: The actual message to send. Anything passed that is not an instance of the
Message class will be converted to Message object. Message class will be converted to Message object.
:type message: :class:`azure.iot.device.Message` or str
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
""" """
if not isinstance(message, Message): if not isinstance(message, Message):
message = Message(message) message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of telemetry message can not exceed 256 KB.")
logger.info("Sending message to Hub...") logger.info("Sending message to Hub...")
send_message_async = async_adapter.emulate_async(self._iothub_pipeline.send_message) send_message_async = async_adapter.emulate_async(self._iothub_pipeline.send_message)
callback = async_adapter.AwaitableCallback() callback = async_adapter.AwaitableCallback()
await send_message_async(message, callback=callback) await send_message_async(message, callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully sent message to Hub") logger.info("Successfully sent message to Hub")
@ -115,6 +169,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
a different call to receive_method will be received.
:returns: MethodRequest object representing the received method request.
:rtype: `azure.iot.device.MethodRequest`
"""
if not self._iothub_pipeline.feature_enabled[constant.METHODS]:
await self._enable_feature(constant.METHODS)
@ -133,6 +188,16 @@ class GenericIoTHubClient(AbstractIoTHubClient):
function will open the connection before sending the event.
:param method_response: The MethodResponse to send
:type method_response: :class:`azure.iot.device.MethodResponse`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Sending method response to Hub...") logger.info("Sending method response to Hub...")
send_method_response_async = async_adapter.emulate_async( send_method_response_async = async_adapter.emulate_async(
@ -143,7 +208,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
# TODO: maybe consolidate method_request, result and status into a new object # TODO: maybe consolidate method_request, result and status into a new object
await send_method_response_async(method_response, callback=callback) await send_method_response_async(method_response, callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully sent method response to Hub") logger.info("Successfully sent method response to Hub")
@ -158,7 +223,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
callback = async_adapter.AwaitableCallback() callback = async_adapter.AwaitableCallback()
await enable_feature_async(feature_name, callback=callback) await enable_feature_async(feature_name, callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully enabled feature:" + feature_name) logger.info("Successfully enabled feature:" + feature_name)
@ -166,7 +231,17 @@ class GenericIoTHubClient(AbstractIoTHubClient):
""" """
Gets the device or module twin from the Azure IoT Hub or Azure IoT Edge Hub service. Gets the device or module twin from the Azure IoT Hub or Azure IoT Edge Hub service.
:returns: Twin object which was retrieved from the hub :returns: Complete Twin as a JSON dict
:rtype: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Getting twin") logger.info("Getting twin")
@ -177,7 +252,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
callback = async_adapter.AwaitableCallback(return_arg_name="twin") callback = async_adapter.AwaitableCallback(return_arg_name="twin")
await get_twin_async(callback=callback) await get_twin_async(callback=callback)
twin = await callback.completion() twin = await handle_result(callback)
logger.info("Successfully retrieved twin") logger.info("Successfully retrieved twin")
return twin return twin
@ -188,8 +263,17 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If the service returns an error on the patch operation, this function will raise the If the service returns an error on the patch operation, this function will raise the
appropriate error. appropriate error.
:param reported_properties_patch: :param reported_properties_patch: Twin Reported Properties patch as a JSON dict
:type reported_properties_patch: dict, str, int, float, bool, or None (JSON compatible values) :type reported_properties_patch: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Patching twin reported properties") logger.info("Patching twin reported properties")
@ -202,7 +286,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
callback = async_adapter.AwaitableCallback() callback = async_adapter.AwaitableCallback()
await patch_twin_async(patch=reported_properties_patch, callback=callback) await patch_twin_async(patch=reported_properties_patch, callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully sent twin patch") logger.info("Successfully sent twin patch")
@ -212,7 +296,8 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If no method request is yet available, will wait until it is available. If no method request is yet available, will wait until it is available.
:returns: desired property patch. This can be dict, str, int, float, bool, or None (JSON compatible values) :returns: Twin Desired Properties patch as a JSON dict
:rtype: dict
""" """
if not self._iothub_pipeline.feature_enabled[constant.TWIN_PATCHES]: if not self._iothub_pipeline.feature_enabled[constant.TWIN_PATCHES]:
await self._enable_feature(constant.TWIN_PATCHES) await self._enable_feature(constant.TWIN_PATCHES)
@ -223,6 +308,48 @@ class GenericIoTHubClient(AbstractIoTHubClient):
logger.info("twin patch received") logger.info("twin patch received")
return patch return patch
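A hedged sketch tying the three twin operations above together; property names are placeholders.

async def twin_demo(client):
    twin = await client.get_twin()  # complete twin as a dict with "desired" and "reported" sections
    print(twin)
    await client.patch_twin_reported_properties({"firmwareVersion": "1.2.3"})
    patch = await client.receive_twin_desired_properties_patch()  # waits for the next desired patch
    print("desired properties changed:", patch)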
async def get_storage_info_for_blob(self, blob_name):
"""Sends a POST request over HTTP to an IoTHub endpoint that will return information for uploading via the Azure Storage Account linked to the IoTHub your device is connected to.
:param str blob_name: The name in string format of the blob that will be uploaded using the storage API. This name will be used to generate the proper credentials for Storage, and needs to match what will be used with the Azure Storage SDK to perform the blob upload.
:returns: A JSON-like (dictionary) object from IoT Hub that will contain relevant information including: correlationId, hostName, containerName, blobName, sasToken.
"""
get_storage_info_for_blob_async = async_adapter.emulate_async(
self._http_pipeline.get_storage_info_for_blob
)
callback = async_adapter.AwaitableCallback(return_arg_name="storage_info")
await get_storage_info_for_blob_async(blob_name=blob_name, callback=callback)
storage_info = await handle_result(callback)
logger.info("Successfully retrieved storage_info")
return storage_info
async def notify_blob_upload_status(
self, correlation_id, is_success, status_code, status_description
):
"""When the upload is complete, the device sends a POST request to the IoT Hub endpoint with information on the status of an upload to blob attempt. This is used by IoT Hub to notify listening clients.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
"""
notify_blob_upload_status_async = async_adapter.emulate_async(
self._http_pipeline.notify_blob_upload_status
)
callback = async_adapter.AwaitableCallback()
await notify_blob_upload_status_async(
correlation_id=correlation_id,
is_success=is_success,
status_code=status_code,
status_description=status_description,
callback=callback,
)
await handle_result(callback)
logger.info("Successfully notified blob upload status")
class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
"""An asynchronous device client that connects to an Azure IoT Hub instance.
@ -230,16 +357,16 @@ class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
Intended for usage with Python 3.5.3+
"""
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for an IoTHubDeviceClient.
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super().__init__(iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline)
self._iothub_pipeline.on_c2d_message_received = self._inbox_manager.route_c2d_message
async def receive_message(self):
@ -248,6 +375,7 @@ class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
If no message is yet available, will wait until an item is available.
:returns: Message that was sent from the Azure IoT Hub.
:rtype: :class:`azure.iot.device.Message`
"""
if not self._iothub_pipeline.feature_enabled[constant.C2D_MSG]:
await self._enable_feature(constant.C2D_MSG)
@ -265,18 +393,16 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
Intended for usage with Python 3.5.3+
"""
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for an IoTHubModuleClient.
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super().__init__(iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline)
self._iothub_pipeline.on_input_message_received = self._inbox_manager.route_input_message
async def send_message_to_output(self, message, output_name):
@ -287,13 +413,27 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
If the connection to the service has not previously been opened by a call to connect, this
function will open the connection before sending the event.
:param message: Message to send to the given output. Anything passed that is not an
instance of the Message class will be converted to Message object.
:type message: :class:`azure.iot.device.Message` or str
:param str output_name: Name of the output to send the event to.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
""" """
if not isinstance(message, Message): if not isinstance(message, Message):
message = Message(message) message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of message can not exceed 256 KB.")
message.output_name = output_name message.output_name = output_name
logger.info("Sending message to output:" + output_name + "...") logger.info("Sending message to output:" + output_name + "...")
@ -303,7 +443,7 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
callback = async_adapter.AwaitableCallback() callback = async_adapter.AwaitableCallback()
await send_output_event_async(message, callback=callback) await send_output_event_async(message, callback=callback)
await callback.completion() await handle_result(callback)
logger.info("Successfully sent message to output: " + output_name) logger.info("Successfully sent message to output: " + output_name)
@ -313,7 +453,9 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
If no message is yet available, will wait until an item is available.
:param str input_name: The input name to receive a message on.
:returns: Message that was sent to the specified input.
:rtype: :class:`azure.iot.device.Message`
"""
if not self._iothub_pipeline.feature_enabled[constant.INPUT_MSG]:
await self._enable_feature(constant.INPUT_MSG)
@ -323,3 +465,21 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
message = await inbox.get()
logger.info("Input message received on: " + input_name)
return message
async def invoke_method(self, method_params, device_id, module_id=None):
"""Invoke a method from your client onto a device or module client, and receive the response to the method call.
:param dict method_params: Should contain a method_name, payload, connect_timeout_in_seconds, response_timeout_in_seconds.
:param str device_id: Device ID of the target device where the method will be invoked.
:param str module_id: Module ID of the target module where the method will be invoked. (Optional)
:returns: method_result should contain a status, and a payload
:rtype: dict
"""
invoke_method_async = async_adapter.emulate_async(self._http_pipeline.invoke_method)
callback = async_adapter.AwaitableCallback(return_arg_name="invoke_method_response")
await invoke_method_async(device_id, method_params, callback=callback, module_id=module_id)
method_response = await handle_result(callback)
logger.info("Successfully invoked method")
return method_response
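A hedged usage sketch of invoke_method from a module client created via create_from_edge_environment (the only configuration where method_invoke is enabled on the pipeline). The method_params key names below mirror the docstring above; verify them against the IoT Hub direct-method request schema before relying on them.

async def reboot_leaf_device(module_client):
    method_params = {
        "method_name": "reboot",
        "payload": {"delay": 5},
        "connect_timeout_in_seconds": 30,
        "response_timeout_in_seconds": 30,
    }
    result = await module_client.invoke_method(method_params, device_id="leaf-device-1")
    print(result)  # expected to contain a status and a payload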


@ -10,6 +10,7 @@ import abc
import logging
import math
import six
import weakref
from threading import Timer
import six.moves.urllib as urllib
from .authentication_provider import AuthenticationProvider
@ -60,10 +61,9 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
self._token_update_timer = None
self.shared_access_key_name = None
self.sas_token_str = None
self.on_sas_token_updated_handler_list = []
def __del__(self):
self._cancel_token_update_timer()
def generate_new_sas_token(self):
@ -81,14 +81,14 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
If self.token_update_callback is set, this callback will be called to notify the
pipeline that a new token is available. The pipeline is responsible for doing
whatever is necessary to leverage the new token when the on_sas_token_updated_handler_list
function is called.
The token that is generated expires at some point in the future, based on the token
renewal interval and the token renewal margin. When a token is first generated, the
authorization provider object will set a timer which will be responsible for renewing
the token before it expires. When this timer fires, it will automatically generate
a new sas token and notify the pipeline by calling self.on_sas_token_updated_handler_list.
The token update timer is set based on two numbers: self.token_validity_period and
self.token_renewal_margin
@ -144,7 +144,11 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
t = self._token_update_timer
self._token_update_timer = None
if t:
logger.debug(
"Canceling token update timer for (%s,%s)",
self.device_id,
self.module_id if self.module_id else "",
)
t.cancel()
def _schedule_token_update(self, seconds_until_update):
@ -160,9 +164,30 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
seconds_until_update,
)
# It's important to use a weak reference to self inside this timer function
# because we don't want the timer to prevent this object (`self`) from being collected.
#
# We want `self` to get collected when the pipeline gets collected, and
# we want the pipeline to get collected when the client object gets collected.
# This way, everything gets cleaned up when the user is done with the client object,
# as expected.
#
# If timerfunc used `self` directly, that would be a strong reference, and that strong
# reference would prevent `self` from being collected as long as the timer existed.
#
# If this isn't collected when the client is collected, then the object that implements the
# on_sas_token_updated_handler doesn't get collected. Since that object is part of the
# pipeline, a major part of the pipeline ends up staying around, probably orphaned from
# the client. Since that orphaned part of the pipeline contains Paho, bad things can happen
# if we don't clean up Paho correctly. This is especially noticeable if one process
# destroys a client object and creates a new one.
#
self_weakref = weakref.ref(self)
def timerfunc(): def timerfunc():
logger.debug("Timed SAS update for (%s,%s)", self.device_id, self.module_id) this = self_weakref()
self.generate_new_sas_token() logger.debug("Timed SAS update for (%s,%s)", this.device_id, this.module_id)
this.generate_new_sas_token()
self._token_update_timer = Timer(seconds_until_update, timerfunc) self._token_update_timer = Timer(seconds_until_update, timerfunc)
self._token_update_timer.daemon = True self._token_update_timer.daemon = True
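The weak-reference comment above is the core of this change; a rough standalone sketch of the same pattern using only the standard library (the names here are illustrative, not the SDK's):

```python
import weakref
from threading import Timer


class TokenRenewer(object):
    """Illustrative only: renew something on a timer without the timer
    keeping this object alive."""

    def __init__(self, interval):
        # Weak reference: the Timer's closure must not pin `self` in memory.
        self_weakref = weakref.ref(self)

        def timerfunc():
            this = self_weakref()
            if this:  # the object may already have been garbage collected
                this.renew()

        self._timer = Timer(interval, timerfunc)
        self._timer.daemon = True
        self._timer.start()

    def renew(self):
        print("renewing token")

    def __del__(self):
        self._timer.cancel()
```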
@ -173,14 +198,15 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
In response to this event, clients should re-initiate their connection in order to use In response to this event, clients should re-initiate their connection in order to use
the updated sas token. the updated sas token.
""" """
if self.on_sas_token_updated_handler: if bool(len(self.on_sas_token_updated_handler_list)):
logger.debug( logger.debug(
"sending token update notification for (%s, %s)", self.device_id, self.module_id "sending token update notification for (%s, %s)", self.device_id, self.module_id
) )
self.on_sas_token_updated_handler() for x in self.on_sas_token_updated_handler_list:
x()
else: else:
logger.warning( logger.warning(
"_notify_token_updated: on_sas_token_updated_handler not set. Doing nothing." "_notify_token_updated: on_sas_token_updated_handler_list not set. Doing nothing."
) )
def get_current_sas_token(self): def get_current_sas_token(self):


@ -12,14 +12,15 @@ import requests
import requests_unixsocket import requests_unixsocket
import logging import logging
from .base_renewable_token_authentication_provider import BaseRenewableTokenAuthenticationProvider from .base_renewable_token_authentication_provider import BaseRenewableTokenAuthenticationProvider
from azure.iot.device import constant from azure.iot.device.common.chainable_exception import ChainableException
from azure.iot.device.product_info import ProductInfo
requests_unixsocket.monkeypatch() requests_unixsocket.monkeypatch()
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
class IoTEdgeError(Exception): class IoTEdgeError(ChainableException):
pass pass
@ -56,7 +57,7 @@ class IoTEdgeAuthenticationProvider(BaseRenewableTokenAuthenticationProvider):
workload_uri=workload_uri, workload_uri=workload_uri,
) )
self.gateway_hostname = gateway_hostname self.gateway_hostname = gateway_hostname
self.ca_cert = self.hsm.get_trust_bundle() self.server_verification_cert = self.hsm.get_trust_bundle()
# TODO: reconsider this design when refactoring the BaseRenewableToken auth parent # TODO: reconsider this design when refactoring the BaseRenewableToken auth parent
# TODO: Consider handling the quoting within this function, and renaming quoted_resource_uri to resource_uri # TODO: Consider handling the quoting within this function, and renaming quoted_resource_uri to resource_uri
@ -107,7 +108,7 @@ class IoTEdgeHsm(object):
Return the trust bundle that can be used to validate the server-side SSL Return the trust bundle that can be used to validate the server-side SSL
TLS connection that we use to talk to edgeHub. TLS connection that we use to talk to edgeHub.
:return: The CA certificate to use for connections to the Azure IoT Edge :return: The server verification certificate to use for connections to the Azure IoT Edge
instance, as a PEM certificate in string form. instance, as a PEM certificate in string form.
:raises: IoTEdgeError if unable to retrieve the certificate. :raises: IoTEdgeError if unable to retrieve the certificate.
@ -115,23 +116,23 @@ class IoTEdgeHsm(object):
r = requests.get( r = requests.get(
self.workload_uri + "trust-bundle", self.workload_uri + "trust-bundle",
params={"api-version": self.api_version}, params={"api-version": self.api_version},
headers={"User-Agent": urllib.parse.quote_plus(constant.USER_AGENT)}, headers={"User-Agent": urllib.parse.quote_plus(ProductInfo.get_iothub_user_agent())},
) )
# Validate that the request was successful # Validate that the request was successful
try: try:
r.raise_for_status() r.raise_for_status()
except requests.exceptions.HTTPError: except requests.exceptions.HTTPError as e:
raise IoTEdgeError("Unable to get trust bundle from EdgeHub") raise IoTEdgeError(message="Unable to get trust bundle from EdgeHub", cause=e)
# Decode the trust bundle # Decode the trust bundle
try: try:
bundle = r.json() bundle = r.json()
except ValueError: except ValueError as e:
raise IoTEdgeError("Unable to decode trust bundle") raise IoTEdgeError(message="Unable to decode trust bundle", cause=e)
# Retrieve the certificate # Retrieve the certificate
try: try:
cert = bundle["certificate"] cert = bundle["certificate"]
except KeyError: except KeyError as e:
raise IoTEdgeError("No certificate in trust bundle") raise IoTEdgeError(message="No certificate in trust bundle", cause=e)
return cert return cert
def sign(self, data_str): def sign(self, data_str):
@ -161,21 +162,21 @@ class IoTEdgeHsm(object):
r = requests.post( # TODO: can we use json field instead of data? r = requests.post( # TODO: can we use json field instead of data?
url=path, url=path,
params={"api-version": self.api_version}, params={"api-version": self.api_version},
headers={"User-Agent": urllib.parse.quote_plus(constant.USER_AGENT)}, headers={"User-Agent": urllib.parse.quote_plus(ProductInfo.get_iothub_user_agent())},
data=json.dumps(sign_request), data=json.dumps(sign_request),
) )
try: try:
r.raise_for_status() r.raise_for_status()
except requests.exceptions.HTTPError: except requests.exceptions.HTTPError as e:
raise IoTEdgeError("Unable to sign data") raise IoTEdgeError(message="Unable to sign data", cause=e)
try: try:
sign_response = r.json() sign_response = r.json()
except ValueError: except ValueError as e:
raise IoTEdgeError("Unable to decode signed data") raise IoTEdgeError(message="Unable to decode signed data", cause=e)
try: try:
signed_data_str = sign_response["digest"] signed_data_str = sign_response["digest"]
except KeyError: except KeyError as e:
raise IoTEdgeError("No signed data received") raise IoTEdgeError(message="No signed data received", cause=e)
return urllib.parse.quote(signed_data_str) return urllib.parse.quote(signed_data_str)
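The `message=`/`cause=` pattern above keeps the original exception attached to the wrapper; a minimal approximation of such a chainable exception (not the SDK's actual `ChainableException` implementation):

```python
class ChainableError(Exception):
    """Illustrative: an exception that records the exception which caused it."""

    def __init__(self, message=None, cause=None):
        super(ChainableError, self).__init__(message)
        self.cause = cause

    def __str__(self):
        if self.cause:
            return "{} (caused by {!r})".format(self.args[0], self.cause)
        return str(self.args[0])


try:
    raise KeyError("certificate")
except KeyError as e:
    wrapped = ChainableError(message="No certificate in trust bundle", cause=e)
    print(wrapped)  # No certificate in trust bundle (caused by KeyError('certificate'))
```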


@ -64,7 +64,7 @@ class SymmetricKeyAuthenticationProvider(BaseRenewableTokenAuthenticationProvide
self.shared_access_key = shared_access_key self.shared_access_key = shared_access_key
self.shared_access_key_name = shared_access_key_name self.shared_access_key_name = shared_access_key_name
self.gateway_hostname = gateway_hostname self.gateway_hostname = gateway_hostname
self.ca_cert = None self.server_verification_cert = None
@staticmethod @staticmethod
def parse(connection_string): def parse(connection_string):


@ -6,6 +6,7 @@
"""This module contains a class representing messages that are sent or received. """This module contains a class representing messages that are sent or received.
""" """
from azure.iot.device import constant from azure.iot.device import constant
import sys
# TODO: Revise this class. Does all of this REALLY need to be here? # TODO: Revise this class. Does all of this REALLY need to be here?
@ -15,7 +16,7 @@ class Message(object):
:ivar data: The data that constitutes the payload :ivar data: The data that constitutes the payload
:ivar custom_properties: Dictionary of custom message properties :ivar custom_properties: Dictionary of custom message properties
:ivar lock_token: Used by receiver to abandon, reject or complete the message :ivar lock_token: Used by receiver to abandon, reject or complete the message
:ivar message id: A user-settlable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''} :ivar message id: A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}
:ivar sequence_number: A number (unique per device-queue) assigned by IoT Hub to each message :ivar sequence_number: A number (unique per device-queue) assigned by IoT Hub to each message
:ivar to: A destination specified for Cloud-to-Device (C2D) messages :ivar to: A destination specified for Cloud-to-Device (C2D) messages
:ivar expiry_time_utc: Date and time of message expiration in UTC format :ivar expiry_time_utc: Date and time of message expiration in UTC format
@ -36,8 +37,8 @@ class Message(object):
:param data: The data that constitutes the payload :param data: The data that constitutes the payload
:param str message_id: A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''} :param str message_id: A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}
:param str content_encoding: Content encoding of the message data. Can be 'utf-8', 'utf-16' or 'utf-32' :param str content_encoding: Content encoding of the message data. Other values can be utf-16' or 'utf-32'
:param str content_type: Content type property used to routes with the message body. Can be 'application/json' :param str content_type: Content type property used to routes with the message body.
:param str output_name: Name of the output that the event is being sent to. :param str output_name: Name of the output that the event is being sent to.
""" """
self.data = data self.data = data
@ -70,3 +71,16 @@ class Message(object):
def __str__(self): def __str__(self):
return str(self.data) return str(self.data)
def get_size(self):
total = 0
total = total + sum(
sys.getsizeof(v)
for v in self.__dict__.values()
if v is not None and v is not self.custom_properties
)
if self.custom_properties:
total = total + sum(
sys.getsizeof(v) for v in self.custom_properties.values() if v is not None
)
return total
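A hedged usage sketch of `get_size` (byte counts vary by platform and Python build; `custom_properties` is assumed to be the dict initialized by the constructor):

```python
import sys

msg = Message("temperature: 21.5")
msg.custom_properties["alert"] = "none"

# get_size sums sys.getsizeof over the non-None instance attributes
# (excluding the custom_properties dict itself) plus the custom property values.
print(msg.get_size())
print(sys.getsizeof(msg.data))  # one of the terms included in that total
```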


@ -6,4 +6,5 @@ INTERNAL USAGE ONLY
""" """
from .iothub_pipeline import IoTHubPipeline from .iothub_pipeline import IoTHubPipeline
from .edge_pipeline import EdgePipeline from .http_pipeline import HTTPPipeline
from .config import IoTHubPipelineConfig


@ -0,0 +1,30 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
from azure.iot.device.common.pipeline.config import BasePipelineConfig
logger = logging.getLogger(__name__)
class IoTHubPipelineConfig(BasePipelineConfig):
"""A class for storing all configurations/options for IoTHub clients in the Azure IoT Python Device Client Library.
"""
def __init__(self, product_info="", **kwargs):
"""Initializer for IoTHubPipelineConfig which passes all unrecognized keyword-args down to BasePipelineConfig
to be evaluated. This stacked options setting is to allow for unique configuration options to exist between the
IoTHub Client and the Provisioning Client, while maintaining a base configuration class with shared config options.
:param str product_info: A custom identification string for the type of device connecting to Azure IoT Hub.
"""
super(IoTHubPipelineConfig, self).__init__(**kwargs)
self.product_info = product_info
# Now, the parameters below are not exposed to the user via kwargs. They need to be set by manipulating the IoTHubPipelineConfig object.
# They are not in the BasePipelineConfig because these do not apply to the provisioning client.
self.blob_upload = False
self.method_invoke = False
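A short usage sketch, assuming the object is built by client-construction code (values are illustrative):

```python
# product_info is accepted as a keyword argument; the blob_upload and
# method_invoke flags are deliberately not kwargs and are set directly
# on the object by the code that knows what kind of client this is.
pipeline_configuration = IoTHubPipelineConfig(product_info="myDevice/1.2.3")
pipeline_configuration.blob_upload = True      # device clients only
pipeline_configuration.method_invoke = False   # only True for Edge-generated module clients
```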


@ -0,0 +1,22 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines an exception surface, exposed as part of the pipeline API"""
# For now, present relevant transport errors as part of the Pipeline API surface
# so that they do not have to be duplicated at this layer.
from azure.iot.device.common.pipeline.pipeline_exceptions import *
from azure.iot.device.common.transport_exceptions import (
ConnectionFailedError,
ConnectionDroppedError,
# TODO: UnauthorizedError (the one from transport) should probably not surface out of
# the pipeline due to confusion with the higher level service UnauthorizedError. It
# should probably get turned into some other error instead (e.g. ConnectionFailedError).
# But for now, this is a stopgap.
UnauthorizedError,
ProtocolClientError,
TlsExchangeAuthError,
ProtocolProxyError,
)


@ -0,0 +1,67 @@
def translate_error(sc, reason):
"""
Codes_SRS_NODE_IOTHUB_REST_API_CLIENT_16_012: [Any error object returned by translate_error shall inherit from the generic Error Javascript object and have 3 properties:
- response shall contain the IncomingMessage object returned by the HTTP layer.
- responseBody shall contain the content of the HTTP response.
- message shall contain a human-readable error message.]
"""
message = "Error: {}".format(reason)
if sc == 400:
# translate_error shall return an ArgumentError if the HTTP response status code is 400.
error = "ArgumentError({})".format(message)
elif sc == 401:
# translate_error shall return an UnauthorizedError if the HTTP response status code is 401.
error = "UnauthorizedError({})".format(message)
elif sc == 403:
# translate_error shall return a TooManyDevicesError if the HTTP response status code is 403.
error = "TooManyDevicesError({})".format(message)
elif sc == 404:
if reason == "Device Not Found":
# translate_error shall return a DeviceNotFoundError if the HTTP response status code is 404 and if the error code within the body of the error response is DeviceNotFound.
error = "DeviceNotFoundError({})".format(message)
elif reason == "IoTHub Not Found":
# translate_error shall return an IotHubNotFoundError if the HTTP response status code is 404 and if the error code within the body of the error response is IotHubNotFound.
error = "IotHubNotFoundError({})".format(message)
else:
error = "Error('Not found')"
elif sc == 408:
# translate_error shall return a DeviceTimeoutError if the HTTP response status code is 408.
error = "DeviceTimeoutError({})".format(message)
elif sc == 409:
# translate_error shall return a DeviceAlreadyExistsError if the HTTP response status code is 409.
error = "DeviceAlreadyExistsError({})".format(message)
elif sc == 412:
# translate_error shall return an InvalidEtagError if the HTTP response status code is 412.
error = "InvalidEtagError({})".format(message)
elif sc == 429:
# translate_error shall return a ThrottlingError if the HTTP response status code is 429.
error = "ThrottlingError({})".format(message)
elif sc == 500:
# translate_error shall return an InternalServerError if the HTTP response status code is 500.
error = "InternalServerError({})".format(message)
elif sc == 502:
# translate_error shall return a BadDeviceResponseError if the HTTP response status code is 502.
error = "BadDeviceResponseError({})".format(message)
elif sc == 503:
# translate_error shall return a ServiceUnavailableError if the HTTP response status code is 503.
error = "ServiceUnavailableError({})".format(message)
elif sc == 504:
# translate_error shall return a GatewayTimeoutError if the HTTP response status code is 504.
error = "GatewayTimeoutError({})".format(message)
else:
# If the HTTP error code is unknown, translate_error should return a generic Javascript Error object.
error = "Error({})".format(message)
return error
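For reference, a few illustrative calls against the mapping above:

```python
print(translate_error(404, "Device Not Found"))
# DeviceNotFoundError(Error: Device Not Found)

print(translate_error(429, "Too Many Requests"))
# ThrottlingError(Error: Too Many Requests)

print(translate_error(418, "I'm a teapot"))
# Error(Error: I'm a teapot)
```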


@ -0,0 +1,44 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import six.moves.urllib as urllib
logger = logging.getLogger(__name__)
def get_method_invoke_path(device_id, module_id=None):
"""
:return: The path for invoking methods from one module to a device or module. It is of the format
twins/uri_encode($device_id)/modules/uri_encode($module_id)/methods
"""
if module_id:
return "twins/{device_id}/modules/{module_id}/methods".format(
device_id=urllib.parse.quote_plus(device_id),
module_id=urllib.parse.quote_plus(module_id),
)
else:
return "twins/{device_id}/methods".format(device_id=urllib.parse.quote_plus(device_id))
def get_storage_info_for_blob_path(device_id):
"""
This does not take a module_id since get_storage_info_for_blob_path should only ever be invoked on device clients.
:return: The path for getting the storage sdk credential information from IoT Hub. It is of the format
devices/uri_encode($device_id)/files
"""
return "devices/{}/files".format(urllib.parse.quote_plus(device_id))
def get_notify_blob_upload_status_path(device_id):
"""
This does not take a module_id since get_notify_blob_upload_status_path should only ever be invoked on device clients.
:return: The path for notifying IoT Hub of the blob upload status. It is of the format
devices/uri_encode($device_id)/files/notifications
"""
return "devices/{}/files/notifications".format(urllib.parse.quote_plus(device_id))


@ -0,0 +1,170 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import sys
from azure.iot.device.common.evented_callback import EventedCallback
from azure.iot.device.common.pipeline import (
pipeline_stages_base,
pipeline_ops_base,
pipeline_stages_http,
)
from azure.iot.device.iothub.pipeline import exceptions as pipeline_exceptions
from . import (
constant,
pipeline_stages_iothub,
pipeline_ops_iothub,
pipeline_ops_iothub_http,
pipeline_stages_iothub_http,
)
from azure.iot.device.iothub.auth.x509_authentication_provider import X509AuthenticationProvider
logger = logging.getLogger(__name__)
class HTTPPipeline(object):
"""Pipeline to communicate with Edge.
Uses HTTP.
"""
def __init__(self, auth_provider, pipeline_configuration):
"""
Constructor for instantiating a pipeline adapter object.
:param auth_provider: The authentication provider
:param pipeline_configuration: The configuration generated based on user inputs
"""
self._pipeline = (
pipeline_stages_base.PipelineRootStage(pipeline_configuration=pipeline_configuration)
.append_stage(pipeline_stages_iothub.UseAuthProviderStage())
.append_stage(pipeline_stages_iothub_http.IoTHubHTTPTranslationStage())
.append_stage(pipeline_stages_http.HTTPTransportStage())
)
callback = EventedCallback()
if isinstance(auth_provider, X509AuthenticationProvider):
op = pipeline_ops_iothub.SetX509AuthProviderOperation(
auth_provider=auth_provider, callback=callback
)
else: # Currently everything else goes via this block.
op = pipeline_ops_iothub.SetAuthProviderOperation(
auth_provider=auth_provider, callback=callback
)
self._pipeline.run_op(op)
callback.wait_for_completion()
def invoke_method(self, device_id, method_params, callback, module_id=None):
"""
Send a request to the service to invoke a method on a target device or module.
:param device_id: The target device id
:param method_params: The method parameters to be invoked on the target client
:param callback: callback which is called when request has been fulfilled.
On success, this callback is called with the error=None.
On failure, this callback is called with error set to the cause of the failure.
:param module_id: The target module id
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline invoke_method called")
if not self._pipeline.pipeline_configuration.method_invoke:
# If this parameter is not set, that means that the pipeline was not generated by the edge environment. Method invoke only works for clients generated using the edge environment.
error = pipeline_exceptions.PipelineError(
"invoke_method called, but it is only supported on module clients generated from an edge environment. If you are not using a module generated from an edge environment, you cannot use invoke_method"
)
return callback(error=error)
def on_complete(op, error):
callback(error=error, invoke_method_response=op.method_response)
self._pipeline.run_op(
pipeline_ops_iothub_http.MethodInvokeOperation(
target_device_id=device_id,
target_module_id=module_id,
method_params=method_params,
callback=on_complete,
)
)
def get_storage_info_for_blob(self, blob_name, callback):
"""
Sends a POST request to the IoT Hub service endpoint to retrieve an object that contains information for uploading via the Storage SDK.
:param blob_name: The name of the blob that will be uploaded via the Azure Storage SDK.
:param callback: callback which is called when request has been fulfilled.
On success, this callback is called with the error=None, and the storage_info set to the information JSON received from the service.
On failure, this callback is called with error set to the cause of the failure, and the storage_info=None.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline get_storage_info_for_blob called")
if not self._pipeline.pipeline_configuration.blob_upload:
# If this parameter is not set, that means this is not a device client. Upload to blob is not supported on module clients.
error = pipeline_exceptions.PipelineError(
"get_storage_info_for_blob called, but it is only supported for use with device clients. Ensure you are using a device client."
)
return callback(error=error)
def on_complete(op, error):
callback(error=error, storage_info=op.storage_info)
self._pipeline.run_op(
pipeline_ops_iothub_http.GetStorageInfoOperation(
blob_name=blob_name, callback=on_complete
)
)
def notify_blob_upload_status(
self, correlation_id, is_success, status_code, status_description, callback
):
"""
Sends a POST request to a IoT Hub service endpoint to notify the status of the Storage SDK call for a blob upload.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
:param callback: callback which is called when request has been fulfilled.
On success, this callback is called with the error=None.
On failure, this callback is called with error set to the cause of the failure.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline notify_blob_upload_status called")
if not self._pipeline.pipeline_configuration.blob_upload:
# If this parameter is not set, that means this is not a device client. Upload to blob is not supported on module clients.
error = pipeline_exceptions.PipelineError(
"notify_blob_upload_status called, but it is only supported for use with device clients. Ensure you are using a device client."
)
return callback(error=error)
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_iothub_http.NotifyBlobUploadStatusOperation(
correlation_id=correlation_id,
is_success=is_success,
status_code=status_code,
status_description=status_description,
callback=on_complete,
)
)
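These methods report success or failure only through the `error` parameter of the supplied callback; a minimal sketch of bridging that convention to a blocking call with a threading event (a rough approximation of what `EventedCallback` provides for the client layer, not its actual implementation):

```python
import threading


def get_storage_info_sync(http_pipeline, blob_name):
    """Illustrative only: wait synchronously for the callback-style API."""
    done = threading.Event()
    result = {"error": None, "storage_info": None}

    def on_done(error=None, storage_info=None):
        result["error"] = error
        result["storage_info"] = storage_info
        done.set()

    http_pipeline.get_storage_info_for_blob(blob_name, callback=on_done)
    done.wait()
    if result["error"]:
        raise result["error"]
    return result["storage_info"]
```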


@ -25,11 +25,13 @@ logger = logging.getLogger(__name__)
class IoTHubPipeline(object): class IoTHubPipeline(object):
def __init__(self, auth_provider): def __init__(self, auth_provider, pipeline_configuration):
""" """
Constructor for instantiating a pipeline adapter object Constructor for instantiating a pipeline adapter object
:param auth_provider: The authentication provider :param auth_provider: The authentication provider
:param pipeline_configuration: The configuration generated based on user inputs
""" """
self.feature_enabled = { self.feature_enabled = {
constant.C2D_MSG: False, constant.C2D_MSG: False,
constant.INPUT_MSG: False, constant.INPUT_MSG: False,
@ -46,14 +48,70 @@ class IoTHubPipeline(object):
self.on_method_request_received = None self.on_method_request_received = None
self.on_twin_patch_received = None self.on_twin_patch_received = None
# Currently a single timeout stage and a single retry stage for MQTT retry only.
# Later, a higher level timeout and a higher level retry stage.
self._pipeline = ( self._pipeline = (
pipeline_stages_base.PipelineRootStage() #
# The root is always the root. By definition, it's the first stage in the pipeline.
#
pipeline_stages_base.PipelineRootStage(pipeline_configuration=pipeline_configuration)
#
# UseAuthProviderStage comes near the root by default because it doesn't need to be after
# anything, but it does need to be before IoTHubMQTTTranslationStage.
#
.append_stage(pipeline_stages_iothub.UseAuthProviderStage()) .append_stage(pipeline_stages_iothub.UseAuthProviderStage())
.append_stage(pipeline_stages_iothub.HandleTwinOperationsStage()) #
# TwinRequestResponseStage comes near the root by default because it doesn't need to be
# after anything
#
.append_stage(pipeline_stages_iothub.TwinRequestResponseStage())
#
# CoordinateRequestAndResponseStage needs to be after TwinRequestResponseStage because
# TwinRequestResponseStage creates the request ops that CoordinateRequestAndResponseStage
# is coordinating. It needs to be before IoTHubMQTTTranslationStage because that stage
# operates on ops that CoordinateRequestAndResponseStage produces
#
.append_stage(pipeline_stages_base.CoordinateRequestAndResponseStage()) .append_stage(pipeline_stages_base.CoordinateRequestAndResponseStage())
.append_stage(pipeline_stages_iothub_mqtt.IoTHubMQTTConverterStage()) #
.append_stage(pipeline_stages_base.EnsureConnectionStage()) # IoTHubMQTTTranslationStage comes here because this is the point where we can translate
.append_stage(pipeline_stages_base.SerializeConnectOpsStage()) # all operations directly into MQTT. After this stage, only pipeline_stages_base stages
# are allowed because IoTHubMQTTTranslationStage removes all the IoTHub-ness from the ops
#
.append_stage(pipeline_stages_iothub_mqtt.IoTHubMQTTTranslationStage())
#
# AutoConnectStage comes here because only MQTT ops have the need_connection flag set
# and this is the first place in the pipeline where we can guarantee that all network
# ops are MQTT ops.
#
.append_stage(pipeline_stages_base.AutoConnectStage())
#
# ReconnectStage needs to be after AutoConnectStage because ReconnectStage sets/clears
# the virtually_connected flag and we want an automatic connection op to set this flag so
# we can reconnect autoconnect operations. This is important, for example, if a
# send_message causes the transport to automatically connect, but that connection fails.
# When that happens, the ReconnectStage will hold onto the ConnectOperation until it
# succeeds, and only then will return success to the AutoConnectStage which will
# allow the publish to continue.
#
.append_stage(pipeline_stages_base.ReconnectStage())
#
# ConnectionLockStage needs to be after ReconnectStage because we want any ops that
# ReconnectStage creates to go through the ConnectionLockStage gate
#
.append_stage(pipeline_stages_base.ConnectionLockStage())
#
# RetryStage needs to be near the end because it's retrying low-level MQTT operations.
#
.append_stage(pipeline_stages_base.RetryStage())
#
# OpTimeoutStage needs to be after RetryStage because OpTimeoutStage returns the timeout
# errors that RetryStage is watching for.
#
.append_stage(pipeline_stages_base.OpTimeoutStage())
#
# MQTTTransportStage needs to be at the very end of the pipeline because this is where
# operations turn into network traffic
#
.append_stage(pipeline_stages_mqtt.MQTTTransportStage()) .append_stage(pipeline_stages_mqtt.MQTTTransportStage())
) )
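The stage-ordering comments above depend on `append_stage` chaining stages into a linked list; a simplified sketch of that builder pattern (not the SDK's actual implementation):

```python
class Stage(object):
    """Illustrative only: each stage forwards operations to the next one."""

    def __init__(self, name):
        self.name = name
        self.next = None

    def append_stage(self, new_stage):
        # Attach new_stage at the end of the chain, then return the root so
        # calls can be chained fluently as in the pipeline constructor above.
        last = self
        while last.next:
            last = last.next
        last.next = new_stage
        return self

    def run_op(self, op):
        print("{} handling {}".format(self.name, op))
        if self.next:
            self.next.run_op(op)


root = Stage("root").append_stage(Stage("auth")).append_stage(Stage("mqtt"))
root.run_op("ConnectOperation")
```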
@ -110,23 +168,25 @@ class IoTHubPipeline(object):
self._pipeline.run_op(op) self._pipeline.run_op(op)
callback.wait_for_completion() callback.wait_for_completion()
if op.error:
logger.error("{} failed: {}".format(op.name, op.error))
raise op.error
def connect(self, callback): def connect(self, callback):
""" """
Connect to the service. Connect to the service.
:param callback: callback which is called when the connection to the service is complete. :param callback: callback which is called when the connection to the service is complete.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
logger.debug("Starting ConnectOperation on the pipeline") logger.debug("Starting ConnectOperation on the pipeline")
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op(pipeline_ops_base.ConnectOperation(callback=on_complete)) self._pipeline.run_op(pipeline_ops_base.ConnectOperation(callback=on_complete))
@ -135,14 +195,16 @@ class IoTHubPipeline(object):
Disconnect from the service. Disconnect from the service.
:param callback: callback which is called when the connection to the service has been disconnected :param callback: callback which is called when the connection to the service has been disconnected
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
logger.debug("Starting DisconnectOperation on the pipeline") logger.debug("Starting DisconnectOperation on the pipeline")
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op(pipeline_ops_base.DisconnectOperation(callback=on_complete)) self._pipeline.run_op(pipeline_ops_base.DisconnectOperation(callback=on_complete))
@ -152,13 +214,18 @@ class IoTHubPipeline(object):
:param message: message to send. :param message: message to send.
:param callback: callback which is called when the message publish has been acknowledged by the service. :param callback: callback which is called when the message publish has been acknowledged by the service.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_iothub.SendD2CMessageOperation(message=message, callback=on_complete) pipeline_ops_iothub.SendD2CMessageOperation(message=message, callback=on_complete)
@ -170,13 +237,18 @@ class IoTHubPipeline(object):
:param message: message to send. :param message: message to send.
:param callback: callback which is called when the message publish has been acknowledged by the service. :param callback: callback which is called when the message publish has been acknowledged by the service.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_iothub.SendOutputEventOperation(message=message, callback=on_complete) pipeline_ops_iothub.SendOutputEventOperation(message=message, callback=on_complete)
@ -188,14 +260,19 @@ class IoTHubPipeline(object):
:param method_response: the method response to send :param method_response: the method response to send
:param callback: callback which is called when response has been acknowledged by the service :param callback: callback which is called when response has been acknowledged by the service
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
logger.debug("IoTHubPipeline send_method_response called") logger.debug("IoTHubPipeline send_method_response called")
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_iothub.SendMethodResponseOperation( pipeline_ops_iothub.SendMethodResponseOperation(
@ -208,12 +285,22 @@ class IoTHubPipeline(object):
Send a request for a full twin to the service. Send a request for a full twin to the service.
:param callback: callback which is called when request has been acknowledged by the service. :param callback: callback which is called when request has been acknowledged by the service.
This callback should have one parameter, which will contain the requested twin when called. This callback should have two parameters. On success, this callback is called with the
requested twin and error=None. On failure, this callback is called with None for the requested
twin and error set to the cause of the failure.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
def on_complete(op): def on_complete(op, error):
if op.error: if error:
callback(error=op.error, twin=None) callback(error=error, twin=None)
else: else:
callback(twin=op.twin) callback(twin=op.twin)
@ -225,13 +312,18 @@ class IoTHubPipeline(object):
:param patch: the reported properties patch to send :param patch: the reported properties patch to send
:param callback: callback which is called when request has been acknowledged by the service. :param callback: callback which is called when request has been acknowledged by the service.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_iothub.PatchTwinReportedPropertiesOperation( pipeline_ops_iothub.PatchTwinReportedPropertiesOperation(
@ -253,11 +345,8 @@ class IoTHubPipeline(object):
raise ValueError("Invalid feature_name") raise ValueError("Invalid feature_name")
self.feature_enabled[feature_name] = True self.feature_enabled[feature_name] = True
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_base.EnableFeatureOperation( pipeline_ops_base.EnableFeatureOperation(
@ -279,14 +368,18 @@ class IoTHubPipeline(object):
raise ValueError("Invalid feature_name") raise ValueError("Invalid feature_name")
self.feature_enabled[feature_name] = False self.feature_enabled[feature_name] = False
def on_complete(op): def on_complete(op, error):
if op.error: callback(error=error)
callback(error=op.error)
else:
callback()
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_base.DisableFeatureOperation( pipeline_ops_base.DisableFeatureOperation(
feature_name=feature_name, callback=on_complete feature_name=feature_name, callback=on_complete
) )
) )
@property
def connected(self):
"""
Read-only property to indicate if the transport is connected or not.
"""
return self._pipeline.connected


@ -18,7 +18,7 @@ class SetX509AuthProviderOperation(PipelineOperation):
very IoTHub-specific very IoTHub-specific
""" """
def __init__(self, auth_provider, callback=None): def __init__(self, auth_provider, callback):
""" """
Initializer for SetAuthProviderOperation objects. Initializer for SetAuthProviderOperation objects.
@ -42,7 +42,7 @@ class SetAuthProviderOperation(PipelineOperation):
very IoTHub-specific very IoTHub-specific
""" """
def __init__(self, auth_provider, callback=None): def __init__(self, auth_provider, callback):
""" """
Initializer for SetAuthProviderOperation objects. Initializer for SetAuthProviderOperation objects.
@ -69,12 +69,12 @@ class SetIoTHubConnectionArgsOperation(PipelineOperation):
self, self,
device_id, device_id,
hostname, hostname,
callback,
module_id=None, module_id=None,
gateway_hostname=None, gateway_hostname=None,
ca_cert=None, server_verification_cert=None,
client_cert=None, client_cert=None,
sas_token=None, sas_token=None,
callback=None,
): ):
""" """
Initializer for SetIoTHubConnectionArgsOperation objects. Initializer for SetIoTHubConnectionArgsOperation objects.
@ -85,8 +85,8 @@ class SetIoTHubConnectionArgsOperation(PipelineOperation):
for the module we are connecting. for the module we are connecting.
:param str gateway_hostname: (optional) If we are going through a gateway host, this is the :param str gateway_hostname: (optional) If we are going through a gateway host, this is the
hostname for the gateway hostname for the gateway
:param str ca_cert: (Optional) The CA certificate to use if the server that we're going to :param str server_verification_cert: (Optional) The server verification certificate to use
connect to uses server-side TLS if the server that we're going to connect to uses server-side TLS
:param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect :param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect
to the service to the service
:param str sas_token: The token string which will be used to authenticate with the service :param str sas_token: The token string which will be used to authenticate with the service
@ -99,7 +99,7 @@ class SetIoTHubConnectionArgsOperation(PipelineOperation):
self.module_id = module_id self.module_id = module_id
self.hostname = hostname self.hostname = hostname
self.gateway_hostname = gateway_hostname self.gateway_hostname = gateway_hostname
self.ca_cert = ca_cert self.server_verification_cert = server_verification_cert
self.client_cert = client_cert self.client_cert = client_cert
self.sas_token = sas_token self.sas_token = sas_token
@ -111,7 +111,7 @@ class SendD2CMessageOperation(PipelineOperation):
This operation is in the group of IoTHub operations because it is very specific to the IoTHub client This operation is in the group of IoTHub operations because it is very specific to the IoTHub client
""" """
def __init__(self, message, callback=None): def __init__(self, message, callback):
""" """
Initializer for SendD2CMessageOperation objects. Initializer for SendD2CMessageOperation objects.
@ -131,7 +131,7 @@ class SendOutputEventOperation(PipelineOperation):
This operation is in the group of IoTHub operations because it is very specific to the IoTHub client This operation is in the group of IoTHub operations because it is very specific to the IoTHub client
""" """
def __init__(self, message, callback=None): def __init__(self, message, callback):
""" """
Initializer for SendOutputEventOperation objects. Initializer for SendOutputEventOperation objects.
@ -152,7 +152,7 @@ class SendMethodResponseOperation(PipelineOperation):
This operation is in the group of IoTHub operations because it is very specific to the IoTHub client. This operation is in the group of IoTHub operations because it is very specific to the IoTHub client.
""" """
def __init__(self, method_response, callback=None): def __init__(self, method_response, callback):
""" """
Initializer for SendMethodResponseOperation objects. Initializer for SendMethodResponseOperation objects.
@ -176,7 +176,7 @@ class GetTwinOperation(PipelineOperation):
:type twin: Twin :type twin: Twin
""" """
def __init__(self, callback=None): def __init__(self, callback):
""" """
Initializer for GetTwinOperation objects. Initializer for GetTwinOperation objects.
""" """
@ -190,7 +190,7 @@ class PatchTwinReportedPropertiesOperation(PipelineOperation):
IoT Hub or Azure IoT Edge Hub service. IoT Hub or Azure IoT Edge Hub service.
""" """
def __init__(self, patch, callback=None): def __init__(self, patch, callback):
""" """
Initializer for PatchTwinReportedPropertiesOperation object Initializer for PatchTwinReportedPropertiesOperation object


@ -0,0 +1,79 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from azure.iot.device.common.pipeline import PipelineOperation
class MethodInvokeOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to send a method invoke to an IoTHub or EdgeHub server.
This operation is in the group of EdgeHub operations because it is very specific to the EdgeHub client.
"""
def __init__(self, target_device_id, target_module_id, method_params, callback):
"""
Initializer for MethodInvokeOperation objects.
:param str target_device_id: The device id of the target device/module
:param str target_module_id: The module id of the target module
:param method_params: The parameters used to invoke the method, as defined by the IoT Hub specification.
:param callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation that
has completed or failed.
:type callback: Function/callable
"""
super(MethodInvokeOperation, self).__init__(callback=callback)
self.target_device_id = target_device_id
self.target_module_id = target_module_id
self.method_params = method_params
self.method_response = None
class GetStorageInfoOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to get the storage information from IoT Hub.
"""
def __init__(self, blob_name, callback):
"""
Initializer for GetStorageInfo objects.
:param str blob_name: The name of the blob that will be created in Azure Storage
:param callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation that
has completed or failed.
:type callback: Function/callable
:ivar dict storage_info: Upon completion, this contains the storage information which was retrieved from the service.
"""
super(GetStorageInfoOperation, self).__init__(callback=callback)
self.blob_name = blob_name
self.storage_info = None
class NotifyBlobUploadStatusOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to notify IoT Hub of the status of a blob upload.
"""
def __init__(self, correlation_id, is_success, status_code, status_description, callback):
"""
Initializer for NotifyBlobUploadStatusOperation objects.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
:param callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation that
has completed or failed.
:type callback: Function/callable
"""
super(NotifyBlobUploadStatusOperation, self).__init__(callback=callback)
self.correlation_id = correlation_id
self.is_success = is_success
self.request_status_code = status_code
self.status_description = status_description
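An illustrative construction of one of these operations (the ids and method payload shape are assumptions for the example; the pipeline is what actually completes the op and fills in `method_response`):

```python
def on_invoke_done(op, error):
    # The pipeline sets op.method_response before completing the operation.
    if error:
        print("method invoke failed: {}".format(error))
    else:
        print("method invoke response: {}".format(op.method_response))


op = MethodInvokeOperation(
    target_device_id="leafDevice01",   # hypothetical device id
    target_module_id=None,
    method_params={"methodName": "reboot", "payload": {}, "responseTimeoutInSeconds": 30},
    callback=on_invoke_done,
)
```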


@ -6,13 +6,10 @@
import json import json
import logging import logging
from azure.iot.device.common.pipeline import ( from azure.iot.device.common.pipeline import pipeline_ops_base, PipelineStage, pipeline_thread
pipeline_ops_base, from azure.iot.device import exceptions
PipelineStage, from azure.iot.device.common import handle_exceptions
operation_flow, from azure.iot.device.common.callable_weak_method import CallableWeakMethod
pipeline_thread,
)
from azure.iot.device.common import unhandled_exceptions
from . import pipeline_ops_iothub from . import pipeline_ops_iothub
from . import constant from . import constant
@ -32,138 +29,146 @@ class UseAuthProviderStage(PipelineStage):
""" """
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
if isinstance(op, pipeline_ops_iothub.SetAuthProviderOperation): if isinstance(op, pipeline_ops_iothub.SetAuthProviderOperation):
self.auth_provider = op.auth_provider self.auth_provider = op.auth_provider
self.auth_provider.on_sas_token_updated_handler = self.on_sas_token_updated # Here we append rather than just add it to the handler value because otherwise it
operation_flow.delegate_to_different_op( # would overwrite the handler from another pipeline that might be using the same auth provider.
stage=self, self.auth_provider.on_sas_token_updated_handler_list.append(
original_op=op, CallableWeakMethod(self, "_on_sas_token_updated")
new_op=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation( )
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation,
device_id=self.auth_provider.device_id, device_id=self.auth_provider.device_id,
module_id=getattr(self.auth_provider, "module_id", None), module_id=self.auth_provider.module_id,
hostname=self.auth_provider.hostname, hostname=self.auth_provider.hostname,
gateway_hostname=getattr(self.auth_provider, "gateway_hostname", None), gateway_hostname=getattr(self.auth_provider, "gateway_hostname", None),
ca_cert=getattr(self.auth_provider, "ca_cert", None), server_verification_cert=getattr(
sas_token=self.auth_provider.get_current_sas_token(), self.auth_provider, "server_verification_cert", None
), ),
sas_token=self.auth_provider.get_current_sas_token(),
) )
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_iothub.SetX509AuthProviderOperation): elif isinstance(op, pipeline_ops_iothub.SetX509AuthProviderOperation):
self.auth_provider = op.auth_provider self.auth_provider = op.auth_provider
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation,
original_op=op,
new_op=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation(
device_id=self.auth_provider.device_id, device_id=self.auth_provider.device_id,
module_id=getattr(self.auth_provider, "module_id", None), module_id=self.auth_provider.module_id,
hostname=self.auth_provider.hostname, hostname=self.auth_provider.hostname,
gateway_hostname=getattr(self.auth_provider, "gateway_hostname", None), gateway_hostname=getattr(self.auth_provider, "gateway_hostname", None),
ca_cert=getattr(self.auth_provider, "ca_cert", None), server_verification_cert=getattr(
client_cert=self.auth_provider.get_x509_certificate(), self.auth_provider, "server_verification_cert", None
), ),
client_cert=self.auth_provider.get_x509_certificate(),
) )
self.send_op_down(worker_op)
else: else:
operation_flow.pass_op_to_next_stage(self, op) super(UseAuthProviderStage, self)._run_op(op)
@pipeline_thread.invoke_on_pipeline_thread_nowait @pipeline_thread.invoke_on_pipeline_thread_nowait
def on_sas_token_updated(self): def _on_sas_token_updated(self):
logger.info( logger.info(
"{}: New sas token received. Passing down UpdateSasTokenOperation.".format(self.name) "{}: New sas token received. Passing down UpdateSasTokenOperation.".format(self.name)
) )
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def on_token_update_complete(op): def on_token_update_complete(op, error):
if op.error: if error:
logger.error( logger.error(
"{}({}): token update operation failed. Error={}".format( "{}({}): token update operation failed. Error={}".format(
self.name, op.name, op.error self.name, op.name, error
) )
) )
unhandled_exceptions.exception_caught_in_background_thread(op.error) handle_exceptions.handle_background_exception(error)
else: else:
logger.debug( logger.debug(
"{}({}): token update operation is complete".format(self.name, op.name) "{}({}): token update operation is complete".format(self.name, op.name)
) )
operation_flow.pass_op_to_next_stage( self.send_op_down(
stage=self, pipeline_ops_base.UpdateSasTokenOperation(
op=pipeline_ops_base.UpdateSasTokenOperation(
sas_token=self.auth_provider.get_current_sas_token(), sas_token=self.auth_provider.get_current_sas_token(),
callback=on_token_update_complete, callback=on_token_update_complete,
), )
) )
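`CallableWeakMethod(self, "_on_sas_token_updated")` lets the auth provider's handler list call back into this stage without keeping the stage alive; a minimal approximation of such a wrapper (not the SDK's actual implementation):

```python
import weakref


class WeakMethodCaller(object):
    """Illustrative only: call a named method through a weak reference,
    silently doing nothing once the target object has been collected."""

    def __init__(self, obj, method_name):
        self._obj_ref = weakref.ref(obj)
        self._method_name = method_name

    def __call__(self, *args, **kwargs):
        obj = self._obj_ref()
        if obj is not None:
            return getattr(obj, self._method_name)(*args, **kwargs)
```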
class HandleTwinOperationsStage(PipelineStage): class TwinRequestResponseStage(PipelineStage):
""" """
PipelineStage which handles twin operations. In particular, it converts twin GET and PATCH PipelineStage which handles twin operations. In particular, it converts twin GET and PATCH
operations into SendIotRequestAndWaitForResponseOperation operations. This is done at the IoTHub level because operations into RequestAndResponseOperation operations. This is done at the IoTHub level because
there is nothing protocol-specific about this code. The protocol-specific implementation there is nothing protocol-specific about this code. The protocol-specific implementation
for twin requests and responses is handled inside IoTHubMQTTConverterStage, when it converts for twin requests and responses is handled inside IoTHubMQTTTranslationStage, when it converts
the SendIotRequestOperation to a protocol-specific send operation and when it converts the the RequestOperation to a protocol-specific send operation and when it converts the
protocol-specific receive event into an IotResponseEvent event. protocol-specific receive event into an ResponseEvent event.
""" """
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
def map_twin_error(original_op, twin_op): def map_twin_error(error, twin_op):
if twin_op.error: if error:
original_op.error = twin_op.error return error
elif twin_op.status_code >= 300: elif twin_op.status_code >= 300:
# TODO map error codes to correct exceptions # TODO map error codes to correct exceptions
logger.error("Error {} received from twin operation".format(twin_op.status_code)) logger.error("Error {} received from twin operation".format(twin_op.status_code))
logger.error("response body: {}".format(twin_op.response_body)) logger.error("response body: {}".format(twin_op.response_body))
original_op.error = Exception( return exceptions.ServiceError(
"twin operation returned status {}".format(twin_op.status_code) "twin operation returned status {}".format(twin_op.status_code)
) )
if isinstance(op, pipeline_ops_iothub.GetTwinOperation): if isinstance(op, pipeline_ops_iothub.GetTwinOperation):
def on_twin_response(twin_op): # Alias to avoid overload within the callback below
logger.debug("{}({}): Got response for GetTwinOperation".format(self.name, op.name)) # CT-TODO: remove the need for this with better callback semantics
map_twin_error(original_op=op, twin_op=twin_op) op_waiting_for_response = op
if not twin_op.error:
op.twin = json.loads(twin_op.response_body.decode("utf-8"))
operation_flow.complete_op(self, op)
operation_flow.pass_op_to_next_stage( def on_twin_response(op, error):
self, logger.debug("{}({}): Got response for GetTwinOperation".format(self.name, op.name))
pipeline_ops_base.SendIotRequestAndWaitForResponseOperation( error = map_twin_error(error=error, twin_op=op)
if not error:
op_waiting_for_response.twin = json.loads(op.response_body.decode("utf-8"))
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.TWIN, request_type=constant.TWIN,
method="GET", method="GET",
resource_location="/", resource_location="/",
request_body=" ", request_body=" ",
callback=on_twin_response, callback=on_twin_response,
), )
) )
elif isinstance(op, pipeline_ops_iothub.PatchTwinReportedPropertiesOperation): elif isinstance(op, pipeline_ops_iothub.PatchTwinReportedPropertiesOperation):
def on_twin_response(twin_op): # Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_waiting_for_response = op
def on_twin_response(op, error):
logger.debug( logger.debug(
"{}({}): Got response for PatchTwinReportedPropertiesOperation operation".format( "{}({}): Got response for PatchTwinReportedPropertiesOperation operation".format(
self.name, op.name self.name, op.name
) )
) )
map_twin_error(original_op=op, twin_op=twin_op) error = map_twin_error(error=error, twin_op=op)
operation_flow.complete_op(self, op) op_waiting_for_response.complete(error=error)
logger.debug( logger.debug(
"{}({}): Sending reported properties patch: {}".format(self.name, op.name, op.patch) "{}({}): Sending reported properties patch: {}".format(self.name, op.name, op.patch)
) )
operation_flow.pass_op_to_next_stage( self.send_op_down(
self, pipeline_ops_base.RequestAndResponseOperation(
(
pipeline_ops_base.SendIotRequestAndWaitForResponseOperation(
request_type=constant.TWIN, request_type=constant.TWIN,
method="PATCH", method="PATCH",
resource_location="/properties/reported/", resource_location="/properties/reported/",
request_body=json.dumps(op.patch), request_body=json.dumps(op.patch),
callback=on_twin_response, callback=on_twin_response,
) )
),
) )
else: else:
operation_flow.pass_op_to_next_stage(self, op) super(TwinRequestResponseStage, self)._run_op(op)
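The stage above depends on a simple request/response correlation: each outgoing RequestAndResponseOperation carries a generated request id, and the ResponseEvent that later travels back up the pipeline is paired to the waiting operation by that id. The snippet below is a minimal, self-contained sketch of that pattern; the names (PendingRequests, send, handle_response) are made up for illustration and are not part of the SDK.

```python
# Illustrative only: a toy version of the request/response matching performed by the
# pipeline's request/response coordination logic. All names here are hypothetical.
import uuid


class PendingRequests(object):
    def __init__(self):
        self._callbacks = {}  # request_id -> callback waiting for a response

    def send(self, callback):
        """Register a callback and return the request id to embed in the outgoing topic."""
        request_id = str(uuid.uuid4())
        self._callbacks[request_id] = callback
        return request_id

    def handle_response(self, request_id, status_code, response_body):
        """Match an incoming response back to the operation that is waiting for it."""
        callback = self._callbacks.pop(request_id, None)
        if callback is not None:
            callback(status_code, response_body)


pending = PendingRequests()
rid = pending.send(lambda status, body: print("twin response:", status, body))
# ...later, when a response arrives on e.g. $iothub/twin/res/200/?$rid=<rid>:
pending.handle_response(rid, 200, b'{"desired": {}, "reported": {}}')
```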
@ -0,0 +1,225 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import json
import six.moves.urllib as urllib
from azure.iot.device.common.pipeline import (
pipeline_events_base,
pipeline_ops_base,
pipeline_ops_http,
PipelineStage,
pipeline_thread,
)
from . import pipeline_ops_iothub, pipeline_ops_iothub_http, http_path_iothub, http_map_error
from azure.iot.device import exceptions
from azure.iot.device import constant as pkg_constant
from azure.iot.device.product_info import ProductInfo
logger = logging.getLogger(__name__)
@pipeline_thread.runs_on_pipeline_thread
def map_http_error(error, http_op):
if error:
return error
elif http_op.status_code >= 300:
translated_error = http_map_error.translate_error(http_op.status_code, http_op.reason)
return exceptions.ServiceError(
"HTTP operation returned: {} {}".format(http_op.status_code, translated_error)
)
class IoTHubHTTPTranslationStage(PipelineStage):
"""
PipelineStage which converts other Iot and EdgeHub operations into HTTP operations. This stage also
converts http pipeline events into Iot and EdgeHub pipeline events.
"""
def __init__(self):
super(IoTHubHTTPTranslationStage, self).__init__()
self.device_id = None
self.module_id = None
self.hostname = None
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_iothub.SetIoTHubConnectionArgsOperation):
self.device_id = op.device_id
self.module_id = op.module_id
if op.gateway_hostname:
logger.debug(
"Gateway Hostname Present. Setting Hostname to: {}".format(op.gateway_hostname)
)
self.hostname = op.gateway_hostname
else:
logger.debug(
"Gateway Hostname not present. Setting Hostname to: {}".format(
op.gateway_hostname
)
)
self.hostname = op.hostname
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_http.SetHTTPConnectionArgsOperation,
hostname=self.hostname,
server_verification_cert=op.server_verification_cert,
client_cert=op.client_cert,
sas_token=op.sas_token,
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_iothub_http.MethodInvokeOperation):
logger.debug(
"{}({}): Translating Method Invoke Operation for HTTP.".format(self.name, op.name)
)
query_params = "api-version={apiVersion}".format(
apiVersion=pkg_constant.IOTHUB_API_VERSION
)
# if the target is a module.
body = json.dumps(op.method_params)
path = http_path_iothub.get_method_invoke_path(op.target_device_id, op.target_module_id)
# Note we do not add the sas Authorization header here. Instead we add it later on in the stage above
# the transport layer, since that stage stores the updated SAS and also X509 certs if that is what is
# being used.
x_ms_edge_string = "{deviceId}/{moduleId}".format(
deviceId=self.device_id, moduleId=self.module_id
) # these are the identifiers of the current module
user_agent = urllib.parse.quote_plus(
ProductInfo.get_iothub_user_agent()
+ str(self.pipeline_root.pipeline_configuration.product_info)
)
headers = {
"Host": self.hostname,
"Content-Type": "application/json",
"Content-Length": len(str(body)),
"x-ms-edge-moduleId": x_ms_edge_string,
"User-Agent": user_agent,
}
op_waiting_for_response = op
def on_request_response(op, error):
logger.debug(
"{}({}): Got response for MethodInvokeOperation".format(self.name, op.name)
)
error = map_http_error(error=error, http_op=op)
if not error:
op_waiting_for_response.method_response = json.loads(
op.response_body.decode("utf-8")
)
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_http.HTTPRequestAndResponseOperation(
method="POST",
path=path,
headers=headers,
body=body,
query_params=query_params,
callback=on_request_response,
)
)
elif isinstance(op, pipeline_ops_iothub_http.GetStorageInfoOperation):
logger.debug(
"{}({}): Translating Get Storage Info Operation to HTTP.".format(self.name, op.name)
)
query_params = "api-version={apiVersion}".format(
apiVersion=pkg_constant.IOTHUB_API_VERSION
)
path = http_path_iothub.get_storage_info_for_blob_path(self.device_id)
body = json.dumps({"blobName": op.blob_name})
user_agent = urllib.parse.quote_plus(
ProductInfo.get_iothub_user_agent()
+ str(self.pipeline_root.pipeline_configuration.product_info)
)
headers = {
"Host": self.hostname,
"Accept": "application/json",
"Content-Type": "application/json",
"Content-Length": len(str(body)),
"User-Agent": user_agent,
}
op_waiting_for_response = op
def on_request_response(op, error):
logger.debug(
"{}({}): Got response for GetStorageInfoOperation".format(self.name, op.name)
)
error = map_http_error(error=error, http_op=op)
if not error:
op_waiting_for_response.storage_info = json.loads(
op.response_body.decode("utf-8")
)
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_http.HTTPRequestAndResponseOperation(
method="POST",
path=path,
headers=headers,
body=body,
query_params=query_params,
callback=on_request_response,
)
)
elif isinstance(op, pipeline_ops_iothub_http.NotifyBlobUploadStatusOperation):
logger.debug(
"{}({}): Translating Get Storage Info Operation to HTTP.".format(self.name, op.name)
)
query_params = "api-version={apiVersion}".format(
apiVersion=pkg_constant.IOTHUB_API_VERSION
)
path = http_path_iothub.get_notify_blob_upload_status_path(self.device_id)
body = json.dumps(
{
"correlationId": op.correlation_id,
"isSuccess": op.is_success,
"statusCode": op.request_status_code,
"statusDescription": op.status_description,
}
)
user_agent = urllib.parse.quote_plus(
ProductInfo.get_iothub_user_agent()
+ str(self.pipeline_root.pipeline_configuration.product_info)
)
# Note we do not add the sas Authorization header here. Instead we add it later on in the stage above
# the transport layer, since that stage stores the updated SAS and also X509 certs if that is what is
# being used.
headers = {
"Host": self.hostname,
"Content-Type": "application/json; charset=utf-8",
"Content-Length": len(str(body)),
"User-Agent": user_agent,
}
op_waiting_for_response = op
def on_request_response(op, error):
logger.debug(
"{}({}): Got response for GetStorageInfoOperation".format(self.name, op.name)
)
error = map_http_error(error=error, http_op=op)
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_http.HTTPRequestAndResponseOperation(
method="POST",
path=path,
headers=headers,
body=body,
query_params=query_params,
callback=on_request_response,
)
)
else:
# All other operations get passed down
self.send_op_down(op)
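For orientation, the sketch below shows roughly what the outgoing HTTP request produced for a MethodInvokeOperation might look like. The path, api-version, and header values are placeholders (the real values come from http_path_iothub and pkg_constant), so treat this purely as an illustration of the request shape.

```python
# Hypothetical request shape only; real path/query/header values are produced by the stage above.
import json

method_params = {
    "methodName": "reboot",
    "payload": {"delay": 5},
    "connectTimeoutInSeconds": 10,
    "responseTimeoutInSeconds": 30,
}
body = json.dumps(method_params)

request = {
    "method": "POST",
    "path": "/twins/target-device/modules/target-module/methods",  # placeholder path
    "query_params": "api-version=<IOTHUB_API_VERSION>",
    "headers": {
        "Host": "my-edge-gateway.local",
        "Content-Type": "application/json",
        "Content-Length": len(body),
        "x-ms-edge-moduleId": "my-device/my-module",  # identifiers of the calling module
        "User-Agent": "azure-iot-device/x.y.z",
    },
    "body": body,
}
print(request["method"], request["path"])
```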
@ -13,29 +13,32 @@ from azure.iot.device.common.pipeline import (
pipeline_ops_mqtt, pipeline_ops_mqtt,
pipeline_events_mqtt, pipeline_events_mqtt,
PipelineStage, PipelineStage,
operation_flow,
pipeline_thread, pipeline_thread,
) )
from azure.iot.device.iothub.models import Message, MethodRequest from azure.iot.device.iothub.models import Message, MethodRequest
from . import pipeline_ops_iothub, pipeline_events_iothub, mqtt_topic_iothub from . import pipeline_ops_iothub, pipeline_events_iothub, mqtt_topic_iothub
from . import constant as pipeline_constant from . import constant as pipeline_constant
from . import exceptions as pipeline_exceptions
from azure.iot.device import constant as pkg_constant from azure.iot.device import constant as pkg_constant
from azure.iot.device.product_info import ProductInfo
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
-class IoTHubMQTTConverterStage(PipelineStage):
+class IoTHubMQTTTranslationStage(PipelineStage):
"""
PipelineStage which converts other Iot and IoTHub operations into MQTT operations. This stage also
converts mqtt pipeline events into Iot and IoTHub pipeline events.
"""
def __init__(self): def __init__(self):
super(IoTHubMQTTConverterStage, self).__init__() super(IoTHubMQTTTranslationStage, self).__init__()
self.feature_to_topic = {} self.feature_to_topic = {}
self.device_id = None
self.module_id = None
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op): def _run_op(self, op):
if isinstance(op, pipeline_ops_iothub.SetIoTHubConnectionArgsOperation): if isinstance(op, pipeline_ops_iothub.SetIoTHubConnectionArgsOperation):
self.device_id = op.device_id self.device_id = op.device_id
@ -51,14 +54,21 @@ class IoTHubMQTTConverterStage(PipelineStage):
else: else:
client_id = op.device_id client_id = op.device_id
# For MQTT, the entire user agent string should be appended to the username field in the connect packet
# For example, the username may look like this without custom parameters:
# yosephsandboxhub.azure-devices.net/alpha/?api-version=2018-06-30&DeviceClientType=py-azure-iot-device%2F2.0.0-preview.12
# The customer user agent string would simply be appended to the end of this username, in URL Encoded format.
query_param_seq = [ query_param_seq = [
("api-version", pkg_constant.IOTHUB_API_VERSION), ("api-version", pkg_constant.IOTHUB_API_VERSION),
("DeviceClientType", pkg_constant.USER_AGENT), ("DeviceClientType", ProductInfo.get_iothub_user_agent()),
] ]
username = "{hostname}/{client_id}/?{query_params}".format( username = "{hostname}/{client_id}/?{query_params}{optional_product_info}".format(
hostname=op.hostname, hostname=op.hostname,
client_id=client_id, client_id=client_id,
query_params=urllib.parse.urlencode(query_param_seq), query_params=urllib.parse.urlencode(query_param_seq),
optional_product_info=urllib.parse.quote(
str(self.pipeline_root.pipeline_configuration.product_info)
),
) )
if op.gateway_hostname: if op.gateway_hostname:
@ -67,91 +77,64 @@ class IoTHubMQTTConverterStage(PipelineStage):
hostname = op.hostname hostname = op.hostname
# TODO: test to make sure client_cert and sas_token travel down correctly # TODO: test to make sure client_cert and sas_token travel down correctly
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation,
original_op=op,
new_op=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation(
client_id=client_id, client_id=client_id,
hostname=hostname, hostname=hostname,
username=username, username=username,
ca_cert=op.ca_cert, server_verification_cert=op.server_verification_cert,
client_cert=op.client_cert, client_cert=op.client_cert,
sas_token=op.sas_token, sas_token=op.sas_token,
),
) )
self.send_op_down(worker_op)
elif ( elif (
isinstance(op, pipeline_ops_base.UpdateSasTokenOperation) isinstance(op, pipeline_ops_base.UpdateSasTokenOperation)
and self.pipeline_root.connected and self.pipeline_root.connected
): ):
logger.debug( logger.debug(
"{}({}): Connected. Passing op down and reconnecting after token is updated.".format( "{}({}): Connected. Passing op down and reauthorizing after token is updated.".format(
self.name, op.name self.name, op.name
) )
) )
# make a callback that can call the user's callback after the reconnect is complete
def on_reconnect_complete(reconnect_op):
if reconnect_op.error:
op.error = reconnect_op.error
logger.error(
"{}({}) reconnection failed. returning error {}".format(
self.name, op.name, op.error
)
)
operation_flow.complete_op(stage=self, op=op)
else:
logger.debug(
"{}({}) reconnection succeeded. returning success.".format(
self.name, op.name
)
)
operation_flow.complete_op(stage=self, op=op)
# save the old user callback so we can call it later.
old_callback = op.callback
# make a callback that either fails the UpdateSasTokenOperation (if the lower level failed it), # make a callback that either fails the UpdateSasTokenOperation (if the lower level failed it),
# or issues a ReconnectOperation (if the lower level returned success for the UpdateSasTokenOperation) # or issues a ReauthorizeConnectionOperation (if the lower level returned success for the UpdateSasTokenOperation)
def on_token_update_complete(op): def on_token_update_complete(op, error):
op.callback = old_callback if error:
if op.error:
logger.error( logger.error(
"{}({}) token update failed. returning failure {}".format( "{}({}) token update failed. returning failure {}".format(
self.name, op.name, op.error self.name, op.name, error
) )
) )
operation_flow.complete_op(stage=self, op=op)
else: else:
logger.debug( logger.debug(
"{}({}) token update succeeded. reconnecting".format(self.name, op.name) "{}({}) token update succeeded. reauthorizing".format(self.name, op.name)
) )
operation_flow.pass_op_to_next_stage( # Stop completion of Token Update op, and only continue upon completion of ReauthorizeConnectionOperation
stage=self, op.halt_completion()
op=pipeline_ops_base.ReconnectOperation(callback=on_reconnect_complete), worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_base.ReauthorizeConnectionOperation
) )
logger.debug( self.send_op_down(worker_op)
"{}({}): passing to next stage with updated callback.".format(
self.name, op.name
)
)
# now, pass the UpdateSasTokenOperation down with our new callback. # now, pass the UpdateSasTokenOperation down with our new callback.
op.callback = on_token_update_complete op.add_callback(on_token_update_complete)
operation_flow.pass_op_to_next_stage(stage=self, op=op) self.send_op_down(op)
elif isinstance(op, pipeline_ops_iothub.SendD2CMessageOperation) or isinstance( elif isinstance(op, pipeline_ops_iothub.SendD2CMessageOperation) or isinstance(
op, pipeline_ops_iothub.SendOutputEventOperation op, pipeline_ops_iothub.SendOutputEventOperation
): ):
# Convert SendTelemetry and SendOutputEventOperation operations into MQTT Publish operations
topic = mqtt_topic_iothub.encode_properties(op.message, self.telemetry_topic) topic = mqtt_topic_iothub.encode_properties(op.message, self.telemetry_topic)
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
original_op=op, topic=topic,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(topic=topic, payload=op.message.data), payload=op.message.data,
) )
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_iothub.SendMethodResponseOperation): elif isinstance(op, pipeline_ops_iothub.SendMethodResponseOperation):
# Sending a Method Response gets translated into an MQTT Publish operation # Sending a Method Response gets translated into an MQTT Publish operation
@ -159,52 +142,48 @@ class IoTHubMQTTConverterStage(PipelineStage):
op.method_response.request_id, str(op.method_response.status) op.method_response.request_id, str(op.method_response.status)
) )
payload = json.dumps(op.method_response.payload) payload = json.dumps(op.method_response.payload)
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation, topic=topic, payload=payload
original_op=op,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(topic=topic, payload=payload),
) )
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.EnableFeatureOperation): elif isinstance(op, pipeline_ops_base.EnableFeatureOperation):
# Enabling a feature gets translated into an MQTT subscribe operation # Enabling a feature gets translated into an MQTT subscribe operation
topic = self.feature_to_topic[op.feature_name] topic = self.feature_to_topic[op.feature_name]
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_mqtt.MQTTSubscribeOperation, topic=topic
original_op=op,
new_op=pipeline_ops_mqtt.MQTTSubscribeOperation(topic=topic),
) )
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.DisableFeatureOperation): elif isinstance(op, pipeline_ops_base.DisableFeatureOperation):
# Disabling a feature gets turned into an MQTT unsubscribe operation # Disabling a feature gets turned into an MQTT unsubscribe operation
topic = self.feature_to_topic[op.feature_name] topic = self.feature_to_topic[op.feature_name]
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_mqtt.MQTTUnsubscribeOperation, topic=topic
original_op=op,
new_op=pipeline_ops_mqtt.MQTTUnsubscribeOperation(topic=topic),
) )
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.SendIotRequestOperation): elif isinstance(op, pipeline_ops_base.RequestOperation):
if op.request_type == pipeline_constant.TWIN: if op.request_type == pipeline_constant.TWIN:
topic = mqtt_topic_iothub.get_twin_topic_for_publish( topic = mqtt_topic_iothub.get_twin_topic_for_publish(
method=op.method, method=op.method,
resource_location=op.resource_location, resource_location=op.resource_location,
request_id=op.request_id, request_id=op.request_id,
) )
operation_flow.delegate_to_different_op( worker_op = op.spawn_worker_op(
stage=self, worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
original_op=op, topic=topic,
new_op=pipeline_ops_mqtt.MQTTPublishOperation( payload=op.request_body,
topic=topic, payload=op.request_body
),
) )
self.send_op_down(worker_op)
else: else:
raise NotImplementedError( raise pipeline_exceptions.OperationError(
"SendIotRequestOperation request_type {} not supported".format(op.request_type) "RequestOperation request_type {} not supported".format(op.request_type)
) )
else: else:
# All other operations get passed down # All other operations get passed down
operation_flow.pass_op_to_next_stage(self, op) super(IoTHubMQTTTranslationStage, self)._run_op(op)
@pipeline_thread.runs_on_pipeline_thread @pipeline_thread.runs_on_pipeline_thread
def _set_topic_names(self, device_id, module_id): def _set_topic_names(self, device_id, module_id):
@ -240,17 +219,13 @@ class IoTHubMQTTConverterStage(PipelineStage):
if mqtt_topic_iothub.is_c2d_topic(topic, self.device_id): if mqtt_topic_iothub.is_c2d_topic(topic, self.device_id):
message = Message(event.payload) message = Message(event.payload)
mqtt_topic_iothub.extract_properties_from_topic(topic, message) mqtt_topic_iothub.extract_properties_from_topic(topic, message)
operation_flow.pass_event_to_previous_stage( self.send_event_up(pipeline_events_iothub.C2DMessageEvent(message))
self, pipeline_events_iothub.C2DMessageEvent(message)
)
elif mqtt_topic_iothub.is_input_topic(topic, self.device_id, self.module_id): elif mqtt_topic_iothub.is_input_topic(topic, self.device_id, self.module_id):
message = Message(event.payload) message = Message(event.payload)
mqtt_topic_iothub.extract_properties_from_topic(topic, message) mqtt_topic_iothub.extract_properties_from_topic(topic, message)
input_name = mqtt_topic_iothub.get_input_name_from_topic(topic) input_name = mqtt_topic_iothub.get_input_name_from_topic(topic)
operation_flow.pass_event_to_previous_stage( self.send_event_up(pipeline_events_iothub.InputMessageEvent(input_name, message))
self, pipeline_events_iothub.InputMessageEvent(input_name, message)
)
elif mqtt_topic_iothub.is_method_topic(topic): elif mqtt_topic_iothub.is_method_topic(topic):
request_id = mqtt_topic_iothub.get_method_request_id_from_topic(topic) request_id = mqtt_topic_iothub.get_method_request_id_from_topic(topic)
@ -260,32 +235,28 @@ class IoTHubMQTTConverterStage(PipelineStage):
name=method_name, name=method_name,
payload=json.loads(event.payload.decode("utf-8")), payload=json.loads(event.payload.decode("utf-8")),
) )
operation_flow.pass_event_to_previous_stage( self.send_event_up(pipeline_events_iothub.MethodRequestEvent(method_received))
self, pipeline_events_iothub.MethodRequestEvent(method_received)
)
elif mqtt_topic_iothub.is_twin_response_topic(topic): elif mqtt_topic_iothub.is_twin_response_topic(topic):
request_id = mqtt_topic_iothub.get_twin_request_id_from_topic(topic) request_id = mqtt_topic_iothub.get_twin_request_id_from_topic(topic)
status_code = int(mqtt_topic_iothub.get_twin_status_code_from_topic(topic)) status_code = int(mqtt_topic_iothub.get_twin_status_code_from_topic(topic))
operation_flow.pass_event_to_previous_stage( self.send_event_up(
self, pipeline_events_base.ResponseEvent(
pipeline_events_base.IotResponseEvent(
request_id=request_id, status_code=status_code, response_body=event.payload request_id=request_id, status_code=status_code, response_body=event.payload
), )
) )
elif mqtt_topic_iothub.is_twin_desired_property_patch_topic(topic): elif mqtt_topic_iothub.is_twin_desired_property_patch_topic(topic):
operation_flow.pass_event_to_previous_stage( self.send_event_up(
self,
pipeline_events_iothub.TwinDesiredPropertiesPatchEvent( pipeline_events_iothub.TwinDesiredPropertiesPatchEvent(
patch=json.loads(event.payload.decode("utf-8")) patch=json.loads(event.payload.decode("utf-8"))
), )
) )
else: else:
logger.debug("Uunknown topic: {} passing up to next handler".format(topic)) logger.debug("Unknown topic: {} passing up to next handler".format(topic))
operation_flow.pass_event_to_previous_stage(self, event) self.send_event_up(event)
else: else:
# all other messages get passed up # all other messages get passed up
operation_flow.pass_event_to_previous_stage(self, event) super(IoTHubMQTTTranslationStage, self)._handle_pipeline_event(event)
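The twin-response branch above extracts a status code and request id from the incoming topic before raising a ResponseEvent. A rough, Python 3-only illustration of that parsing (the real helpers live in mqtt_topic_iothub and may differ in detail):

```python
# Illustration only; not the SDK's actual topic helpers.
import urllib.parse

topic = "$iothub/twin/res/200/?$rid=3226c2f7-3d30-425c-b83b-0c34335f8220"

status_code = int(topic.split("/")[3])                 # -> 200
query_string = topic.split("?", 1)[1]
request_id = urllib.parse.parse_qs(query_string)["$rid"][0]
print(status_code, request_id)
```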
@ -15,13 +15,41 @@ from .abstract_clients import (
) )
from .models import Message from .models import Message
from .inbox_manager import InboxManager from .inbox_manager import InboxManager
from .sync_inbox import SyncClientInbox from .sync_inbox import SyncClientInbox, InboxEmpty
from .pipeline import constant from .pipeline import constant as pipeline_constant
from .pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
from azure.iot.device.common.evented_callback import EventedCallback from azure.iot.device.common.evented_callback import EventedCallback
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
from azure.iot.device import constant as device_constant
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
def handle_result(callback):
try:
return callback.wait_for_completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except pipeline_exceptions.TlsExchangeAuthError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client due to TLS exchanges.", cause=e
)
except pipeline_exceptions.ProtocolProxyError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client raised due to proxy connections.", cause=e
)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
class GenericIoTHubClient(AbstractIoTHubClient): class GenericIoTHubClient(AbstractIoTHubClient):
"""A superclass representing a generic synchronous client. """A superclass representing a generic synchronous client.
This class needs to be extended for specific clients. This class needs to be extended for specific clients.
@ -33,8 +61,10 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This initializer should not be called directly. This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate Instead, use one of the 'create_from_' classmethods to instantiate
-TODO: How to document kwargs?
-Possible values: iothub_pipeline, edge_pipeline
+:param iothub_pipeline: The IoTHubPipeline used for the client
+:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
:param http_pipeline: The HTTPPipeline used for the client
:type http_pipeline: :class:`azure.iot.device.iothub.pipeline.HTTPPipeline`
""" """
# Depending on the subclass calling this __init__, there could be different arguments, # Depending on the subclass calling this __init__, there could be different arguments,
# and the super() call could call a different class, due to the different MROs # and the super() call could call a different class, due to the different MROs
@ -42,10 +72,14 @@ class GenericIoTHubClient(AbstractIoTHubClient):
# **kwargs. # **kwargs.
super(GenericIoTHubClient, self).__init__(**kwargs) super(GenericIoTHubClient, self).__init__(**kwargs)
self._inbox_manager = InboxManager(inbox_type=SyncClientInbox) self._inbox_manager = InboxManager(inbox_type=SyncClientInbox)
self._iothub_pipeline.on_connected = self._on_connected self._iothub_pipeline.on_connected = CallableWeakMethod(self, "_on_connected")
self._iothub_pipeline.on_disconnected = self._on_disconnected self._iothub_pipeline.on_disconnected = CallableWeakMethod(self, "_on_disconnected")
self._iothub_pipeline.on_method_request_received = self._inbox_manager.route_method_request self._iothub_pipeline.on_method_request_received = CallableWeakMethod(
self._iothub_pipeline.on_twin_patch_received = self._inbox_manager.route_twin_patch self._inbox_manager, "route_method_request"
)
self._iothub_pipeline.on_twin_patch_received = CallableWeakMethod(
self._inbox_manager, "route_twin_patch"
)
def _on_connected(self): def _on_connected(self):
"""Helper handler that is called upon an iothub pipeline connect""" """Helper handler that is called upon an iothub pipeline connect"""
@ -65,12 +99,21 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This is a synchronous call, meaning that this function will not return until the connection This is a synchronous call, meaning that this function will not return until the connection
to the service has been completely established. to the service has been completely established.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Connecting to Hub...") logger.info("Connecting to Hub...")
callback = EventedCallback() callback = EventedCallback()
self._iothub_pipeline.connect(callback=callback) self._iothub_pipeline.connect(callback=callback)
callback.wait_for_completion() handle_result(callback)
logger.info("Successfully connected to Hub") logger.info("Successfully connected to Hub")
@ -79,12 +122,15 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This is a synchronous call, meaning that this function will not return until the connection This is a synchronous call, meaning that this function will not return until the connection
to the service has been completely closed. to the service has been completely closed.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Disconnecting from Hub...") logger.info("Disconnecting from Hub...")
callback = EventedCallback() callback = EventedCallback()
self._iothub_pipeline.disconnect(callback=callback) self._iothub_pipeline.disconnect(callback=callback)
callback.wait_for_completion() handle_result(callback)
logger.info("Successfully disconnected from Hub") logger.info("Successfully disconnected from Hub")
@ -99,15 +145,29 @@ class GenericIoTHubClient(AbstractIoTHubClient):
:param message: The actual message to send. Anything passed that is not an instance of the :param message: The actual message to send. Anything passed that is not an instance of the
Message class will be converted to Message object. Message class will be converted to Message object.
:type message: :class:`azure.iot.device.Message` or str
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
""" """
if not isinstance(message, Message): if not isinstance(message, Message):
message = Message(message) message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of telemetry message can not exceed 256 KB.")
logger.info("Sending message to Hub...") logger.info("Sending message to Hub...")
callback = EventedCallback() callback = EventedCallback()
self._iothub_pipeline.send_message(message, callback=callback) self._iothub_pipeline.send_message(message, callback=callback)
callback.wait_for_completion() handle_result(callback)
logger.info("Successfully sent message to Hub") logger.info("Successfully sent message to Hub")
@ -118,21 +178,21 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If this parameter is not given, all methods not already being specifically targeted by If this parameter is not given, all methods not already being specifically targeted by
a different request to receive_method will be received. a different request to receive_method will be received.
:param bool block: Indicates if the operation should block until a request is received. :param bool block: Indicates if the operation should block until a request is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out. :param int timeout: Optionally provide a number of seconds until blocking times out.
-:raises: InboxEmpty if timeout occurs on a blocking operation.
-:raises: InboxEmpty if no request is available on a non-blocking operation.
-:returns: MethodRequest object representing the received method request.
+:returns: MethodRequest object representing the received method request, or None if
+    no method request has been received by the end of the blocking period.
""" """
if not self._iothub_pipeline.feature_enabled[constant.METHODS]: if not self._iothub_pipeline.feature_enabled[pipeline_constant.METHODS]:
self._enable_feature(constant.METHODS) self._enable_feature(pipeline_constant.METHODS)
method_inbox = self._inbox_manager.get_method_request_inbox(method_name) method_inbox = self._inbox_manager.get_method_request_inbox(method_name)
logger.info("Waiting for method request...") logger.info("Waiting for method request...")
try:
method_request = method_inbox.get(block=block, timeout=timeout) method_request = method_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
method_request = None
logger.info("Received method request") logger.info("Received method request")
return method_request return method_request
@ -146,13 +206,22 @@ class GenericIoTHubClient(AbstractIoTHubClient):
function will open the connection before sending the event. function will open the connection before sending the event.
:param method_response: The MethodResponse to send. :param method_response: The MethodResponse to send.
:type method_response: MethodResponse :type method_response: :class:`azure.iot.device.MethodResponse`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
logger.info("Sending method response to Hub...") logger.info("Sending method response to Hub...")
callback = EventedCallback() callback = EventedCallback()
self._iothub_pipeline.send_method_response(method_response, callback=callback) self._iothub_pipeline.send_method_response(method_response, callback=callback)
callback.wait_for_completion() handle_result(callback)
logger.info("Successfully sent method response to Hub") logger.info("Successfully sent method response to Hub")
@ -180,14 +249,24 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This is a synchronous call, meaning that this function will not return until the twin This is a synchronous call, meaning that this function will not return until the twin
has been retrieved from the service. has been retrieved from the service.
-:returns: Twin object which was retrieved from the hub
+:returns: Complete Twin as a JSON dict
:rtype: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
if not self._iothub_pipeline.feature_enabled[constant.TWIN]: if not self._iothub_pipeline.feature_enabled[pipeline_constant.TWIN]:
self._enable_feature(constant.TWIN) self._enable_feature(pipeline_constant.TWIN)
callback = EventedCallback(return_arg_name="twin") callback = EventedCallback(return_arg_name="twin")
self._iothub_pipeline.get_twin(callback=callback) self._iothub_pipeline.get_twin(callback=callback)
twin = callback.wait_for_completion() twin = handle_result(callback)
logger.info("Successfully retrieved twin") logger.info("Successfully retrieved twin")
return twin return twin
@ -202,17 +281,26 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If the service returns an error on the patch operation, this function will raise the If the service returns an error on the patch operation, this function will raise the
appropriate error. appropriate error.
-:param reported_properties_patch:
-:type reported_properties_patch: dict, str, int, float, bool, or None (JSON compatible values)
+:param reported_properties_patch: Twin Reported Properties patch as a JSON dict
+:type reported_properties_patch: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
""" """
if not self._iothub_pipeline.feature_enabled[constant.TWIN]: if not self._iothub_pipeline.feature_enabled[pipeline_constant.TWIN]:
self._enable_feature(constant.TWIN) self._enable_feature(pipeline_constant.TWIN)
callback = EventedCallback() callback = EventedCallback()
self._iothub_pipeline.patch_twin_reported_properties( self._iothub_pipeline.patch_twin_reported_properties(
patch=reported_properties_patch, callback=callback patch=reported_properties_patch, callback=callback
) )
callback.wait_for_completion() handle_result(callback)
logger.info("Successfully patched twin") logger.info("Successfully patched twin")
@ -231,20 +319,21 @@ class GenericIoTHubClient(AbstractIoTHubClient):
an InboxEmpty exception an InboxEmpty exception
:param bool block: Indicates if the operation should block until a request is received. :param bool block: Indicates if the operation should block until a request is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out. :param int timeout: Optionally provide a number of seconds until blocking times out.
-:raises: InboxEmpty if timeout occurs on a blocking operation.
-:raises: InboxEmpty if no request is available on a non-blocking operation.
-:returns: desired property patch. This can be dict, str, int, float, bool, or None (JSON compatible values)
+:returns: Twin Desired Properties patch as a JSON dict, or None if no patch has been
+    received by the end of the blocking period
+:rtype: dict or None
""" """
if not self._iothub_pipeline.feature_enabled[constant.TWIN_PATCHES]: if not self._iothub_pipeline.feature_enabled[pipeline_constant.TWIN_PATCHES]:
self._enable_feature(constant.TWIN_PATCHES) self._enable_feature(pipeline_constant.TWIN_PATCHES)
twin_patch_inbox = self._inbox_manager.get_twin_patch_inbox() twin_patch_inbox = self._inbox_manager.get_twin_patch_inbox()
logger.info("Waiting for twin patches...") logger.info("Waiting for twin patches...")
try:
patch = twin_patch_inbox.get(block=block, timeout=timeout) patch = twin_patch_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
return None
logger.info("twin patch received") logger.info("twin patch received")
return patch return patch
@ -255,39 +344,78 @@ class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
Intended for usage with Python 2.7 or compatibility scenarios for Python 3.5.3+. Intended for usage with Python 2.7 or compatibility scenarios for Python 3.5.3+.
""" """
def __init__(self, iothub_pipeline): def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a IoTHubDeviceClient. """Initializer for a IoTHubDeviceClient.
This initializer should not be called directly. This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint. :param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline :type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
""" """
super(IoTHubDeviceClient, self).__init__(iothub_pipeline=iothub_pipeline) super(IoTHubDeviceClient, self).__init__(
self._iothub_pipeline.on_c2d_message_received = self._inbox_manager.route_c2d_message iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline
)
self._iothub_pipeline.on_c2d_message_received = CallableWeakMethod(
self._inbox_manager, "route_c2d_message"
)
def receive_message(self, block=True, timeout=None): def receive_message(self, block=True, timeout=None):
"""Receive a message that has been sent from the Azure IoT Hub. """Receive a message that has been sent from the Azure IoT Hub.
:param bool block: Indicates if the operation should block until a message is received. :param bool block: Indicates if the operation should block until a message is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out. :param int timeout: Optionally provide a number of seconds until blocking times out.
-:raises: InboxEmpty if timeout occurs on a blocking operation.
-:raises: InboxEmpty if no message is available on a non-blocking operation.
-:returns: Message that was sent from the Azure IoT Hub.
+:returns: Message that was sent from the Azure IoT Hub, or None if
+    no message has been received by the end of the blocking period.
+:rtype: :class:`azure.iot.device.Message` or None
""" """
if not self._iothub_pipeline.feature_enabled[constant.C2D_MSG]: if not self._iothub_pipeline.feature_enabled[pipeline_constant.C2D_MSG]:
self._enable_feature(constant.C2D_MSG) self._enable_feature(pipeline_constant.C2D_MSG)
c2d_inbox = self._inbox_manager.get_c2d_message_inbox() c2d_inbox = self._inbox_manager.get_c2d_message_inbox()
logger.info("Waiting for message from Hub...") logger.info("Waiting for message from Hub...")
try:
message = c2d_inbox.get(block=block, timeout=timeout) message = c2d_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
message = None
logger.info("Message received") logger.info("Message received")
return message return message
def get_storage_info_for_blob(self, blob_name):
"""Sends a POST request over HTTP to an IoTHub endpoint that will return information for uploading via the Azure Storage Account linked to the IoTHub your device is connected to.
:param str blob_name: The name in string format of the blob that will be uploaded using the storage API. This name will be used to generate the proper credentials for Storage, and needs to match what will be used with the Azure Storage SDK to perform the blob upload.
:returns: A JSON-like (dictionary) object from IoT Hub that will contain relevant information including: correlationId, hostName, containerName, blobName, sasToken.
"""
callback = EventedCallback(return_arg_name="storage_info")
self._http_pipeline.get_storage_info_for_blob(blob_name, callback=callback)
storage_info = handle_result(callback)
logger.info("Successfully retrieved storage_info")
return storage_info
def notify_blob_upload_status(
self, correlation_id, is_success, status_code, status_description
):
"""When the upload is complete, the device sends a POST request to the IoT Hub endpoint with information on the status of an upload to blob attempt. This is used by IoT Hub to notify listening clients.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
"""
callback = EventedCallback()
self._http_pipeline.notify_blob_upload_status(
correlation_id=correlation_id,
is_success=is_success,
status_code=status_code,
status_description=status_description,
callback=callback,
)
handle_result(callback)
logger.info("Successfully notified blob upload status")
class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient): class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
"""A synchronous module client that connects to an Azure IoT Hub or Azure IoT Edge instance. """A synchronous module client that connects to an Azure IoT Hub or Azure IoT Edge instance.
@ -295,21 +423,23 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
Intended for usage with Python 2.7 or compatibility scenarios for Python 3.5.3+. Intended for usage with Python 2.7 or compatibility scenarios for Python 3.5.3+.
""" """
def __init__(self, iothub_pipeline, edge_pipeline=None): def __init__(self, iothub_pipeline, http_pipeline):
"""Intializer for a IoTHubModuleClient. """Intializer for a IoTHubModuleClient.
This initializer should not be called directly. This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint. :param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline :type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
:param edge_pipeline: (OPTIONAL) The pipeline used to connect to the Edge endpoint. :param http_pipeline: The pipeline used to connect to the IoTHub endpoint via HTTP.
:type edge_pipeline: EdgePipeline :type http_pipeline: :class:`azure.iot.device.iothub.pipeline.HTTPPipeline`
""" """
super(IoTHubModuleClient, self).__init__( super(IoTHubModuleClient, self).__init__(
iothub_pipeline=iothub_pipeline, edge_pipeline=edge_pipeline iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline
)
self._iothub_pipeline.on_input_message_received = CallableWeakMethod(
self._inbox_manager, "route_input_message"
) )
self._iothub_pipeline.on_input_message_received = self._inbox_manager.route_input_message
def send_message_to_output(self, message, output_name): def send_message_to_output(self, message, output_name):
"""Sends an event/message to the given module output. """Sends an event/message to the given module output.
@ -322,19 +452,34 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
If the connection to the service has not previously been opened by a call to connect, this If the connection to the service has not previously been opened by a call to connect, this
function will open the connection before sending the event. function will open the connection before sending the event.
-:param message: message to send to the given output. Anything passed that is not an instance of the
-    Message class will be converted to Message object.
-:param output_name: Name of the output to send the event to.
+:param message: Message to send to the given output. Anything passed that is not an instance of the
+    Message class will be converted to Message object.
+:type message: :class:`azure.iot.device.Message` or str
+:param str output_name: Name of the output to send the event to.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
""" """
if not isinstance(message, Message): if not isinstance(message, Message):
message = Message(message) message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of message can not exceed 256 KB.")
message.output_name = output_name message.output_name = output_name
logger.info("Sending message to output:" + output_name + "...") logger.info("Sending message to output:" + output_name + "...")
callback = EventedCallback() callback = EventedCallback()
self._iothub_pipeline.send_output_event(message, callback=callback) self._iothub_pipeline.send_output_event(message, callback=callback)
callback.wait_for_completion() handle_result(callback)
logger.info("Successfully sent message to output: " + output_name) logger.info("Successfully sent message to output: " + output_name)
@ -343,19 +488,37 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
:param str input_name: The input name to receive a message on. :param str input_name: The input name to receive a message on.
:param bool block: Indicates if the operation should block until a message is received. :param bool block: Indicates if the operation should block until a message is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out. :param int timeout: Optionally provide a number of seconds until blocking times out.
-:raises: InboxEmpty if timeout occurs on a blocking operation.
-:raises: InboxEmpty if no message is available on a non-blocking operation.
-:returns: Message that was sent to the specified input.
+:returns: Message that was sent to the specified input, or None if
+    no message has been received by the end of the blocking period.
""" """
if not self._iothub_pipeline.feature_enabled[constant.INPUT_MSG]: if not self._iothub_pipeline.feature_enabled[pipeline_constant.INPUT_MSG]:
self._enable_feature(constant.INPUT_MSG) self._enable_feature(pipeline_constant.INPUT_MSG)
input_inbox = self._inbox_manager.get_input_message_inbox(input_name) input_inbox = self._inbox_manager.get_input_message_inbox(input_name)
logger.info("Waiting for input message on: " + input_name + "...") logger.info("Waiting for input message on: " + input_name + "...")
try:
message = input_inbox.get(block=block, timeout=timeout) message = input_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
message = None
logger.info("Input message received on: " + input_name) logger.info("Input message received on: " + input_name)
return message return message
def invoke_method(self, method_params, device_id, module_id=None):
"""Invoke a method from your client onto a device or module client, and receive the response to the method call.
:param dict method_params: Should contain a method_name, payload, connect_timeout_in_seconds, response_timeout_in_seconds.
:param str device_id: Device ID of the target device where the method will be invoked.
:param str module_id: Module ID of the target module where the method will be invoked. (Optional)
:returns: method_result should contain a status, and a payload
:rtype: dict
"""
callback = EventedCallback(return_arg_name="invoke_method_response")
self._http_pipeline.invoke_method(
device_id, method_params, callback=callback, module_id=module_id
)
invoke_method_response = handle_result(callback)
logger.info("Successfully invoked method")
return invoke_method_response
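Example use of invoke_method from an Edge module targeting a leaf device. create_from_edge_environment is assumed from the wider client surface, and the device id and method parameters are placeholders:

```python
from azure.iot.device import IoTHubModuleClient

module_client = IoTHubModuleClient.create_from_edge_environment()
module_client.connect()

method_params = {
    "methodName": "reboot",
    "payload": {"delay": 5},
    "connectTimeoutInSeconds": 10,
    "responseTimeoutInSeconds": 30,
}
response = module_client.invoke_method(method_params, device_id="leaf-device-01")
print(response["status"], response["payload"])
```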
@ -0,0 +1,50 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import platform
from azure.iot.device.constant import VERSION, IOTHUB_IDENTIFIER, PROVISIONING_IDENTIFIER
python_runtime = platform.python_version()
os_type = platform.system()
os_release = platform.version()
architecture = platform.machine()
class ProductInfo(object):
"""
A class for creating product identifiers or agent strings for IotHub as well as Provisioning.
"""
@staticmethod
def _get_common_user_agent():
return "({python_runtime};{os_type} {os_release};{architecture})".format(
python_runtime=python_runtime,
os_type=os_type,
os_release=os_release,
architecture=architecture,
)
@staticmethod
def get_iothub_user_agent():
"""
Create the user agent for IotHub
"""
return "{iothub_iden}/{version}{common}".format(
iothub_iden=IOTHUB_IDENTIFIER,
version=VERSION,
common=ProductInfo._get_common_user_agent(),
)
@staticmethod
def get_provisioning_user_agent():
"""
Create the user agent for Provisioning
"""
return "{provisioning_iden}/{version}{common}".format(
provisioning_iden=PROVISIONING_IDENTIFIER,
version=VERSION,
common=ProductInfo._get_common_user_agent(),
)
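The exact strings depend on the installed package version and the local platform, but the generated user agents have this general shape:

```python
from azure.iot.device.product_info import ProductInfo

print(ProductInfo.get_iothub_user_agent())
# e.g. azure-iot-device/2.0.0(3.7.4;Linux #1 SMP ...;x86_64)
print(ProductInfo.get_provisioning_user_agent())
# e.g. azure-iot-provisioning-device/2.0.0(3.7.4;Linux #1 SMP ...;x86_64)
```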
@ -11,13 +11,22 @@ Device Provisioning Service.
import abc import abc
import six import six
import logging import logging
from .security.sk_security_client import SymmetricKeySecurityClient from azure.iot.device.provisioning import pipeline, security
from .security.x509_security_client import X509SecurityClient
from azure.iot.device.provisioning.pipeline.provisioning_pipeline import ProvisioningPipeline
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
def _validate_kwargs(**kwargs):
"""Helper function to validate user provided kwargs.
Raises TypeError if an invalid option has been provided"""
# TODO: add support for server_verification_cert
valid_kwargs = ["websockets", "cipher"]
for kwarg in kwargs:
if kwarg not in valid_kwargs:
raise TypeError("Got an unexpected keyword argument '{}'".format(kwarg))
@six.add_metaclass(abc.ABCMeta) @six.add_metaclass(abc.ABCMeta)
class AbstractProvisioningDeviceClient(object): class AbstractProvisioningDeviceClient(object):
""" """
@ -27,80 +36,110 @@ class AbstractProvisioningDeviceClient(object):
def __init__(self, provisioning_pipeline): def __init__(self, provisioning_pipeline):
""" """
Initializes the provisioning client. Initializes the provisioning client.
NOTE: This initializer should not be called directly.
Instead, the class methods that start with `create_from_` should be used to create a
client object.
:param provisioning_pipeline: Instance of the provisioning pipeline object. :param provisioning_pipeline: Instance of the provisioning pipeline object.
:type provisioning_pipeline: :class:`azure.iot.device.provisioning.pipeline.ProvisioningPipeline`
""" """
self._provisioning_pipeline = provisioning_pipeline self._provisioning_pipeline = provisioning_pipeline
self._provisioning_payload = None
@classmethod @classmethod
def create_from_symmetric_key( def create_from_symmetric_key(
cls, provisioning_host, registration_id, id_scope, symmetric_key, protocol_choice=None cls, provisioning_host, registration_id, id_scope, symmetric_key, **kwargs
): ):
""" """
        Create a client which can be used to run the registration of a device with provisioning service
        using Symmetric Key authentication.

        :param str provisioning_host: Host running the Device Provisioning Service.
            Can be found in the Azure portal in the Overview tab as the string Global device endpoint.
        :param str registration_id: The registration ID used to uniquely identify a device in the
            Device Provisioning Service. The registration ID is an alphanumeric, lowercase string
            and may contain hyphens.
        :param str id_scope: The ID scope used to uniquely identify the specific provisioning
            service the device will register through. The ID scope is assigned to a
            Device Provisioning Service when it is created by the user, is generated by the
            service, and is immutable, guaranteeing uniqueness.
        :param str symmetric_key: The key which will be used to create the shared access signature
            token to authenticate the device with the Device Provisioning Service. By default,
            the Device Provisioning Service creates new symmetric keys with a default length of
            32 bytes when new enrollments are saved with the Auto-generate keys option enabled.
            Users can provide their own symmetric keys for enrollments by disabling this option;
            such keys must be between 16 bytes and 64 bytes long and in valid Base64 format.
        :param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
            over websockets.
        :param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
            "OpenSSL cipher list format" or as a list of cipher suite strings.
        :type cipher: str or list(str)
        :raises: TypeError if given an unrecognized parameter.
        :returns: A ProvisioningDeviceClient instance which can register via Symmetric Key.
        """
        _validate_kwargs(**kwargs)

        security_client = security.SymmetricKeySecurityClient(
            provisioning_host=provisioning_host,
            registration_id=registration_id,
            id_scope=id_scope,
            symmetric_key=symmetric_key,
        )
        pipeline_configuration = pipeline.ProvisioningPipelineConfig(**kwargs)
        mqtt_provisioning_pipeline = pipeline.ProvisioningPipeline(
            security_client, pipeline_configuration
        )
        return cls(mqtt_provisioning_pipeline)
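For readers skimming the diff, a minimal usage sketch of this factory follows. The host, ID scope, registration ID, and key below are placeholder values, not real credentials:

```python
from azure.iot.device import ProvisioningDeviceClient

provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="my-device-001",          # placeholder
    id_scope="0ne00000000",                   # placeholder
    symmetric_key="<base64-symmetric-key>",   # placeholder
)
registration_result = provisioning_client.register()
print(registration_result.status)  # e.g. "assigned" on success
```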
    @classmethod
    def create_from_x509_certificate(
        cls, provisioning_host, registration_id, id_scope, x509, **kwargs
    ):
        """
        Create a client which can be used to run the registration of a device with
        provisioning service using X509 certificate authentication.

        :param str provisioning_host: Host running the Device Provisioning Service. Can be found in
            the Azure portal in the Overview tab as the string Global device endpoint.
        :param str registration_id: The registration ID used to uniquely identify a device in the
            Device Provisioning Service. The registration ID is an alphanumeric, lowercase string
            and may contain hyphens.
        :param str id_scope: The ID scope used to uniquely identify the specific
            provisioning service the device will register through. The ID scope is assigned to a
            Device Provisioning Service when it is created by the user, is generated by the
            service, and is immutable, guaranteeing uniqueness.
        :param x509: The x509 certificate. To use the certificate the enrollment object needs to
            contain cert (either the root certificate or one of the intermediate CA certificates).
            If the cert comes from a CER file, it needs to be base64 encoded.
        :type x509: :class:`azure.iot.device.X509`
        :param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
            over websockets.
        :param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
            "OpenSSL cipher list format" or as a list of cipher suite strings.
        :type cipher: str or list(str)
        :raises: TypeError if given an unrecognized parameter.
        :returns: A ProvisioningDeviceClient instance which can register via X509 certificate.
        """
        _validate_kwargs(**kwargs)

        security_client = security.X509SecurityClient(
            provisioning_host=provisioning_host,
            registration_id=registration_id,
            id_scope=id_scope,
            x509=x509,
        )
        pipeline_configuration = pipeline.ProvisioningPipelineConfig(**kwargs)
        mqtt_provisioning_pipeline = pipeline.ProvisioningPipeline(
            security_client, pipeline_configuration
        )
        return cls(mqtt_provisioning_pipeline)
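A corresponding sketch for the X509 path. The file paths, passphrase, and IDs are placeholders:

```python
from azure.iot.device import ProvisioningDeviceClient, X509

x509 = X509(
    cert_file="./certs/device-cert.pem",      # placeholder path
    key_file="./certs/device-key.pem",        # placeholder path
    pass_phrase="<optional-key-passphrase>",  # only if the key is encrypted
)
provisioning_client = ProvisioningDeviceClient.create_from_x509_certificate(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="my-device-001",
    id_scope="0ne00000000",
    x509=x509,
)
registration_result = provisioning_client.register()
```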
    @abc.abstractmethod
    def register(self):

@ -109,12 +148,19 @@ class AbstractProvisioningDeviceClient(object):
        """
        pass

    @property
    def provisioning_payload(self):
        return self._provisioning_payload

    @provisioning_payload.setter
    def provisioning_payload(self, provisioning_payload):
        """
        Set the payload that will form the request payload in a registration request.

        :param provisioning_payload: The payload that can be supplied by the user.
        :type provisioning_payload: This can be an object or dictionary or a string or an integer.
        """
        self._provisioning_payload = provisioning_payload
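The new provisioning_payload property lets a caller attach a custom payload to the registration request. A short sketch, assuming a `provisioning_client` built with one of the factories above; the payload content is arbitrary and purely illustrative:

```python
# The payload may be a dict, string, int, or any serializable object.
provisioning_client.provisioning_payload = {"modelId": "dtmi:example:Thermostat;1"}
result = provisioning_client.register()
```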

def log_on_register_complete(result=None):
@ -3,8 +3,10 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""
This module contains the user-facing asynchronous Provisioning Device Client for the Azure
Provisioning Device SDK. This client uses Symmetric Key and X509 authentication to register
devices with an IoT Hub via the Device Provisioning Service.
"""

import logging
@ -15,55 +17,77 @@ from azure.iot.device.provisioning.abstract_provisioning_device_client import (
from azure.iot.device.provisioning.abstract_provisioning_device_client import (
    log_on_register_complete,
)
from azure.iot.device.provisioning.pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
from azure.iot.device.provisioning.pipeline import constant as dps_constant

logger = logging.getLogger(__name__)
async def handle_result(callback):
try:
return await callback.completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
class ProvisioningDeviceClient(AbstractProvisioningDeviceClient):
    """
    Client which can be used to run the registration of a device with provisioning service
    using Symmetric Key or X509 authentication.
    """

    async def register(self):
        """
        Register the device with the provisioning service.

        Before returning, the client will also disconnect from the provisioning service.
        If a registration attempt is made while a previous registration is in progress it may
        throw an error.

        :returns: RegistrationResult indicating the result of the registration.
        :rtype: :class:`azure.iot.device.RegistrationResult`
        :raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
            and a connection cannot be established.
        :raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
            connection results in failure.
        :raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
            during execution.
        :raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
            during execution.
        """
        logger.info("Registering with Provisioning Service...")

        if not self._provisioning_pipeline.responses_enabled[dps_constant.REGISTER]:
            await self._enable_responses()

        register_async = async_adapter.emulate_async(self._provisioning_pipeline.register)
        register_complete = async_adapter.AwaitableCallback(return_arg_name="result")
        await register_async(payload=self._provisioning_payload, callback=register_complete)
        result = await handle_result(register_complete)

        log_on_register_complete(result)
        return result

    async def _enable_responses(self):
        """Enable the client to receive responses from the Device Provisioning Service."""
        logger.info("Enabling reception of response from Device Provisioning Service...")
        subscribe_async = async_adapter.emulate_async(self._provisioning_pipeline.enable_responses)
        subscription_complete = async_adapter.AwaitableCallback()
        await subscribe_async(callback=subscription_complete)
        await handle_result(subscription_complete)

        logger.info("Successfully subscribed to Device Provisioning Service to receive responses")

@ -1,4 +0,0 @@
"""Azure Provisioning Device Internal
This package provides internal classes for use within the Azure Provisioning Device SDK.
"""

Просмотреть файл

@ -1,450 +0,0 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import uuid
import json
import traceback
from threading import Timer
from transitions import Machine
from azure.iot.device.provisioning.pipeline import constant
import six.moves.urllib as urllib
from .request_response_provider import RequestResponseProvider
from azure.iot.device.provisioning.models.registration_result import (
RegistrationResult,
RegistrationState,
)
from .registration_query_status_result import RegistrationQueryStatusResult
logger = logging.getLogger(__name__)
POS_STATUS_CODE_IN_TOPIC = 3
POS_QUERY_PARAM_PORTION = 2
class PollingMachine(object):
"""
Class that is responsible for sending the initial registration request and polling the
registration process for constant updates.
"""
def __init__(self, provisioning_pipeline):
"""
:param provisioning_pipeline: The pipeline for provisioning.
"""
self._polling_timer = None
self._query_timer = None
self._register_callback = None
self._cancel_callback = None
self._registration_error = None
self._registration_result = None
self._operations = {}
self._request_response_provider = RequestResponseProvider(provisioning_pipeline)
states = [
"disconnected",
"initializing",
"registering",
"waiting_to_poll",
"polling",
"completed",
"error",
"cancelling",
]
transitions = [
{
"trigger": "_trig_register",
"source": "disconnected",
"before": "_initialize_register",
"dest": "initializing",
},
{
"trigger": "_trig_register",
"source": "error",
"before": "_initialize_register",
"dest": "initializing",
},
{"trigger": "_trig_register", "source": "registering", "dest": None},
{
"trigger": "_trig_send_register_request",
"source": "initializing",
"before": "_send_register_request",
"dest": "registering",
},
{
"trigger": "_trig_send_register_request",
"source": "waiting_to_poll",
"before": "_send_register_request",
"dest": "registering",
},
{
"trigger": "_trig_wait",
"source": "registering",
"dest": "waiting_to_poll",
"after": "_wait_for_interval",
},
{"trigger": "_trig_wait", "source": "cancelling", "dest": None},
{
"trigger": "_trig_wait",
"source": "polling",
"dest": "waiting_to_poll",
"after": "_wait_for_interval",
},
{
"trigger": "_trig_poll",
"source": "waiting_to_poll",
"dest": "polling",
"after": "_query_operation_status",
},
{"trigger": "_trig_poll", "source": "cancelling", "dest": None},
{
"trigger": "_trig_complete",
"source": ["registering", "waiting_to_poll", "polling"],
"dest": "completed",
"after": "_call_complete",
},
{
"trigger": "_trig_error",
"source": ["registering", "waiting_to_poll", "polling"],
"dest": "error",
"after": "_call_error",
},
{"trigger": "_trig_error", "source": "cancelling", "dest": None},
{
"trigger": "_trig_cancel",
"source": ["disconnected", "completed"],
"dest": None,
"after": "_inform_no_process",
},
{
"trigger": "_trig_cancel",
"source": ["initializing", "registering", "waiting_to_poll", "polling"],
"dest": "cancelling",
"after": "_call_cancel",
},
]
def _on_transition_complete(event_data):
if not event_data.transition:
dest = "[no transition]"
else:
dest = event_data.transition.dest
logger.debug(
"Transition complete. Trigger={}, Src={}, Dest={}, result={}, error{}".format(
event_data.event.name,
event_data.transition.source,
dest,
str(event_data.result),
str(event_data.error),
)
)
self._state_machine = Machine(
model=self,
states=states,
transitions=transitions,
initial="disconnected",
send_event=True, # Use event_data structures to pass transition arguments
finalize_event=_on_transition_complete,
queued=True,
)
def register(self, callback=None):
"""
Register the device with the provisioning service.
:param:Callback to be called upon finishing the registration process
"""
logger.info("register called from polling machine")
self._register_callback = callback
self._trig_register()
def cancel(self, callback=None):
"""
Cancels the current registration process of the device.
:param:Callback to be called upon finishing the cancellation process
"""
logger.info("cancel called from polling machine")
self._cancel_callback = callback
self._trig_cancel()
def _initialize_register(self, event_data):
logger.info("Initializing the registration process.")
self._request_response_provider.enable_responses(callback=self._on_subscribe_completed)
def _send_register_request(self, event_data):
"""
Send the registration request.
"""
logger.info("Sending registration request")
self._set_query_timer()
request_id = str(uuid.uuid4())
self._operations[request_id] = constant.PUBLISH_TOPIC_REGISTRATION.format(request_id)
self._request_response_provider.send_request(
request_id=request_id,
request_payload=" ",
operation_id=None,
callback_on_response=self._on_register_response_received,
)
def _query_operation_status(self, event_data):
"""
Poll the service for operation status.
"""
logger.info("Querying operation status from polling machine")
self._set_query_timer()
request_id = str(uuid.uuid4())
result = event_data.args[0].args[0]
operation_id = result.operation_id
self._operations[request_id] = constant.PUBLISH_TOPIC_QUERYING.format(
request_id, operation_id
)
self._request_response_provider.send_request(
request_id=request_id,
request_payload=" ",
operation_id=operation_id,
callback_on_response=self._on_query_response_received,
)
def _on_register_response_received(self, request_id, status_code, key_values_dict, response):
"""
The function to call in case of a response from a registration request.
:param request_id: The id of the original register request.
:param status_code: The status code in the response.
:param key_values_dict: The dictionary containing the query parameters of the returned topic.
:param response: The complete response from the service.
"""
self._query_timer.cancel()
retry_after = (
None if "retry-after" not in key_values_dict else str(key_values_dict["retry-after"][0])
)
intermediate_registration_result = RegistrationQueryStatusResult(request_id, retry_after)
if int(status_code, 10) >= 429:
del self._operations[request_id]
self._trig_wait(intermediate_registration_result)
elif int(status_code, 10) >= 300: # pure failure
self._registration_error = ValueError("Incoming message failure")
self._trig_error()
else: # successful case, transition into complete or poll status
self._process_successful_response(request_id, retry_after, response)
def _on_query_response_received(self, request_id, status_code, key_values_dict, response):
"""
The function to call in case of a response from a polling/query request.
:param request_id: The id of the original query request.
:param status_code: The status code in the response.
:param key_values_dict: The dictionary containing the query parameters of the returned topic.
:param response: The complete response from the service.
"""
self._query_timer.cancel()
self._polling_timer.cancel()
retry_after = (
None if "retry-after" not in key_values_dict else str(key_values_dict["retry-after"][0])
)
intermediate_registration_result = RegistrationQueryStatusResult(request_id, retry_after)
if int(status_code, 10) >= 429:
if request_id in self._operations:
publish_query_topic = self._operations[request_id]
del self._operations[request_id]
topic_parts = publish_query_topic.split("$")
key_values_publish_topic = urllib.parse.parse_qs(
topic_parts[POS_QUERY_PARAM_PORTION]
)
operation_id = key_values_publish_topic["operationId"][0]
intermediate_registration_result.operation_id = operation_id
self._trig_wait(intermediate_registration_result)
else:
self._registration_error = ValueError("This request was never sent")
self._trig_error()
elif int(status_code, 10) >= 300: # pure failure
self._registration_error = ValueError("Incoming message failure")
self._trig_error()
else: # successful status code case, transition into complete or another poll status
self._process_successful_response(request_id, retry_after, response)
def _process_successful_response(self, request_id, retry_after, response):
"""
Function to call in case of a 200 response from the service
:param request_id: The request id
:param retry_after: The time after which to try again.
:param response: The complete response
"""
del self._operations[request_id]
successful_result = self._decode_json_response(request_id, retry_after, response)
if successful_result.status == "assigning":
self._trig_wait(successful_result)
elif successful_result.status == "assigned" or successful_result.status == "failed":
complete_registration_result = self._decode_complete_json_response(
successful_result, response
)
self._registration_result = complete_registration_result
self._trig_complete()
else:
self._registration_error = ValueError("Other types of failure have occurred.", response)
self._trig_error()
def _inform_no_process(self, event_data):
raise RuntimeError("There is no registration process to cancel.")
def _call_cancel(self, event_data):
"""
Completes the cancellation process
"""
logger.info("Cancel called from polling machine")
self._clear_timers()
self._request_response_provider.disconnect(callback=self._on_disconnect_completed_cancel)
def _call_error(self, event_data):
logger.info("Failed register from polling machine")
self._clear_timers()
self._request_response_provider.disconnect(callback=self._on_disconnect_completed_error)
def _call_complete(self, event_data):
logger.info("Complete register from polling machine")
self._clear_timers()
self._request_response_provider.disconnect(callback=self._on_disconnect_completed_register)
def _clear_timers(self):
"""
Clears all the timers and disconnects from the service
"""
if self._query_timer is not None:
self._query_timer.cancel()
if self._polling_timer is not None:
self._polling_timer.cancel()
def _set_query_timer(self):
def time_up_query():
logger.error("Time is up for query timer")
self._query_timer.cancel()
# TimeoutError not defined in python 2
self._registration_error = ValueError("Time is up for query timer")
self._trig_error()
self._query_timer = Timer(constant.DEFAULT_TIMEOUT_INTERVAL, time_up_query)
self._query_timer.start()
def _wait_for_interval(self, event_data):
def time_up_polling():
self._polling_timer.cancel()
logger.debug("Done waiting for polling interval of {} secs".format(polling_interval))
if result.operation_id is None:
self._trig_send_register_request(event_data)
else:
self._trig_poll(event_data)
result = event_data.args[0]
polling_interval = (
constant.DEFAULT_POLLING_INTERVAL
if result.retry_after is None
else int(result.retry_after, 10)
)
self._polling_timer = Timer(polling_interval, time_up_polling)
logger.debug("Waiting for " + str(constant.DEFAULT_POLLING_INTERVAL) + " secs")
self._polling_timer.start() # This is waiting for that polling interval
def _decode_complete_json_response(self, query_result, response):
"""
Decodes the complete json response for details regarding the registration process.
:param query_result: The partially formed result.
:param response: The complete response from the service
"""
decoded_result = json.loads(response)
decoded_state = (
None
if "registrationState" not in decoded_result
else decoded_result["registrationState"]
)
registration_state = None
if decoded_state is not None:
# Everything needs to be converted to string explicitly for python 2
# as everything is by default a unicode character
registration_state = RegistrationState(
None if "deviceId" not in decoded_state else str(decoded_state["deviceId"]),
None if "assignedHub" not in decoded_state else str(decoded_state["assignedHub"]),
None if "substatus" not in decoded_state else str(decoded_state["substatus"]),
None
if "createdDateTimeUtc" not in decoded_state
else str(decoded_state["createdDateTimeUtc"]),
None
if "lastUpdatedDateTimeUtc" not in decoded_state
else str(decoded_state["lastUpdatedDateTimeUtc"]),
None if "etag" not in decoded_state else str(decoded_state["etag"]),
)
registration_result = RegistrationResult(
request_id=query_result.request_id,
operation_id=query_result.operation_id,
status=query_result.status,
registration_state=registration_state,
)
return registration_result
def _decode_json_response(self, request_id, retry_after, response):
"""
Decodes the json response for operation id and status
:param request_id: The request id.
:param retry_after: The time in secs after which to retry.
:param response: The complete response from the service.
"""
decoded_result = json.loads(response)
operation_id = (
None if "operationId" not in decoded_result else str(decoded_result["operationId"])
)
status = None if "status" not in decoded_result else str(decoded_result["status"])
return RegistrationQueryStatusResult(request_id, retry_after, operation_id, status)
def _on_disconnect_completed_error(self):
logger.info("on_disconnect_completed for Device Provisioning Service")
callback = self._register_callback
if callback:
self._register_callback = None
try:
callback(error=self._registration_error)
except Exception:
logger.error("Unexpected error calling callback supplied to register")
logger.error(traceback.format_exc())
def _on_disconnect_completed_cancel(self):
logger.info("on_disconnect_completed after cancelling current Device Provisioning Service")
callback = self._cancel_callback
if callback:
self._cancel_callback = None
callback()
def _on_disconnect_completed_register(self):
logger.info("on_disconnect_completed after registration to Device Provisioning Service")
callback = self._register_callback
if callback:
self._register_callback = None
try:
callback(result=self._registration_result)
except Exception:
logger.error("Unexpected error calling callback supplied to register")
logger.error(traceback.format_exc())
def _on_subscribe_completed(self):
logger.debug("on_subscribe_completed for Device Provisioning Service")
self._trig_send_register_request()

@ -1,58 +0,0 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
class RegistrationQueryStatusResult(object):
"""
The result of any registration attempt
:ivar:request_id: The request id to which the response is being obtained
:ivar:operation_id: The id of the operation as returned by the registration request.
:ivar status: The status of the registration process as returned by provisioning service.
Values can be "unassigned", "assigning", "assigned", "failed", "disabled"
from the provisioning service.
"""
def __init__(self, request_id=None, retry_after=None, operation_id=None, status=None):
"""
:param request_id: The request id to which the response is being obtained
:param retry_after : Number of secs after which to retry again.
:param operation_id: The id of the operation as returned by the initial registration request.
:param status: The status of the registration process.
Values can be "unassigned", "assigning", "assigned", "failed", "disabled"
from the provisioning service.
"""
self._request_id = request_id
self._operation_id = operation_id
self._status = status
self._retry_after = retry_after
@property
def request_id(self):
return self._request_id
@property
def retry_after(self):
return self._retry_after
@retry_after.setter
def retry_after(self, val):
self._retry_after = val
@property
def operation_id(self):
return self._operation_id
@operation_id.setter
def operation_id(self, val):
self._operation_id = val
@property
def status(self):
return self._status
@status.setter
def status(self, val):
self._status = val

@ -1,101 +0,0 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
logger = logging.getLogger(__name__)
POS_STATUS_CODE_IN_TOPIC = 3
POS_URL_PORTION = 1
POS_QUERY_PARAM_PORTION = 2
class RequestResponseProvider(object):
"""
Class that processes requests sent from device and responses received at device.
"""
def __init__(self, provisioning_pipeline):
self._provisioning_pipeline = provisioning_pipeline
self._provisioning_pipeline.on_message_received = self._receive_response
self._pending_requests = {}
def send_request(
self, request_id, request_payload, operation_id=None, callback_on_response=None
):
"""
Sends a request
:param request_id: Id of the request
:param request_payload: The payload of the request.
:param operation_id: A id of the operation in case it is an ongoing process.
:param callback_on_response: callback which is called when response comes back for this request.
"""
self._pending_requests[request_id] = callback_on_response
self._provisioning_pipeline.send_request(
request_id=request_id,
request_payload=request_payload,
operation_id=operation_id,
callback=self._on_publish_completed,
)
def connect(self, callback=None):
if callback is None:
callback = self._on_connection_state_change
self._provisioning_pipeline.connect(callback=callback)
def disconnect(self, callback=None):
if callback is None:
callback = self._on_connection_state_change
self._provisioning_pipeline.disconnect(callback=callback)
def enable_responses(self, callback=None):
if callback is None:
callback = self._on_subscribe_completed
self._provisioning_pipeline.enable_responses(callback=callback)
def disable_responses(self, callback=None):
if callback is None:
callback = self._on_unsubscribe_completed
self._provisioning_pipeline.disable_responses(callback=callback)
def _receive_response(self, request_id, status_code, key_value_dict, response_payload):
"""
Handler that processes the response from the service.
:param request_id: The id of the request which is being responded to.
:param status_code: The status code inside the response
:param key_value_dict: A dictionary of keys mapped to a list of values extracted from the topic of the response.
:param response_payload: String payload of the message received.
:return:
"""
# """ Sample topic and payload
# $dps/registrations/res/200/?$rid=28c32371-608c-4390-8da7-c712353c1c3b
# {"operationId":"4.550cb20c3349a409.390d2957-7b58-4701-b4f9-7fe848348f4a","status":"assigning"}
# """
logger.debug("Received response {}:".format(response_payload))
if request_id in self._pending_requests:
callback = self._pending_requests[request_id]
# Only send the status code and the extracted topic
callback(request_id, status_code, key_value_dict, response_payload)
del self._pending_requests[request_id]
# TODO : What happens when request_id if not there ? trigger error ?
def _on_connection_state_change(self, new_state):
"""Handler to be called by the pipeline upon a connection state change."""
logger.info("Connection State - {}".format(new_state))
def _on_publish_completed(self):
logger.debug("publish completed for request response provider")
def _on_subscribe_completed(self):
logger.debug("subscribe completed for request response provider")
def _on_unsubscribe_completed(self):
logger.debug("on_unsubscribe_completed for request response provider")

Просмотреть файл

@ -3,6 +3,7 @@
# Licensed under the MIT License. See License.txt in the project root for # Licensed under the MIT License. See License.txt in the project root for
# license information. # license information.
# -------------------------------------------------------------------------- # --------------------------------------------------------------------------
import json
class RegistrationResult(object): class RegistrationResult(object):
@ -16,24 +17,18 @@ class RegistrationResult(object):
from the provisioning service. from the provisioning service.
""" """
def __init__(self, request_id, operation_id, status, registration_state=None): def __init__(self, operation_id, status, registration_state=None):
""" """
:param request_id: The request id to which the response is being obtained
:param operation_id: The id of the operation as returned by the initial registration request. :param operation_id: The id of the operation as returned by the initial registration request.
:param status: The status of the registration process. :param status: The status of the registration process.
Values can be "unassigned", "assigning", "assigned", "failed", "disabled" Values can be "unassigned", "assigning", "assigned", "failed", "disabled"
:param registration_state : Details like device id, assigned hub , date times etc returned :param registration_state : Details like device id, assigned hub , date times etc returned
from the provisioning service. from the provisioning service.
""" """
self._request_id = request_id
self._operation_id = operation_id self._operation_id = operation_id
self._status = status self._status = status
self._registration_state = registration_state self._registration_state = registration_state
@property
def request_id(self):
return self._request_id
@property @property
def operation_id(self): def operation_id(self):
return self._operation_id return self._operation_id
@ -70,6 +65,7 @@ class RegistrationState(object):
created_date_time=None, created_date_time=None,
last_update_date_time=None, last_update_date_time=None,
etag=None, etag=None,
payload=None,
): ):
""" """
:param device_id: Desired device id for the provisioned device :param device_id: Desired device id for the provisioned device
@ -79,6 +75,7 @@ class RegistrationState(object):
:param created_date_time: Registration create date time (in UTC). :param created_date_time: Registration create date time (in UTC).
:param last_update_date_time: Last updated date time (in UTC). :param last_update_date_time: Last updated date time (in UTC).
:param etag: The entity tag associated with the resource. :param etag: The entity tag associated with the resource.
:param payload: The payload with which hub is responding
""" """
self._device_id = device_id self._device_id = device_id
self._assigned_hub = assigned_hub self._assigned_hub = assigned_hub
@ -86,6 +83,7 @@ class RegistrationState(object):
self._created_date_time = created_date_time self._created_date_time = created_date_time
self._last_update_date_time = last_update_date_time self._last_update_date_time = last_update_date_time
self._etag = etag self._etag = etag
self._response_payload = payload
@property @property
def device_id(self): def device_id(self):
@ -111,5 +109,11 @@ class RegistrationState(object):
def etag(self): def etag(self):
return self._etag return self._etag
@property
def response_payload(self):
return json.dumps(self._response_payload, default=lambda o: o.__dict__, sort_keys=True)
def __str__(self): def __str__(self):
return "\n".join([self.device_id, self.assigned_hub, self.sub_status]) return "\n".join(
[self.device_id, self.assigned_hub, self.sub_status, self.response_payload]
)
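A brief sketch of how the new response_payload accessor serializes whatever the hub returned; the constructor values below are illustrative only:

```python
state = RegistrationState(
    device_id="my-device-001",
    assigned_hub="my-hub.azure-devices.net",
    sub_status="initialAssignment",
    payload={"greeting": "hello from hub"},
)
print(state.response_payload)  # -> '{"greeting": "hello from hub"}'
```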

@ -5,3 +5,4 @@ This package provides pipeline for use with the Azure Provisioning Device SDK.
INTERNAL USAGE ONLY
"""
from .provisioning_pipeline import ProvisioningPipeline
from .config import ProvisioningPipelineConfig
@ -0,0 +1,17 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
from azure.iot.device.common.pipeline.config import BasePipelineConfig
logger = logging.getLogger(__name__)
class ProvisioningPipelineConfig(BasePipelineConfig):
"""A class for storing all configurations/options for Provisioning clients in the Azure IoT Python Device Client Library.
"""
pass

@ -0,0 +1,21 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines an exception surface, exposed as part of the pipeline API"""
# For now, present relevant transport errors as part of the Pipeline API surface
# so that they do not have to be duplicated at this layer.
# OK TODO This mimics the IotHub Case. Both IotHub and Provisioning needs to change
from azure.iot.device.common.pipeline.pipeline_exceptions import *
from azure.iot.device.common.transport_exceptions import (
ConnectionFailedError,
ConnectionDroppedError,
# CT TODO: UnauthorizedError (the one from transport) should probably not surface out of
# the pipeline due to confusion with the higher level service UnauthorizedError. It
# should probably get turned into some other error instead (e.g. ConnectionFailedError).
# But for now, this is a stopgap.
UnauthorizedError,
ProtocolClientError,
)

@ -24,24 +24,24 @@ def get_topic_for_subscribe():
    return _get_topic_base() + "res/#"


def get_topic_for_register(method, request_id):
    """
    return the topic string used to publish a registration request
    """
    return (_get_topic_base() + "{method}/iotdps-register/?$rid={request_id}").format(
        method=method, request_id=request_id
    )


def get_topic_for_query(method, request_id, operation_id):
    """
    :return: The topic string used to query the status of a registration operation. It is of the
        format "<topic base>{method}/iotdps-get-operationstatus/?$rid={request_id}&operationId={operation_id}"
    """
    return (
        _get_topic_base()
        + "{method}/iotdps-get-operationstatus/?$rid={request_id}&operationId={operation_id}"
    ).format(method=method, request_id=request_id, operation_id=operation_id)


def get_topic_for_response():
@ -93,3 +93,22 @@ def extract_status_code_from_topic(topic):
    url_parts = topic_parts[1].split("/")
    status_code = url_parts[POS_STATUS_CODE_IN_TOPIC]
    return status_code
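As a quick illustration of what the parameterized helpers now emit, assuming _get_topic_base() returns "$dps/registrations/" (as the response-topic sample elsewhere in this module suggests); the request and operation IDs are made up:

```python
get_topic_for_register("PUT", "1b2c3d")
# -> "$dps/registrations/PUT/iotdps-register/?$rid=1b2c3d"

get_topic_for_query("GET", "1b2c3d", "4.abc.def")
# -> "$dps/registrations/GET/iotdps-get-operationstatus/?$rid=1b2c3d&operationId=4.abc.def"
```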
def get_optional_element(content, element_name, index=0):
"""
Gets an optional element from a json string or dictionary.
:param content: The content from which the element needs to be retrieved.
:param element_name: The name of the element
:param index: Optional index in case the return is a collection of elements.
"""
element = None if element_name not in content else content[element_name]
if element is None:
return None
else:
if isinstance(element, list):
return element[index]
elif isinstance(element, object):
return element
else:
return str(element)
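A small sketch of how this helper behaves on a decoded service response; the sample dictionary is illustrative:

```python
decoded = {"operationId": "4.abc.def", "status": "assigning", "errors": ["busy", "retry"]}

get_optional_element(decoded, "status")             # -> "assigning"
get_optional_element(decoded, "errors", 0)          # -> "busy" (index applies to list values)
get_optional_element(decoded, "registrationState")  # -> None when the key is absent
```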

@ -1,26 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from azure.iot.device.common.pipeline.pipeline_events_base import PipelineEvent
class RegistrationResponseEvent(PipelineEvent):
"""
A PipelineEvent object which represents an incoming RegistrationResponse event. This object is probably
created by some converter stage based on a pipeline-specific event
"""
def __init__(self, request_id, status_code, key_values, response_payload):
"""
Initializer for RegistrationResponse objects.
:param request_id : The id of the request to which the response arrived.
:param status_code: The status code received in the topic.
:param key_values: A dictionary containing keys mapped to a list of values that were extracted from the topic.
:param response_payload: The response received from a registration process
"""
super(RegistrationResponseEvent, self).__init__()
self.request_id = request_id
self.status_code = status_code
self.key_values = key_values
self.response_payload = response_payload

@ -16,7 +16,7 @@ class SetSymmetricKeySecurityClientOperation(PipelineOperation):
    very provisioning-specific
    """

    def __init__(self, security_client, callback):
        """
        Initializer for SetSecurityClient.
@ -41,7 +41,7 @@ class SetX509SecurityClientOperation(PipelineOperation):
    (such as a Provisioning client).
    """

    def __init__(self, security_client, callback):
        """
        Initializer for SetSecurityClient.
@ -71,9 +71,9 @@ class SetProvisioningClientConnectionArgsOperation(PipelineOperation):
        provisioning_host,
        registration_id,
        id_scope,
        callback,
        client_cert=None,
        sas_token=None,
    ):
        """
        Initializer for SetProvisioningClientConnectionArgsOperation.
@ -91,7 +91,7 @@ class SetProvisioningClientConnectionArgsOperation(PipelineOperation):
        self.sas_token = sas_token


class RegisterOperation(PipelineOperation):
    """
    A PipelineOperation object which contains arguments used to send a registration request
    to a Device Provisioning Service.
@ -99,22 +99,26 @@
    This operation is in the group of DPS operations because it is very specific to the DPS client.
    """

    def __init__(self, request_payload, registration_id, callback, registration_result=None):
        """
        Initializer for RegisterOperation objects.

        :param request_payload: The request that we are sending to the service
        :param registration_id: The registration ID used to uniquely identify a device in the Device Provisioning Service.
        :param Function callback: The function that gets called when this operation is complete or has failed.
            The callback function must accept a PipelineOperation object which indicates the specific operation which
            has completed or failed.
        """
        super(RegisterOperation, self).__init__(callback=callback)
        self.request_payload = request_payload
        self.registration_id = registration_id
        self.registration_result = registration_result
        self.retry_after_timer = None
        self.polling_timer = None
        self.provisioning_timeout_timer = None


class PollStatusOperation(PipelineOperation):
    """
    A PipelineOperation object which contains arguments used to send a registration request
    to a Device Provisioning Service.
@ -122,17 +126,20 @@
    This operation is in the group of DPS operations because it is very specific to the DPS client.
    """

    def __init__(self, operation_id, request_payload, callback, registration_result=None):
        """
        Initializer for PollStatusOperation objects.

        :param operation_id: The id of the existing operation for which the polling was started.
        :param request_payload: The request that we are sending to the service
        :param Function callback: The function that gets called when this operation is complete or has failed.
            The callback function must accept a PipelineOperation object which indicates the specific operation which
            has completed or failed.
        """
        super(PollStatusOperation, self).__init__(callback=callback)
        self.operation_id = operation_id
        self.request_payload = request_payload
        self.registration_result = registration_result
        self.retry_after_timer = None
        self.polling_timer = None
        self.provisioning_timeout_timer = None
@ -4,9 +4,23 @@
# license information. # license information.
# -------------------------------------------------------------------------- # --------------------------------------------------------------------------
from azure.iot.device.common.pipeline import pipeline_ops_base, operation_flow, pipeline_thread from azure.iot.device.common.pipeline import pipeline_ops_base, pipeline_thread
from azure.iot.device.common.pipeline.pipeline_stages_base import PipelineStage from azure.iot.device.common.pipeline.pipeline_stages_base import PipelineStage
from . import pipeline_ops_provisioning from . import pipeline_ops_provisioning
from azure.iot.device import exceptions
from azure.iot.device.provisioning.pipeline import constant
from azure.iot.device.provisioning.models.registration_result import (
RegistrationResult,
RegistrationState,
)
import logging
import weakref
import json
from threading import Timer
import time
from .mqtt_topic import get_optional_element
logger = logging.getLogger(__name__)
class UseSecurityClientStage(PipelineStage):
@ -18,33 +32,477 @@ class UseSecurityClientStage(PipelineStage):
    """

    @pipeline_thread.runs_on_pipeline_thread
    def _run_op(self, op):
        if isinstance(op, pipeline_ops_provisioning.SetSymmetricKeySecurityClientOperation):
            security_client = op.security_client
            worker_op = op.spawn_worker_op(
                worker_op_type=pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation,
                provisioning_host=security_client.provisioning_host,
                registration_id=security_client.registration_id,
                id_scope=security_client.id_scope,
                sas_token=security_client.get_current_sas_token(),
            )
            self.send_op_down(worker_op)

        elif isinstance(op, pipeline_ops_provisioning.SetX509SecurityClientOperation):
            security_client = op.security_client
            worker_op = op.spawn_worker_op(
                worker_op_type=pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation,
                provisioning_host=security_client.provisioning_host,
                registration_id=security_client.registration_id,
                id_scope=security_client.id_scope,
                client_cert=security_client.get_x509_certificate(),
            )
            self.send_op_down(worker_op)

        else:
            super(UseSecurityClientStage, self)._run_op(op)
class CommonProvisioningStage(PipelineStage):
"""
This is a super stage that the RegistrationStage and PollingStatusStage of
provisioning would both use. It contains some common functions like decoding response
and retrieving error, retrieving registration status, retrieving operation id
and forming a complete result.
"""
@pipeline_thread.runs_on_pipeline_thread
def _clear_timeout_timer(self, op, error):
"""
Clearing timer for provisioning operations (Register and PollStatus)
when they respond back from service.
"""
if op.provisioning_timeout_timer:
logger.debug("{}({}): Cancelling provisioning timeout timer".format(self.name, op.name))
op.provisioning_timeout_timer.cancel()
op.provisioning_timeout_timer = None
@staticmethod
def _decode_response(provisioning_op):
return json.loads(provisioning_op.response_body.decode("utf-8"))
@staticmethod
def _get_registration_status(decoded_response):
return get_optional_element(decoded_response, "status")
@staticmethod
def _get_operation_id(decoded_response):
return get_optional_element(decoded_response, "operationId")
@staticmethod
def _form_complete_result(operation_id, decoded_response, status):
"""
Create the registration result from the complete decoded json response for details regarding the registration process.
"""
decoded_state = get_optional_element(decoded_response, "registrationState")
registration_state = None
if decoded_state is not None:
registration_state = RegistrationState(
device_id=get_optional_element(decoded_state, "deviceId"),
assigned_hub=get_optional_element(decoded_state, "assignedHub"),
sub_status=get_optional_element(decoded_state, "substatus"),
created_date_time=get_optional_element(decoded_state, "createdDateTimeUtc"),
last_update_date_time=get_optional_element(decoded_state, "lastUpdatedDateTimeUtc"),
etag=get_optional_element(decoded_state, "etag"),
payload=get_optional_element(decoded_state, "payload"),
)
registration_result = RegistrationResult(
operation_id=operation_id, status=status, registration_state=registration_state
)
return registration_result
def _process_service_error_status_code(self, original_provisioning_op, request_response_op):
logger.error(
"{stage_name}({op_name}): Received error with status code {status_code} for {prov_op_name} request operation".format(
stage_name=self.name,
op_name=request_response_op.name,
prov_op_name=request_response_op.request_type,
status_code=request_response_op.status_code,
)
)
logger.error(
"{stage_name}({op_name}): Response body: {body}".format(
stage_name=self.name,
op_name=request_response_op.name,
body=request_response_op.response_body,
)
)
original_provisioning_op.complete(
error=exceptions.ServiceError(
"{prov_op_name} request returned a service error status code {status_code}".format(
prov_op_name=request_response_op.request_type,
status_code=request_response_op.status_code,
)
)
)
def _process_retry_status_code(self, error, original_provisioning_op, request_response_op):
retry_interval = (
int(request_response_op.retry_after, 10)
if request_response_op.retry_after is not None
else constant.DEFAULT_POLLING_INTERVAL
)
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_retry_after():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): retrying".format(
stage_name=this.name, op_name=request_response_op.name
)
)
original_provisioning_op.retry_after_timer.cancel()
original_provisioning_op.retry_after_timer = None
original_provisioning_op.completed = False
this.run_op(original_provisioning_op)
logger.warning(
"{stage_name}({op_name}): Op needs retry with interval {interval} because of {error}. Setting timer.".format(
stage_name=self.name,
op_name=request_response_op.name,
interval=retry_interval,
error=error,
)
)
logger.debug("{}({}): Creating retry timer".format(self.name, request_response_op.name))
original_provisioning_op.retry_after_timer = Timer(retry_interval, do_retry_after)
original_provisioning_op.retry_after_timer.start()
@staticmethod
def _process_failed_and_assigned_registration_status(
error,
operation_id,
decoded_response,
registration_status,
original_provisioning_op,
request_response_op,
):
complete_registration_result = CommonProvisioningStage._form_complete_result(
operation_id=operation_id, decoded_response=decoded_response, status=registration_status
)
original_provisioning_op.registration_result = complete_registration_result
if registration_status == "failed":
error = exceptions.ServiceError(
"Query Status operation returned a failed registration status with a status code of {status_code}".format(
status_code=request_response_op.status_code
)
)
original_provisioning_op.complete(error=error)
@staticmethod
def _process_unknown_registration_status(
registration_status, original_provisioning_op, request_response_op
):
error = exceptions.ServiceError(
"Query Status Operation encountered an invalid registration status {status} with a status code of {status_code}".format(
status=registration_status, status_code=request_response_op.status_code
)
)
original_provisioning_op.complete(error=error)
class PollingStatusStage(CommonProvisioningStage):
"""
This stage is responsible for sending the query request once initial response
is received from the registration response.
Upon the receipt of the response this stage decides whether
to send another query request or complete the procedure.
"""
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.PollStatusOperation):
query_status_op = op
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def query_timeout():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): returning timeout error".format(
stage_name=this.name, op_name=op.name
)
)
query_status_op.complete(
error=(
exceptions.ServiceError(
"Operation timed out before provisioning service could respond for {op_type} operation".format(
op_type=constant.QUERY
)
)
)
)
logger.debug("{}({}): Creating provisioning timeout timer".format(self.name, op.name))
query_status_op.provisioning_timeout_timer = Timer(
constant.DEFAULT_TIMEOUT_INTERVAL, query_timeout
)
query_status_op.provisioning_timeout_timer.start()
def on_query_response(op, error):
self._clear_timeout_timer(query_status_op, error)
logger.debug(
"{stage_name}({op_name}): Received response with status code {status_code} for PollStatusOperation with operation id {oper_id}".format(
stage_name=self.name,
op_name=op.name,
status_code=op.status_code,
oper_id=op.query_params["operation_id"],
)
)
if error:
logger.error(
"{stage_name}({op_name}): Received error for {prov_op_name} operation".format(
stage_name=self.name, op_name=op.name, prov_op_name=op.request_type
)
)
query_status_op.complete(error=error)
else:
if 300 <= op.status_code < 429:
self._process_service_error_status_code(query_status_op, op)
elif op.status_code >= 429:
self._process_retry_status_code(error, query_status_op, op)
else:
decoded_response = self._decode_response(op)
operation_id = self._get_operation_id(decoded_response)
registration_status = self._get_registration_status(decoded_response)
if registration_status == "assigning":
polling_interval = (
int(op.retry_after, 10)
if op.retry_after is not None
else constant.DEFAULT_POLLING_INTERVAL
)
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_polling():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): retrying".format(
stage_name=this.name, op_name=op.name
)
)
query_status_op.polling_timer.cancel()
query_status_op.polling_timer = None
query_status_op.completed = False
this.run_op(query_status_op)
logger.info(
"{stage_name}({op_name}): Op needs retry with interval {interval} because of {error}. Setting timer.".format(
stage_name=self.name,
op_name=op.name,
interval=polling_interval,
error=error,
)
)
logger.debug(
"{}({}): Creating polling timer".format(self.name, op.name)
)
query_status_op.polling_timer = Timer(polling_interval, do_polling)
query_status_op.polling_timer.start()
elif registration_status == "assigned" or registration_status == "failed":
self._process_failed_and_assigned_registration_status(
error=error,
operation_id=operation_id,
decoded_response=decoded_response,
registration_status=registration_status,
original_provisioning_op=query_status_op,
request_response_op=op,
)
else:
self._process_unknown_registration_status(
registration_status=registration_status,
original_provisioning_op=query_status_op,
request_response_op=op,
)
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.QUERY,
method="GET",
resource_location="/",
query_params={"operation_id": query_status_op.operation_id},
request_body=query_status_op.request_payload,
callback=on_query_response,
)
)
else:
super(PollingStatusStage, self)._run_op(op)
class RegistrationStage(CommonProvisioningStage):
"""
This is the first stage; it converts a registration request
into a normal request and response operation.
Upon receipt of the response this stage decides whether
to send another registration request or a query request.
Depending on the status and result of the response
this stage may also complete the registration process.
"""
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.RegisterOperation):
initial_register_op = op
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def register_timeout():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): returning timeout error".format(
stage_name=this.name, op_name=op.name
)
)
initial_register_op.complete(
error=(
exceptions.ServiceError(
"Operation timed out before provisioning service could respond for {op_type} operation".format(
op_type=constant.REGISTER
)
)
)
)
logger.debug("{}({}): Creating provisioning timeout timer".format(self.name, op.name))
initial_register_op.provisioning_timeout_timer = Timer(
constant.DEFAULT_TIMEOUT_INTERVAL, register_timeout
)
initial_register_op.provisioning_timeout_timer.start()
def on_registration_response(op, error):
self._clear_timeout_timer(initial_register_op, error)
logger.debug(
"{stage_name}({op_name}): Received response with status code {status_code} for RegisterOperation".format(
stage_name=self.name, op_name=op.name, status_code=op.status_code
)
)
if error:
logger.error(
"{stage_name}({op_name}): Received error for {prov_op_name} operation".format(
stage_name=self.name, op_name=op.name, prov_op_name=op.request_type
)
)
initial_register_op.complete(error=error)
else:
if 300 <= op.status_code < 429:
self._process_service_error_status_code(initial_register_op, op)
elif op.status_code >= 429:
self._process_retry_status_code(error, initial_register_op, op)
else:
decoded_response = self._decode_response(op)
operation_id = self._get_operation_id(decoded_response)
registration_status = self._get_registration_status(decoded_response)
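# Illustrative shape of a decoded "assigning" response (real service payloads may carry
# additional fields):
#   {"operationId": "4.xxx.yyy", "status": "assigning"}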
if registration_status == "assigning":
self_weakref = weakref.ref(self)
def copy_result_to_original_op(op, error):
logger.debug(
"Copying registration result from Query Status Op to Registration Op"
)
initial_register_op.registration_result = op.registration_result
initial_register_op.error = error
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_query_after_interval():
this = self_weakref()
initial_register_op.polling_timer.cancel()
initial_register_op.polling_timer = None
logger.info(
"{stage_name}({op_name}): polling".format(
stage_name=this.name, op_name=op.name
)
)
query_worker_op = initial_register_op.spawn_worker_op(
worker_op_type=pipeline_ops_provisioning.PollStatusOperation,
request_payload=" ",
operation_id=operation_id,
callback=copy_result_to_original_op,
)
self.send_op_down(query_worker_op)
logger.warning(
"{stage_name}({op_name}): Op will transition into polling after interval {interval}. Setting timer.".format(
stage_name=self.name,
op_name=op.name,
interval=constant.DEFAULT_POLLING_INTERVAL,
)
)
logger.debug(
"{}({}): Creating polling timer".format(self.name, op.name)
)
initial_register_op.polling_timer = Timer(
constant.DEFAULT_POLLING_INTERVAL, do_query_after_interval
)
initial_register_op.polling_timer.start()
elif registration_status == "failed" or registration_status == "assigned":
self._process_failed_and_assigned_registration_status(
error=error,
operation_id=operation_id,
decoded_response=decoded_response,
registration_status=registration_status,
original_provisioning_op=initial_register_op,
request_response_op=op,
)
else:
self._process_unknown_registration_status(
registration_status=registration_status,
original_provisioning_op=initial_register_op,
request_response_op=op,
)
registration_payload = DeviceRegistrationPayload(
registration_id=initial_register_op.registration_id,
custom_payload=initial_register_op.request_payload,
)
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.REGISTER,
method="PUT",
resource_location="/",
request_body=registration_payload.get_json_string(),
callback=on_registration_response,
)
)
else:
super(RegistrationStage, self)._run_op(op)
class DeviceRegistrationPayload(object):
"""
The class representing the payload that needs to be sent to the service.
"""
def __init__(self, registration_id, custom_payload=None):
# Naming the attribute "registrationId" does not follow Python convention, but the
# DPS service spec requires this exact name for the request to work
self.registrationId = registration_id
self.payload = custom_payload
def get_json_string(self):
return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True)
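# Illustrative example of the JSON produced above (keys are sorted, and "registrationId"
# is spelled exactly as the service expects):
#   DeviceRegistrationPayload("my-device", {"foo": "bar"}).get_json_string()
#   => '{"payload": {"foo": "bar"}, "registrationId": "my-device"}'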

@ -10,32 +10,31 @@ from azure.iot.device.common.pipeline import (
pipeline_ops_base,
pipeline_ops_mqtt,
pipeline_events_mqtt,
pipeline_thread,
pipeline_events_base,
)
from azure.iot.device.common.pipeline.pipeline_stages_base import PipelineStage
from azure.iot.device.provisioning.pipeline import mqtt_topic
from azure.iot.device.provisioning.pipeline import pipeline_ops_provisioning
from azure.iot.device import constant as pkg_constant
from . import constant as pipeline_constant
from azure.iot.device.product_info import ProductInfo
logger = logging.getLogger(__name__)
class ProvisioningMQTTTranslationStage(PipelineStage):
"""
PipelineStage which converts other Provisioning pipeline operations into MQTT operations. This stage also
converts MQTT pipeline events into Provisioning pipeline events.
"""
def __init__(self):
super(ProvisioningMQTTTranslationStage, self).__init__()
self.action_to_topic = {}
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation):
# get security client args from above, save some, use some to build topic names,
@ -44,7 +43,7 @@ class ProvisioningMQTTTranslationStage(PipelineStage):
client_id = op.registration_id
query_param_seq = [
("api-version", pkg_constant.PROVISIONING_API_VERSION),
("ClientVersion", ProductInfo.get_provisioning_user_agent()),
]
username = "{id_scope}/registrations/{registration_id}/{query_params}".format(
id_scope=op.id_scope,
@ -54,61 +53,59 @@ class ProvisioningMQTTTranslationStage(PipelineStage):
hostname = op.provisioning_host
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation,
client_id=client_id,
hostname=hostname,
username=username,
client_cert=op.client_cert,
sas_token=op.sas_token,
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.RequestOperation):
if op.request_type == pipeline_constant.REGISTER:
topic = mqtt_topic.get_topic_for_register(
method=op.method, request_id=op.request_id
)
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
topic=topic,
payload=op.request_body,
)
self.send_op_down(worker_op)
else:
topic = mqtt_topic.get_topic_for_query(
method=op.method,
request_id=op.request_id,
operation_id=op.query_params["operation_id"],
)
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
topic=topic,
payload=op.request_body,
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.EnableFeatureOperation):
# Enabling for register gets translated into an MQTT subscribe operation
topic = mqtt_topic.get_topic_for_subscribe()
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTSubscribeOperation, topic=topic
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.DisableFeatureOperation):
# Disabling a register response gets turned into an MQTT unsubscribe operation
topic = mqtt_topic.get_topic_for_subscribe()
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTUnsubscribeOperation, topic=topic
)
self.send_op_down(worker_op)
else:
# All other operations get passed down
super(ProvisioningMQTTTranslationStage, self)._run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
@ -126,22 +123,22 @@ class ProvisioningMQTTTranslationStage(PipelineStage):
)
)
key_values = mqtt_topic.extract_properties_from_topic(topic)
retry_after = mqtt_topic.get_optional_element(key_values, "retry-after", 0)
status_code = mqtt_topic.extract_status_code_from_topic(topic)
request_id = key_values["rid"][0]
self.send_event_up(
pipeline_events_base.ResponseEvent(
request_id=request_id,
status_code=int(status_code, 10),
response_body=event.payload,
retry_after=retry_after,
)
)
else:
logger.warning("Unknown topic: {} passing up to next handler".format(topic))
self.send_event_up(event)
else:
# all other messages get passed up
super(ProvisioningMQTTTranslationStage, self)._handle_pipeline_event(event)

@ -13,46 +13,94 @@ from azure.iot.device.provisioning.pipeline import (
pipeline_stages_provisioning,
pipeline_stages_provisioning_mqtt,
)
from azure.iot.device.provisioning.pipeline import pipeline_ops_provisioning
from azure.iot.device.provisioning.security import SymmetricKeySecurityClient, X509SecurityClient
from azure.iot.device.provisioning.pipeline import constant as provisioning_constants
logger = logging.getLogger(__name__)
class ProvisioningPipeline(object):
def __init__(self, security_client, pipeline_configuration):
"""
Constructor for instantiating a pipeline
:param security_client: The security client which stores credentials
"""
self.responses_enabled = {provisioning_constants.REGISTER: False}
# Event Handlers - Will be set by Client after instantiation of pipeline
self.on_connected = None
self.on_disconnected = None
self.on_message_received = None
self._registration_id = security_client.registration_id
self._pipeline = (
#
# The root is always the root. By definition, it's the first stage in the pipeline.
#
pipeline_stages_base.PipelineRootStage(pipeline_configuration=pipeline_configuration)
#
# UseSecurityClientStage comes near the root by default because it doesn't need to be after
# anything, but it does need to be before ProvisioningMQTTTranslationStage.
#
.append_stage(pipeline_stages_provisioning.UseSecurityClientStage())
#
# RegistrationStage needs to come early because this is the stage that converts registration
# or query requests into request and response objects which are used by later stages
#
.append_stage(pipeline_stages_provisioning.RegistrationStage())
#
# PollingStatusStage needs to come after RegistrationStage because RegistrationStage counts
# on PollingStatusStage to poll until the registration is complete.
#
.append_stage(pipeline_stages_provisioning.PollingStatusStage())
#
# CoordinateRequestAndResponseStage needs to be after RegistrationStage and PollingStatusStage
# because these 2 stages create the request ops that CoordinateRequestAndResponseStage
# is coordinating. It needs to be before ProvisioningMQTTTranslationStage because that stage
# operates on ops that CoordinateRequestAndResponseStage produces
#
.append_stage(pipeline_stages_base.CoordinateRequestAndResponseStage())
#
# ProvisioningMQTTTranslationStage comes here because this is the point where we can translate
# all operations directly into MQTT. After this stage, only pipeline_stages_base stages
# are allowed because ProvisioningMQTTTranslationStage removes all the provisioning-ness from the ops
#
.append_stage(pipeline_stages_provisioning_mqtt.ProvisioningMQTTTranslationStage())
#
# AutoConnectStage comes here because only MQTT ops have the need_connection flag set
# and this is the first place in the pipeline where we can guarantee that all network
# ops are MQTT ops.
#
.append_stage(pipeline_stages_base.AutoConnectStage())
#
# ReconnectStage needs to be after AutoConnectStage because ReconnectStage sets/clears
# the virtually_connected flag and we want an automatic connection op to set this flag so
# we can reconnect autoconnect operations.
#
.append_stage(pipeline_stages_base.ReconnectStage())
#
# ConnectionLockStage needs to be after ReconnectStage because we want any ops that
# ReconnectStage creates to go through the ConnectionLockStage gate
#
.append_stage(pipeline_stages_base.ConnectionLockStage())
#
# RetryStage needs to be near the end because it's retrying low-level MQTT operations.
#
.append_stage(pipeline_stages_base.RetryStage())
#
# OpTimeoutStage needs to be after RetryStage because OpTimeoutStage returns the timeout
# errors that RetryStage is watching for.
#
.append_stage(pipeline_stages_base.OpTimeoutStage())
#
# MQTTTransportStage needs to be at the very end of the pipeline because this is where
# operations turn into network traffic
#
.append_stage(pipeline_stages_mqtt.MQTTTransportStage())
)
def _on_pipeline_event(event):
logger.warning("Dropping unknown pipeline event {}".format(event.name))
def _on_connected():
@ -82,24 +130,25 @@
self._pipeline.run_op(op)
callback.wait_for_completion()
def connect(self, callback=None):
"""
Connect to the service.
:param callback: callback which is called when the connection to the service is complete.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ProtocolClientError`
""" """
logger.info("connect called") logger.info("connect called")
def pipeline_callback(call): def pipeline_callback(op, error):
if call.error: callback(error=error)
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
self._pipeline.run_op(pipeline_ops_base.ConnectOperation(callback=pipeline_callback)) self._pipeline.run_op(pipeline_ops_base.ConnectOperation(callback=pipeline_callback))
@ -108,83 +157,60 @@ class ProvisioningPipeline(object):
Disconnect from the service.
:param callback: callback which is called when the connection to the service has been disconnected
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
""" """
logger.info("disconnect called") logger.info("disconnect called")
def pipeline_callback(call): def pipeline_callback(op, error):
if call.error: callback(error=error)
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
self._pipeline.run_op(pipeline_ops_base.DisconnectOperation(callback=pipeline_callback)) self._pipeline.run_op(pipeline_ops_base.DisconnectOperation(callback=pipeline_callback))
def send_request(self, request_id, request_payload, operation_id=None, callback=None):
"""
Send a request to the Device Provisioning Service.
:param request_id: The id of the request
:param request_payload: The request which is to be sent.
:param operation_id: The id of the operation.
:param callback: callback which is called when the message publish has been acknowledged by the service.
"""
def pipeline_callback(call):
if call.error:
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
op = None
if operation_id is not None:
op = pipeline_ops_provisioning.SendQueryRequestOperation(
request_id=request_id,
operation_id=operation_id,
request_payload=request_payload,
callback=pipeline_callback,
)
else:
op = pipeline_ops_provisioning.SendRegistrationRequestOperation(
request_id=request_id, request_payload=request_payload, callback=pipeline_callback
)
self._pipeline.run_op(op)
def enable_responses(self, callback=None):
"""
Enable responses from the DPS service by subscribing to the appropriate topics.
:param callback: callback which is called when responses are enabled
"""
logger.debug("enable_responses called")
self.responses_enabled[provisioning_constants.REGISTER] = True
def pipeline_callback(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_base.EnableFeatureOperation(feature_name=None, callback=pipeline_callback)
)
def register(self, payload=None, callback=None):
"""
Register to the device provisioning service.
:param payload: Payload that can be sent with the registration request.
:param callback: callback which is called when the registration is done.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ProtocolClientError`
""" """
logger.debug("disable_responses called")
def pipeline_callback(call): def on_complete(op, error):
if call.error: # TODO : Apparently when its failed we can get result as well as error.
# TODO we need error semantics on the client if error:
exit(1) callback(error=error, result=None)
if callback: else:
callback() callback(result=op.registration_result)
self._pipeline.run_op( self._pipeline.run_op(
pipeline_ops_base.DisableFeatureOperation(feature_name=None, callback=pipeline_callback) pipeline_ops_provisioning.RegisterOperation(
request_payload=payload, registration_id=self._registration_id, callback=on_complete
)
) )

@ -4,63 +4,93 @@
# license information.
# --------------------------------------------------------------------------
"""
This module contains the user-facing synchronous Provisioning Device Client for the Azure
Provisioning Device SDK. This client uses Symmetric Key and X509 authentication to register
devices with an IoT Hub via the Device Provisioning Service.
"""
import logging
from azure.iot.device.common.evented_callback import EventedCallback
from .abstract_provisioning_device_client import AbstractProvisioningDeviceClient
from .abstract_provisioning_device_client import log_on_register_complete
from azure.iot.device.provisioning.pipeline import constant as dps_constant
from .pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
logger = logging.getLogger(__name__)
def handle_result(callback):
try:
return callback.wait_for_completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
class ProvisioningDeviceClient(AbstractProvisioningDeviceClient):
"""
Client which can be used to run the registration of a device with the provisioning service
using Symmetric Key or X509 authentication.
"""
def register(self):
"""
Register the device with the provisioning service.
This is a synchronous call, meaning that this function will not return until the
registration process has completed successfully or the attempt has resulted in a failure.
Before returning, the client will also disconnect from the provisioning service.
If a registration attempt is made while a previous registration is in progress it may
throw an error.
:returns: RegistrationResult indicating the result of the registration.
:rtype: :class:`azure.iot.device.RegistrationResult`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Registering with Provisioning Service...")
if not self._provisioning_pipeline.responses_enabled[dps_constant.REGISTER]:
self._enable_responses()
register_complete = EventedCallback(return_arg_name="result")
self._provisioning_pipeline.register(
payload=self._provisioning_payload, callback=register_complete
)
result = handle_result(register_complete)
log_on_register_complete(result)
return result
def _enable_responses(self):
"""Enable receiving responses from the Device Provisioning Service.
This is a synchronous call, meaning that this function will not return until the feature
has been enabled.
"""
logger.info("Enabling reception of response from Device Provisioning Service...")
subscription_complete = EventedCallback()
self._provisioning_pipeline.enable_responses(callback=subscription_complete)
handle_result(subscription_complete)
logger.info("Successfully subscribed to Device Provisioning Service to receive responses")

Binary data: azure-iot-device/doc/images/azure_iot_sdk_python_banner.png (new file, 15 KiB - image not shown)
@ -11,6 +11,7 @@ This directory contains samples showing how to use the various features of the M
```bash
az iot hub create --resource-group <your resource group> --name <your IoT Hub name>
```
* Note that this operation may take a few minutes.
2. Add the IoT Extension to the Azure CLI, and then [register a device identity](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-create)
@ -20,14 +21,15 @@ This directory contains samples showing how to use the various features of the M
az iot hub device-identity create --hub-name <your IoT Hub name> --device-id <your device id>
```
3. [Retrieve your Device Connection String](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-show-connection-string) using the Azure CLI
```bash
az iot hub device-identity show-connection-string --device-id <your device id> --hub-name <your IoT Hub name>
```
It should be in the format:
```Text
HostName=<your IoT Hub name>.azure-devices.net;DeviceId=<your device id>;SharedAccessKey=<some value>
```
@ -39,13 +41,16 @@ This directory contains samples showing how to use the various features of the M
5. On your device, set the Device Connection String as an environment variable called `IOTHUB_DEVICE_CONNECTION_STRING`.
**Windows (cmd)**
```cmd
set IOTHUB_DEVICE_CONNECTION_STRING=<your connection string here>
```
* Note that there are **NO** quotation marks around the connection string.
**Linux (bash)**
```bash
export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>"
```
@ -56,7 +61,6 @@ This directory contains samples showing how to use the various features of the M
import os
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient
async def main():
@ -94,6 +98,7 @@ This directory contains samples showing how to use the various features of the M
8. Your device is now able to connect to Azure IoT Hub!
## Additional Samples
Further samples with more complex IoT Hub scenarios are contained in the [advanced-hub-scenarios](advanced-hub-scenarios) directory, including:
* Send multiple telemetry messages from a Device
@ -101,10 +106,10 @@ Further samples with more complex IoT Hub scenarios are contained in the [advanc
* Send and receive updates to device twin
* Receive direct method invocations
Further samples with more complex IoT Edge scenarios are contained in the [advanced-edge-scenarios](advanced-edge-scenarios) directory, including:
* Send multiple telemetry messages from a Module
* Receive input messages on a Module
* Send messages to a Module Output
Samples for the synchronous clients, intended for use with Python 2.7 or compatibility scenarios for Python 3.5+ are contained in the [sync-samples](sync-samples) directory.

@ -1,59 +0,0 @@
# Advanced IoT Hub Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub.
**These samples are written to run in Python 3.7+**, but can be made to work with Python 3.5 and 3.6 with a slight modification as noted in each sample:
```python
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()
```
## Included Samples
### IoTHub Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
* In order to send an update patch to a device twin's desired properties, use the following Azure CLI command:
```
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```
az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
### DPS Samples
In order to use these samples, you **must** have the following environment variables :-
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are 2 ways that your device can get registered to the provisioning service, differing in authentication mechanism, and each requires an additional environment variable for its sample:
* [register_symmetric_key.py](register_symmetric_key.py) - Register to provisioning service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [register_x509.py](register_x509.py) - Register to provisioning service using an X509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.

@ -0,0 +1,43 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# -------------------------------------------------------------------------
import asyncio
import time
import uuid
from azure.iot.device.aio import IoTHubModuleClient
from azure.iot.device import Message
messages_to_send = 10
async def main():
# Inputs/Outputs are only supported in the context of Azure IoT Edge and module client
# The module client object acts as an Azure IoT Edge module and interacts with an Azure IoT Edge hub
module_client = IoTHubModuleClient.create_from_edge_environment()
# Connect the client.
await module_client.connect()
fake_method_params = {
"methodName": "doSomethingInteresting",
"payload": "foo",
"responseTimeoutInSeconds": 5,
"connectTimeoutInSeconds": 2,
}
response = await module_client.invoke_method(
device_id="fakeDeviceId", module_id="fakeModuleId", method_params=fake_method_params
)
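# NOTE: "fakeDeviceId" and "fakeModuleId" above are placeholders - substitute the ids of the
# device and module you actually want to invoke the method on.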
print("Method Response: {}".format(response))
# finally, disconnect
await module_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

@ -0,0 +1,83 @@
# Advanced IoT Hub Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub.
**These samples are written to run in Python 3.7+**, but can be made to work with Python 3.5 and 3.6 with a slight modification as noted in each sample:
```python
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()
```
## Included Samples
### IoTHub Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```bash
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```bash
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
* In order to send an update patch to a device twin's desired properties, use the following Azure CLI command:
```bash
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```bash
az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
### DPS Samples
#### Individual
In order to use these samples, you **must** set the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are 2 ways that your device can get registered to the provisioning service, differing in authentication mechanism, and each requires an additional environment variable for its samples:
* [provision_symmetric_key.py](provision_symmetric_key.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_symmetric_key_and_send_telemetry.py](provision_symmetric_key_and_send_telemetry.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key, then send a telemetry message to IoTHub. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_symmetric_key_with_payload.py](provision_symmetric_key_with_payload.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key while supplying a custom payload. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_x509.py](provision_x509.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using an X509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.
* [provision_x509_and_send_telemetry.py](provision_x509_and_send_telemetry.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using an X509 certificate, then send a telemetry message to IoTHub. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.
#### Group
In order to use these samples, you **must** set the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* [provision_symmetric_key_group.py](provision_symmetric_key_group.py) - Provision multiple devices to IoTHub by registering them to the Device Provisioning Service using derived symmetric keys. For this you must have the environment variables PROVISIONING_MASTER_SYMMETRIC_KEY, PROVISIONING_DEVICE_ID_1, PROVISIONING_DEVICE_ID_2, PROVISIONING_DEVICE_ID_3.
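For example, on Linux (bash) the group provisioning variables might be exported like this (all values below are placeholders - use your own DPS instance, enrollment group key, and device ids):
```bash
export PROVISIONING_HOST="<your provisioning host>.azure-devices-provisioning.net"
export PROVISIONING_IDSCOPE="<your id scope>"
export PROVISIONING_MASTER_SYMMETRIC_KEY="<your enrollment group primary key>"
export PROVISIONING_DEVICE_ID_1="<first device id>"
export PROVISIONING_DEVICE_ID_2="<second device id>"
export PROVISIONING_DEVICE_ID_3="<third device id>"
```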

@ -15,7 +15,6 @@ symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY")
async def main():
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
@ -23,10 +22,8 @@ async def main():
symmetric_key=symmetric_key,
)
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)

@ -0,0 +1,70 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import asyncio
from azure.iot.device.aio import ProvisioningDeviceClient
import os
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import Message
import uuid
messages_to_send = 10
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("PROVISIONING_REGISTRATION_ID")
symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY")
async def main():
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)
if registration_result.status == "assigned":
print("Will send telemetry from the provisioned device")
device_client = IoTHubDeviceClient.create_from_symmetric_key(
symmetric_key=symmetric_key,
hostname=registration_result.registration_state.assigned_hub,
device_id=registration_result.registration_state.device_id,
)
# Connect the client.
await device_client.connect()
async def send_test_message(i):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.correlation_id = "correlation-1234"
msg.custom_properties["count"] = i
msg.custom_properties["tornado-warning"] = "yes"
await device_client.send_message(msg)
print("done sending message #" + str(i))
# send `messages_to_send` messages in parallel
await asyncio.gather(*[send_test_message(i) for i in range(1, messages_to_send + 1)])
# finally, disconnect
await device_client.disconnect()
else:
print("Can not send telemetry from the provisioned device")
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

@ -0,0 +1,87 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
import base64
import hmac
import hashlib
from azure.iot.device.aio import ProvisioningDeviceClient
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
# These are the names of the devices that will eventually show up on the IoTHub
device_id_1 = os.getenv("PROVISIONING_DEVICE_ID_1")
device_id_2 = os.getenv("PROVISIONING_DEVICE_ID_2")
device_id_3 = os.getenv("PROVISIONING_DEVICE_ID_3")
# For computation of device keys
device_ids_to_keys = {}
# NOTE : Only for illustration purposes.
# This is how a device key can be derived from the group symmetric key.
# This is just a helper function to show how it is done.
# Please don't directly store the group key on the device.
# Use the following method to compute the device key somewhere other than the device.
def derive_device_key(device_id, group_symmetric_key):
"""
The unique device ID and the group master key should be encoded into "utf-8"
After this the encoded group master key must be used to compute an HMAC-SHA256 of the encoded registration ID.
Finally the result must be converted into Base64 format.
The device key is the "utf-8" decoding of the above result.
"""
message = device_id.encode("utf-8")
signing_key = base64.b64decode(group_symmetric_key.encode("utf-8"))
signed_hmac = hmac.HMAC(signing_key, message, hashlib.sha256)
device_key_encoded = base64.b64encode(signed_hmac.digest())
return device_key_encoded.decode("utf-8")
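# Example usage (illustrative only - run this somewhere other than the device itself):
#   derived_device_key_1 = derive_device_key(device_id_1, "<enrollment group master key>")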
# derived_device_key has already been computed elsewhere using the helper function above,
# NOT in this sample. Do not use the master key directly in this sample to compute a device key.
derived_device_key_1 = "some_value_already_computed"
derived_device_key_2 = "some_value_already_computed"
derived_device_key_3 = "some_value_already_computed"
device_ids_to_keys[device_id_1] = derived_device_key_1
device_ids_to_keys[device_id_2] = derived_device_key_2
device_ids_to_keys[device_id_3] = derived_device_key_3
async def main():
async def register_device(registration_id):
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=device_ids_to_keys[registration_id],
)
return await provisioning_device_client.register()
results = await asyncio.gather(
register_device(device_id_1),
register_device(device_id_2),
register_device(device_id_3),
)
for index in range(0, len(device_ids_to_keys)):
registration_result = results[index]
print("The complete state of registration result is")
print(registration_result.registration_state)
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

@ -0,0 +1,47 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device.aio import ProvisioningDeviceClient
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("PROVISIONING_REGISTRATION_ID_PAYLOAD")
symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY_PAYLOAD")
class Wizard(object):
def __init__(self, first_name, last_name, dict_of_stuff):
self.first_name = first_name
self.last_name = last_name
self.props = dict_of_stuff
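# NOTE: the payload object is serialized to JSON (via its attribute dictionary) when the
# registration request is sent, so a simple object like the Wizard above works as a payload.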
async def main():
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
properties = {"House": "Gryffindor", "Muggle-Born": "False"}
wizard_a = Wizard("Harry", "Potter", properties)
provisioning_device_client.provisioning_payload = wizard_a
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

@ -3,10 +3,6 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device import X509
@ -18,7 +14,6 @@ registration_id = os.getenv("DPS_X509_REGISTRATION_ID")
async def main():
x509 = X509(
cert_file=os.getenv("X509_CERT_FILE"),
key_file=os.getenv("X509_KEY_FILE"),
@ -31,10 +26,8 @@ async def main():
x509=x509,
)
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)

@ -0,0 +1,76 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device import X509
from azure.iot.device.aio import ProvisioningDeviceClient
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import Message
import uuid
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("DPS_X509_REGISTRATION_ID")
messages_to_send = 10
async def main():
x509 = X509(
cert_file=os.getenv("X509_CERT_FILE"),
key_file=os.getenv("X509_KEY_FILE"),
pass_phrase=os.getenv("PASS_PHRASE"),
)
provisioning_device_client = ProvisioningDeviceClient.create_from_x509_certificate(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
x509=x509,
)
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)
if registration_result.status == "assigned":
print("Will send telemetry from the provisioned device")
device_client = IoTHubDeviceClient.create_from_x509_certificate(
x509=x509,
hostname=registration_result.registration_state.assigned_hub,
device_id=registration_result.registration_state.device_id,
)
# Connect the client.
await device_client.connect()
async def send_test_message(i):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.correlation_id = "correlation-1234"
msg.custom_properties["count"] = i
msg.custom_properties["tornado-warning"] = "yes"
await device_client.send_message(msg)
print("done sending message #" + str(i))
# send `messages_to_send` messages in parallel
await asyncio.gather(*[send_test_message(i) for i in range(1, messages_to_send + 1)])
# finally, disconnect
await device_client.disconnect()
else:
print("Can not send telemetry from the provisioned device")
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()


@ -0,0 +1,35 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient
async def main():
# Fetch the connection string from an environment variable
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
# Create instance of the device client using the connection string
device_client = IoTHubDeviceClient.create_from_connection_string(conn_str, websockets=True)
# We do not need to call device_client.connect(), since it will be connected when we send a message.
# Send a single message
print("Sending message...")
await device_client.send_message("This is a message that is being sent")
print("Message successfully sent!")
# Finally, we do not need a disconnect. When the program completes, the client will be disconnected and destroyed.
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()


@ -0,0 +1,55 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
import uuid
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import Message, ProxyOptions
import socks
messages_to_send = 10
async def main():
# The connection string for a device should never be stored in code. For the sake of simplicity we're using an environment variable here.
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
proxy_opts = ProxyOptions(
proxy_type=socks.HTTP, proxy_addr="127.0.0.1", proxy_port=8888 # localhost
)
# The client object is used to interact with your Azure IoT hub.
device_client = IoTHubDeviceClient.create_from_connection_string(
conn_str, websockets=True, proxy_options=proxy_opts
)
# Connect the client.
await device_client.connect()
async def send_test_message(i):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.correlation_id = "correlation-1234"
msg.custom_properties["tornado-warning"] = "yes"
await device_client.send_message(msg)
print("done sending message #" + str(i))
# send `messages_to_send` messages in parallel
await asyncio.gather(*[send_test_message(i) for i in range(1, messages_to_send + 1)])
# finally, disconnect
await device_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()
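Not shown in the sample above: `ProxyOptions` can also carry credentials for proxies that require authentication. The following is a minimal sketch rather than one of the repository samples; it assumes a SOCKS5 proxy on 127.0.0.1:1080 and that `ProxyOptions` accepts `proxy_username`/`proxy_password` keyword arguments.

```python
# A minimal sketch (not one of the repository samples): connect through a
# SOCKS5 proxy that requires credentials. Address, port, and credentials are
# placeholders; proxy_username/proxy_password are assumed parameters.
import os

import socks
from azure.iot.device import ProxyOptions
from azure.iot.device.aio import IoTHubDeviceClient


def create_client_behind_authenticated_proxy():
    proxy_opts = ProxyOptions(
        proxy_type=socks.SOCKS5,
        proxy_addr="127.0.0.1",  # placeholder proxy host
        proxy_port=1080,  # placeholder proxy port
        proxy_username="proxy_user",  # placeholder credential (assumed parameter)
        proxy_password="proxy_password",  # placeholder credential (assumed parameter)
    )
    conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
    # websockets=True keeps the MQTT traffic on port 443 so it can traverse the proxy
    return IoTHubDeviceClient.create_from_connection_string(
        conn_str, websockets=True, proxy_options=proxy_opts
    )
```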


@ -0,0 +1,121 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import uuid
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient, IoTHubModuleClient
from azure.iot.device import X509
import http.client
import pprint
import json
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
"""
Welcome to the Upload to Blob sample. To use this sample you must have azure-storage-blob installed in your Python environment.
To do this, you can run:
$ pip install azure-storage-blob
This sample covers using the following Device Client APIs:
get_storage_info_for_blob
- used to get relevant information from IoT Hub about a linked Storage Account, including
a hostname, a container name, a blob name, and a sas token. Additionally it returns a correlation_id
which is used in the notify_blob_upload_status, since the correlation_id is IoT Hub's way of marking
which blob you are working on.
notify_blob_upload_status
- used to notify IoT Hub of the status of your blob storage operation. This uses the correlation_id obtained
by the get_storage_info_for_blob task, and will tell IoT Hub to notify any service that might be listening for a notification on the
status of the file upload task.
You can learn more about File Upload with IoT Hub here:
https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-file-upload
"""
# Host is in format "<iothub name>.azure-devices.net"
async def storage_blob(blob_info):
try:
print("Azure Blob storage v12 - Python quickstart sample")
sas_url = "https://{}/{}/{}{}".format(
blob_info["hostName"],
blob_info["containerName"],
blob_info["blobName"],
blob_info["sasToken"],
)
blob_client = BlobClient.from_blob_url(sas_url)
# Create a file in a local "data" directory to upload
local_file_name = "data/quickstart" + str(uuid.uuid4()) + ".txt"
filename = os.path.join(os.path.dirname(os.path.realpath(__file__)), local_file_name)
# Write text to the file
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
file = open(filename, "w")
file.write("Hello, World!")
file.close()
print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
# Upload the created file
with open(filename, "rb") as f:
result = blob_client.upload_blob(f)
return (None, result)
except Exception as ex:
print("Exception:")
print(ex)
return ex
async def main():
hostname = os.getenv("IOTHUB_HOSTNAME")
device_id = os.getenv("IOTHUB_DEVICE_ID")
x509 = X509(
cert_file=os.getenv("X509_CERT_FILE"),
key_file=os.getenv("X509_KEY_FILE"),
pass_phrase=os.getenv("PASS_PHRASE"),
)
device_client = IoTHubDeviceClient.create_from_x509_certificate(
hostname=hostname, device_id=device_id, x509=x509
)
# device_client = IoTHubModuleClient.create_from_connection_string(conn_str)
# Connect the client.
await device_client.connect()
# await device_client.get_storage_info_for_blob("fake_device", "fake_method_params")
# get the storage sas
blob_name = "fakeBlobName12"
storage_info = await device_client.get_storage_info_for_blob(blob_name)
# upload to blob
connection = http.client.HTTPSConnection(hostname)
connection.connect()
# notify iot hub of blob upload result
# await device_client.notify_upload_result(storage_blob_result)
storage_blob_result = await storage_blob(storage_info)
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(storage_blob_result)
connection.close()
await device_client.notify_blob_upload_status(
storage_info["correlationId"], True, 200, "fake status description"
)
# Finally, disconnect
await device_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())
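The sample above only reports a successful upload. Below is a minimal sketch of the failure path, reusing the `get_storage_info_for_blob` / `notify_blob_upload_status` calls shown above; `upload_via_storage_blob` is a hypothetical stand-in for the `storage_blob()` coroutine in the sample.

```python
# A minimal sketch (not part of the sample above): notify IoT Hub of failure
# as well as success. `upload_via_storage_blob` is a hypothetical coroutine
# standing in for the storage_blob() helper defined above.
async def upload_and_notify(device_client, blob_name, upload_via_storage_blob):
    # Get the SAS-based storage info and the correlation id for this upload
    storage_info = await device_client.get_storage_info_for_blob(blob_name)
    try:
        await upload_via_storage_blob(storage_info)
    except Exception as ex:
        # Report failure so any service listening for upload notifications knows
        await device_client.notify_blob_upload_status(
            storage_info["correlationId"], False, 500, "Upload failed: {}".format(ex)
        )
        raise
    # Report success using the same correlation id
    await device_client.notify_blob_upload_status(
        storage_info["correlationId"], True, 200, "Upload succeeded"
    )
```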


@ -1,54 +0,0 @@
# Legacy Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub and Azure IoT Edge.
**These samples are legacy samples**; they use the synchronous API intended for use with Python 2.7 and 3.4, or in compatibility scenarios with later versions. We recommend you use the [asynchronous API](../advanced-hub-scenarios) instead.
## IoTHub Device Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
* In order to send an update patch to a device twin's desired properties, use the following Azure CLI command:
```
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```
az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
## IoT Edge Module Samples
In order to use these samples, you **must** run them from inside an Edge container.
* [receive_message_on_input.py](receive_message_on_input.py) - Receive messages sent to an Edge module on a specific module input.
* [send_message_to_output.py](send_message_to_output.py) - Send multiple messages in parallel from an Edge module to a specific output
## DPS Samples
In order to use these samples, you **must** set the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are two ways your device can register with the provisioning service, differing in authentication mechanism; each requires additional environment variables for the samples:
* [register_symmetric_key.py](register_symmetric_key.py) - Register with the provisioning service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [register_x509.py](register_x509.py) - Register with the provisioning service using an X509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.


@ -0,0 +1,75 @@
# Legacy Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub and Azure IoT Edge.
**These samples are legacy samples**; they use the synchronous API intended for use with Python 2.7, or in compatibility scenarios with later versions. We recommend you use the [asynchronous API](../advanced-hub-scenarios) instead.
## IoTHub Device Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```Shell
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```Shell
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```Shell
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
* In order to send an update patch to a device twin's desired properties, use the following Azure CLI command:
```Shell
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```Shell
az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
## IoT Edge Module Samples
In order to use these samples, you **must** run them from inside an Edge container.
* [receive_message_on_input.py](receive_message_on_input.py) - Receive messages sent to an Edge module on a specific module input.
* [send_message_to_output.py](send_message_to_output.py) - Send multiple messages in parallel from an Edge module to a specific output
## DPS Samples
### Individual
In order to use these samples, you **must** set the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are two ways your device can register with the provisioning service, differing in authentication mechanism; each requires additional environment variables for the samples:
* [provision_symmetric_key.py](provision_symmetric_key.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_symmetric_key_with_payload.py](provision_symmetric_key_with_payload.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key while supplying a custom payload. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_x509.py](provision_x509.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using an X509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.
### Group
In order to use these samples, you **must** set the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* [provision_symmetric_key_group.py](provision_symmetric_key_group.py) - Provision multiple devices to IoTHub by registering them to the Device Provisioning Service using derived symmetric keys. For this you must have the environment variables PROVISIONING_MASTER_SYMMETRIC_KEY, PROVISIONING_DEVICE_ID_1, PROVISIONING_DEVICE_ID_2, PROVISIONING_DEVICE_ID_3.
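The group sample relies on per-device keys derived from the group master key. Below is a minimal sketch of that derivation, assuming the standard DPS group-enrollment convention of an HMAC-SHA256 over the registration id with the base64-decoded master key; the environment variable names mirror the ones listed above.

```python
# A minimal sketch (assumes the standard DPS group-enrollment convention):
# derive a per-device symmetric key by signing the registration id with the
# base64-decoded group master key using HMAC-SHA256.
import base64
import hashlib
import hmac


def derive_device_key(registration_id, group_master_key):
    signing_key = base64.b64decode(group_master_key)
    signed_hmac = hmac.new(signing_key, registration_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(signed_hmac.digest()).decode("utf-8")


# Example usage with the environment variables listed above:
# derived_key_1 = derive_device_key(
#     os.environ["PROVISIONING_DEVICE_ID_1"],
#     os.environ["PROVISIONING_MASTER_SYMMETRIC_KEY"],
# )
```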


@@ -23,7 +23,7 @@ registration_result = provisioning_device_client.register()
 print(registration_result)
 # Individual attributes can be seen as well
-print("The request_id was :-")
-print(registration_result.request_id)
+print("The status was :-")
+print(registration_result.status)
 print("The etag is :-")
 print(registration_result.registration_state.etag)


@ -0,0 +1,62 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from azure.iot.device import ProvisioningDeviceClient
import os
import time
from azure.iot.device import IoTHubDeviceClient, Message
import uuid
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("PROVISIONING_REGISTRATION_ID")
symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY")
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
registration_result = provisioning_device_client.register()
# The result can be directly printed to view the important details.
print(registration_result)
# Individual attributes can be seen as well
print("The request_id was :-")
print(registration_result.request_id)
print("The etag is :-")
print(registration_result.registration_state.etag)
if registration_result.status == "assigned":
print("Will send telemetry from the provisioned device")
# Create device client from the above result
device_client = IoTHubDeviceClient.create_from_symmetric_key(
symmetric_key=symmetric_key,
hostname=registration_result.registration_state.assigned_hub,
device_id=registration_result.registration_state.device_id,
)
# Connect the client.
device_client.connect()
for i in range(1, 6):
print("sending message #" + str(i))
device_client.send_message("test payload message " + str(i))
time.sleep(1)
for i in range(6, 11):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.custom_properties["tornado-warning"] = "yes"
device_client.send_message(msg)
time.sleep(1)
# finally, disconnect
device_client.disconnect()
else:
print("Can not send telemetry from the provisioned device")

Some files were not shown because too many files have changed in this diff.