
chore(azure-iot-sdk-python): Merge master into digitaltwins-preview (#481)

This commit is contained in:
Zoltan Varga 2020-03-06 12:04:08 -08:00 committed by GitHub
Parent fee0661edc
Commit 7deede7d50
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
238 changed files with 28390 additions and 8331 deletions

README.md

@@ -1,39 +1,104 @@
![Build Status](https://azure-iot-sdks.visualstudio.com/azure-iot-sdks/_apis/build/status/python/python-preview)
#
<div align=center>
<img src="./azure-iot-device/doc/images/azure_iot_sdk_python_banner.png"></img>
<h1> V2 - We are now GA! </h1>
</div>
# Azure IoT Hub Python SDKs v2 - PREVIEW
![Build Status](https://azure-iot-sdks.visualstudio.com/azure-iot-sdks/_apis/build/status/Azure.azure-iot-sdk-python)
This repository contains the code for the future v2.0.0 of the Azure IoT SDKs for Python. The goal of v2.0.0 is to be a complete rewrite of the existing SDK that maximizes the use of the Python language and its standard features rather than wrap over the C SDK, like v1.x.x of the SDK did.
This repository contains code for the Azure IoT SDKs for Python. This enables Python developers to easily create IoT device solutions that seamlessly
connect to the Azure IoT Hub ecosystem.
*If you're looking for the v1.x.x client library, it is now preserved in the [v1-deprecated](https://github.com/Azure/azure-iot-sdk-python/tree/v1-deprecated) branch.*
**Note that these SDKs are currently in preview, and are subject to change.**
# SDKs
## Azure IoT SDK for Python
This repository contains the following SDKs:
This repository contains the following libraries:
* [Azure IoT Device SDK](azure-iot-device) - /azure-iot-device
* Provision a device using the Device Provisioning Service for use with the Azure IoT hub
* Send/receive telemetry between a device or module and the Azure IoT hub or Azure IoT Edge device
* Handle direct methods invoked by the Azure IoT hub on a device
* Handle twin events and report twin updates
* *Still in development*
- *Blob/File upload*
- *Invoking method from a module client onto a leaf device*
* [Azure IoT Device library](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/README.md)
* Azure IoT Hub SDK **(COMING SOON)**
* Do service/management operations on the Azure IoT Hub
* [Azure IoT Hub Service library](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-hub/README.md)
* Azure IoT Hub Provisioning SDK **(COMING SOON)**
* Do service/management operations on the Azure IoT Device Provisioning Service
* Coming Soon: Azure IoT Device Provisioning Service Library
# How to install the SDKs
## Installing the libraries
```
pip install azure-iot-device
```
Pip installs are provided for all of the SDK libraries in this repo:
# Contributing
[Device libraries](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device#installation)
[IoTHub library](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-hub/README.md#installation)
## Features
:heavy_check_mark: feature available :heavy_multiplication_x: feature planned but not yet supported :heavy_minus_sign: no support planned*
*Features that are not planned may be prioritized in a future release, but are not currently planned
### Device Client Library ([azure-iot-device](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device))
#### IoTHub Device Client
| Features | Status | Description |
|------------------------------------------------------------------------------------------------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Authentication](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-security-deployment) | :heavy_check_mark: | Connect your device to IoT Hub securely with supported authentication, including private key, SAS Token, X.509 self-signed, and Certificate Authority (CA) signed. |
| [Send device-to-cloud message](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-d2c) | :heavy_check_mark: | Send device-to-cloud messages (max 256KB) to IoT Hub with the option to add custom properties. |
| [Receive cloud-to-device messages](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-c2d) | :heavy_check_mark: | Receive cloud-to-device messages and read associated custom and system properties from IoT Hub, with the option to complete/reject/abandon C2D messages. |
| [Device Twins](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins) | :heavy_check_mark: | IoT Hub persists a device twin for each device that you connect to IoT Hub. The device can perform operations like get twin tags, subscribe to desired properties. |
| [Direct Methods](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods) | :heavy_check_mark: | IoT Hub gives you the ability to invoke direct methods on devices from the cloud. The SDK supports handlers for method-specific and generic operations. |
| [Connection Status and Error reporting](https://docs.microsoft.com/en-us/rest/api/iothub/common-error-codes) | :heavy_multiplication_x: | Error reporting for IoT Hub supported error code. *This SDK supports error reporting on authentication and Device Not Found. |
| Retry policies | :heavy_check_mark: | Retry policy for unsuccessful device-to-cloud messages. |
| [Upload file to Blob](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-file-upload) | :heavy_check_mark: | A device can initiate a file upload and notify IoT Hub when the upload is complete. |
#### IoTHub Module Client
| Features | Status | Description |
|------------------------------------------------------------------------------------------------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Authentication](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-security-deployment) | :heavy_check_mark: | Connect your device to IoT Hub securely with supported authentication, including private key, SAS Token, X.509 self-signed, and Certificate Authority (CA) signed. |
| [Send device-to-cloud message](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-d2c) | :heavy_check_mark: | Send device-to-cloud messages (max 256KB) to IoT Hub with the option to add custom properties. |
| [Receive cloud-to-device messages](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-c2d) | :heavy_check_mark: | Receive cloud-to-device messages and read associated custom and system properties from IoT Hub, with the option to complete/reject/abandon C2D messages. |
| [Device Twins](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins) | :heavy_check_mark: | IoT Hub persists a device twin for each device that you connect to IoT Hub. The device can perform operations like get twin tags, subscribe to desired properties. |
| [Direct Methods](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods) | :heavy_check_mark: | IoT Hub gives you the ability to invoke direct methods on devices from the cloud. The SDK supports handlers for method-specific and generic operations. |
| [Connection Status and Error reporting](https://docs.microsoft.com/en-us/rest/api/iothub/common-error-codes) | :heavy_multiplication_x: | Error reporting for IoT Hub supported error code. *This SDK supports error reporting on authentication and Device Not Found. |
| Retry policies | :heavy_check_mark: | Retry policy for connecting disconnected devices and resubmitting messages. |
| Direct Invocation of Method on Modules | :heavy_check_mark: | Invoke method calls to another module using the Edge Gateway. |
#### Provisioning Device Client
| Features | Status | Description |
|-----------------------------|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| TPM Individual Enrollment | :heavy_minus_sign: | Provisioning via [Trusted Platform Module](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-security#trusted-platform-module-tpm). |
| X.509 Individual Enrollment | :heavy_check_mark: | Provisioning via [X.509 root certificate](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-security#root-certificate). Please review the [samples](./azure-iot-device/samples/async-hub-scenarios/provision_x509_and_send_telemetry.py) folder and this [quickstart](https://docs.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python) on how to create a device client. |
| X.509 Enrollment Group | :heavy_check_mark: | Provisioning via [X.509 leaf certificate](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-security#leaf-certificate). Please review the [samples](./azure-iot-device/samples/async-hub-scenarios/provision_x509_and_send_telemetry.py) folder on how to create a device client. |
| Symmetric Key Enrollment | :heavy_check_mark: | Provisioning via [Symmetric key attestation](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-symmetric-key-attestation). Please review the [samples](./azure-iot-device/samples/async-hub-scenarios/provision_symmetric_key_and_send_telemetry.py) folder on how to create a device client. |
### IoTHub Service Library ([azure-iot-hub](https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-hub/azure/iot/hub/iothub_registry_manager.py))
#### Registry Manager
| Features | Status | Description |
|---------------------------------------------------------------------------------------------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------|
| [Identity registry (CRUD)](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry) | :heavy_check_mark: | Use your backend app to perform CRUD operation for individual device or in bulk. |
| [Cloud-to-device messaging](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-c2d) | :heavy_multiplication_x: | Use your backend app to send cloud-to-device messages, and set up cloud-to-device message receivers. |
| [Direct Methods operations](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-direct-methods) | :heavy_check_mark: | Use your backend app to invoke direct methods on devices. |
| [Device Twins operations](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins) | :heavy_check_mark: | Use your backend app to perform device twin operations. *Twin reported property update callback and replace twin are in progress. |
| [Query](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-query-language) | :heavy_multiplication_x: | Use your backend app to perform query for information. |
| [Jobs](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-jobs) | :heavy_multiplication_x: | Use your backend app to perform job operations. |
### IoTHub Provisioning Service Library
**(COMING SOON)**
| Features | Status | Description |
|-----------------------------------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------|
| CRUD Operation with TPM Individual Enrollment | :heavy_multiplication_x: | Manage device enrollment using TPM with the service SDK. |
| Bulk CRUD Operation with TPM Individual Enrollment | :heavy_multiplication_x: | Bulk manage device enrollment using TPM with the service SDK. |
| CRUD Operation with X.509 Individual Enrollment | :heavy_multiplication_x: | Manages device enrollment using X.509 individual enrollment with the service SDK. |
| CRUD Operation with X.509 Group Enrollment | :heavy_multiplication_x: | Manages device enrollment using X.509 group enrollment with the service SDK. |
| Query enrollments | :heavy_multiplication_x: | Query registration states with the service SDK. |
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
@@ -46,3 +111,4 @@ provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

SECURITY.MD

@@ -0,0 +1,41 @@
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.3 BLOCK -->
# Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc751383(v=technet.10)), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->


@@ -1,7 +1,7 @@
[bumpversion]
current_version = 2.0.0-preview.10
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)-preview\.(?P<preview>\d+)
serialize = {major}.{minor}.{patch}-preview.{preview}
current_version = 2.1.0
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)
serialize = {major}.{minor}.{patch}
[bumpversion:part:preview]
[bumpversion:part]
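The post-GA `parse`/`serialize` pair in the config above can be sanity-checked with a short sketch, assuming only standard `re` semantics (this snippet is illustrative, not part of the repo):

```python
import re

# The parse pattern and serialize template from the new config above
parse = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)")
serialize = "{major}.{minor}.{patch}"

# Round-trip the new current_version through the pair
parts = parse.match("2.1.0").groupdict()
roundtrip = serialize.format(**parts)
```
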


@@ -1,20 +1,23 @@
# Azure IoT Device SDK
The Azure IoT Device SDK for Python provides functionality for communicating with the Azure IoT Hub for both Devices and Modules.
**Note that this SDK is currently in preview, and is subject to change.**
## Azure IoT Device Features
## Features
The SDK provides the following clients:
* ### Provisioning Device Client
* Creates a device identity on the Azure IoT Hub
* ### IoT Hub Device Client
* Send telemetry messages to Azure IoT Hub
* Receive Cloud-to-Device (C2D) messages from the Azure IoT Hub
* Receive and respond to direct method invocations from the Azure IoT Hub
* ### IoT Hub Module Client
* Supports Azure IoT Edge Hub and Azure IoT Hub
* Send telemetry messages to a Hub or to another Module
* Receive Input messages from a Hub or other Modules
@@ -25,115 +28,29 @@ These clients are available with an asynchronous API, as well as a blocking sync
| Python Version | Asynchronous API | Synchronous API |
| -------------- | ---------------- | --------------- |
| Python 3.5.3+ | **YES** | **YES** |
| Python 3.4 | NO | **YES** |
| Python 2.7 | NO | **YES** |
## Installation
```Shell
pip install azure-iot-device
```
## Set up an IoT Hub and create a Device Identity
1. Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) (or use the [Azure Cloud Shell](https://shell.azure.com/)) and use it to [create an Azure IoT Hub](https://docs.microsoft.com/en-us/cli/azure/iot/hub?view=azure-cli-latest#az-iot-hub-create).
## Device Samples
```bash
az iot hub create --resource-group <your resource group> --name <your IoT Hub name>
```
* Note that this operation may take a few minutes.
Check out the [samples repository](./azure-iot-device/samples) for example code showing how the SDK can be used in a variety of scenarios, including:
2. Add the IoT Extension to the Azure CLI, and then [register a device identity](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-create)
```bash
az extension add --name azure-cli-iot-ext
az iot hub device-identity create --hub-name <your IoT Hub name> --device-id <your device id>
```
3. [Retrieve your Device Connection String](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-show-connection-string) using the Azure CLI
```bash
az iot hub device-identity show-connection-string --device-id <your device id> --hub-name <your IoT Hub name>
```
It should be in the format:
```
HostName=<your IoT Hub name>.azure-devices.net;DeviceId=<your device id>;SharedAccessKey=<some value>
```
## Send a simple telemetry message
1. [Begin monitoring for telemetry](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-monitor-events) on your IoT Hub using the Azure CLI
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
2. On your device, set the Device Connection String as an environment variable called `IOTHUB_DEVICE_CONNECTION_STRING`.
### Windows
```cmd
set IOTHUB_DEVICE_CONNECTION_STRING=<your connection string here>
```
* Note that there are **NO** quotation marks around the connection string.
### Linux
```bash
export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>"
```
3. Copy the following code that sends a single message to the IoT Hub into a new python file on your device, and run it from the terminal or IDE (**requires Python 3.7+**):
```python
import asyncio
import os
from azure.iot.device.aio import IoTHubDeviceClient


async def main():
    # Fetch the connection string from an environment variable
    conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")

    # Create instance of the device client using the connection string
    device_client = IoTHubDeviceClient.create_from_connection_string(conn_str)

    # Send a single message
    print("Sending message...")
    await device_client.send_message("This is a message that is being sent")
    print("Message successfully sent!")

    # Finally, disconnect
    await device_client.disconnect()


if __name__ == "__main__":
    asyncio.run(main())
```
4. Check the Azure CLI output to verify that the message was received by the IoT Hub. You should see the following output:
```bash
Starting event monitor, use ctrl-c to stop...
event:
origin: <your Device name>
payload: This is a message that is being sent
```
5. Your device is now able to connect to Azure IoT Hub!
## Additional Samples
Check out the [samples repository](https://github.com/Azure/azure-iot-sdk-python-preview/tree/master/azure-iot-device/samples) for example code showing how the SDK can be used in a variety of scenarios, including:
* Sending multiple telemetry messages at once.
* Receiving Cloud-to-Device messages.
* Using Edge Modules with the Azure IoT Edge Hub.
* Send and receive updates to device twin
* Receive invocations to direct methods
* Register a device with the Device Provisioning Service
* Legacy scenarios for Python 2.7 and 3.4
## Getting help and finding API docs
Our SDK makes use of docstrings, which means you can find API documentation directly through Python using the [help](https://docs.python.org/3/library/functions.html#help) command:
```python
>>> from azure.iot.device import IoTHubDeviceClient
>>> help(IoTHubDeviceClient)
```


@@ -5,6 +5,6 @@ This package provides shared modules for use with various Azure IoT device-side
INTERNAL USAGE ONLY
"""
from .models import X509
from .models import X509, ProxyOptions
__all__ = ["X509"]
__all__ = ["X509", "ProxyOptions"]


@@ -7,6 +7,7 @@
import functools
import logging
import traceback
import azure.iot.device.common.asyncio_compat as asyncio_compat
logger = logging.getLogger(__name__)
@@ -69,9 +70,9 @@ class AwaitableCallback(object):
result = None
if exception:
logger.error(
"Callback completed with error {}".format(exception), exc_info=exception
)
# Do not use exc_info parameter on logger.error. This causes pytest to save the traceback, which saves stack frames and shows up as a leak
logger.error("Callback completed with error {}".format(exception))
logger.error(traceback.format_exception_only(type(exception), exception))
loop.call_soon_threadsafe(self.future.set_exception, exception)
else:
logger.debug("Callback completed with result {}".format(result))


@@ -0,0 +1,78 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import weakref
class CallableWeakMethod(object):
"""
Object which makes a weak reference to a method call. Similar to weakref.WeakMethod,
but works on Python 2.7 and returns an object which is callable.
This objet is used primarily for callbacks and it prevents circular references in the
garbage collector. It is used specifically in the scenario where object holds a
refernce to object b and b holds a callback into a (which creates a rererence
back into a)
By default, method references are _strong_, and we end up with we have a situation
where a has a _strong) reference to b and b has a _strong_ reference to a.
The Python 3.4+ garbage collectors handle this circular reference just fine, but the
2.7 garbage collector fails, but only when one of the objects has a finalizer method.
'''
# example of bad (strong) circular dependency:
class A(object):
def --init__(self):
self.b = B() # A objects now have a strong refernce to B objects
b.handler = a.method() # and B object have a strong reference back into A objects
def method(self):
pass
'''
In the example above, if a or B has a finalizer, that object will be considered uncollectable
(on 2.7) and both objects will leak
However, if we use this object, a will a _strong_ reference to b, and b will have a _weak_
reference =back to a, and the circular depenency chain is broken.
```
# example of better (weak) circular dependency:
class A(object):
def --init__(self):
self.b = B() # A objects now have a strong refernce to B objects
b.handler = CallableWeakMethod(a, "method") # and B objects have a WEAK reference back into A objects
def method(self):
pass
```
In this example, there is no circular reference, and the Python 2.7 garbage collector is able
to collect both objects, even if one of them has a finalizer.
When we reach the point where all supported interpreters implement PEP 442, we will
no longer need this object
ref: https://www.python.org/dev/peps/pep-0442/
"""
def __init__(self, object, method_name):
self.object_weakref = weakref.ref(object)
self.method_name = method_name
def _get_method(self):
return getattr(self.object_weakref(), self.method_name)
def __call__(self, *args, **kwargs):
return self._get_method()(*args, **kwargs)
def __eq__(self, other):
return self._get_method() == other
def __repr__(self):
if self.object_weakref():
return "CallableWeakMethod for {}".format(self._get_method())
else:
return "CallableWeakMethod for {} (DEAD)".format(self.method_name)


@@ -0,0 +1,24 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
class ChainableException(Exception):
    """This exception stores a reference to a previous exception which has caused
    the current one"""

    def __init__(self, message=None, cause=None):
        # By using .__cause__, this will allow typical stack trace behavior in Python 3,
        # while still being able to operate in Python 2.
        self.__cause__ = cause
        super(ChainableException, self).__init__(message)

    def __str__(self):
        if self.__cause__:
            return "{} caused by {}".format(
                super(ChainableException, self).__repr__(), self.__cause__.__repr__()
            )
        else:
            return super(ChainableException, self).__repr__()
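A short sketch of how the class above is used; the condensed copy here mirrors the implementation so the example is self-contained:

```python
class ChainableException(Exception):
    # Condensed from the class above
    def __init__(self, message=None, cause=None):
        self.__cause__ = cause
        super(ChainableException, self).__init__(message)

    def __str__(self):
        if self.__cause__:
            return "{} caused by {}".format(
                super(ChainableException, self).__repr__(), self.__cause__.__repr__()
            )
        return super(ChainableException, self).__repr__()

# Wrap a lower-level error so the original cause survives in the message
root = ValueError("bad input")
wrapped = ChainableException("operation failed", cause=root)
text = str(wrapped)  # mentions both the wrapper and the original cause
```

In Python 3, setting `__cause__` also gives the usual "The above exception was the direct cause" traceback chaining for free when the wrapper is raised.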


@@ -1,189 +0,0 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
class OperationCancelledError(Exception):
    """
    Operation was cancelled.
    """
    pass


class ConnectionFailedError(Exception):
    """
    Connection failed to be established
    """
    pass


class ConnectionDroppedError(Exception):
    """
    Previously established connection was dropped
    """
    pass


class ArgumentError(Exception):
    """
    Service returned 400
    """
    pass


class UnauthorizedError(Exception):
    """
    Authorization failed or service returned 401
    """
    pass


class QuotaExceededError(Exception):
    """
    Service returned 403
    """
    pass


class NotFoundError(Exception):
    """
    Service returned 404
    """
    pass


class DeviceTimeoutError(Exception):
    """
    Service returned 408
    """
    # TODO: is this a method call error? If so, do we retry?
    pass


class DeviceAlreadyExistsError(Exception):
    """
    Service returned 409
    """
    pass


class InvalidEtagError(Exception):
    """
    Service returned 412
    """
    pass


class MessageTooLargeError(Exception):
    """
    Service returned 413
    """
    pass


class ThrottlingError(Exception):
    """
    Service returned 429
    """
    pass


class InternalServiceError(Exception):
    """
    Service returned 500
    """
    pass


class BadDeviceResponseError(Exception):
    """
    Service returned 502
    """
    # TODO: is this a method invoke thing?
    pass


class ServiceUnavailableError(Exception):
    """
    Service returned 503
    """
    pass


class TimeoutError(Exception):
    """
    Operation timed out or service returned 504
    """
    pass


class FailedStatusCodeError(Exception):
    """
    Service returned unknown status code
    """
    pass


class ProtocolClientError(Exception):
    """
    Error returned from protocol client library
    """
    pass


class PipelineError(Exception):
    """
    Error returned from transport pipeline
    """
    pass


status_code_to_error = {
    400: ArgumentError,
    401: UnauthorizedError,
    403: QuotaExceededError,
    404: NotFoundError,
    408: DeviceTimeoutError,
    409: DeviceAlreadyExistsError,
    412: InvalidEtagError,
    413: MessageTooLargeError,
    429: ThrottlingError,
    500: InternalServiceError,
    502: BadDeviceResponseError,
    503: ServiceUnavailableError,
    504: TimeoutError,
}


def error_from_status_code(status_code, message=None):
    """
    Return an Error object from a failed status code

    :param int status_code: Status code returned from failed operation
    :returns: Error object
    """
    if status_code in status_code_to_error:
        return status_code_to_error[status_code](message)
    else:
        return FailedStatusCodeError(message)
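The mapping in this (now-removed) module can be exercised directly. A condensed sketch with just two of the error classes shows both the known-code and fallback paths:

```python
class NotFoundError(Exception):
    """Service returned 404"""

class FailedStatusCodeError(Exception):
    """Service returned unknown status code"""

# Condensed from the mapping above
status_code_to_error = {404: NotFoundError}

def error_from_status_code(status_code, message=None):
    """Return an Error object from a failed status code."""
    if status_code in status_code_to_error:
        return status_code_to_error[status_code](message)
    return FailedStatusCodeError(message)

known = error_from_status_code(404, "device not registered")    # mapped class
unknown = error_from_status_code(418, "unexpected status")      # fallback class
```

The fallback keeps the function total: any status code the table does not know still yields a raisable exception carrying the message.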


@@ -6,6 +6,7 @@
import threading
import logging
import six
import traceback
logger = logging.getLogger(__name__)
@@ -31,7 +32,6 @@ class EventedCallback(object):
def wrapping_callback(*args, **kwargs):
if "error" in kwargs and kwargs["error"]:
logger.error("Callback called with error {}".format(kwargs["error"]))
self.exception = kwargs["error"]
elif return_arg_name:
if return_arg_name in kwargs:
@@ -44,10 +44,9 @@
)
if self.exception:
logger.error(
"Callback completed with error {}".format(self.exception),
exc_info=self.exception,
)
# Do not use exc_info parameter on logger.error. This causes pytest to save the traceback, which saves stack frames and shows up as a leak
logger.error("Callback completed with error {}".format(self.exception))
logger.error(traceback.format_exc())
else:
logger.debug("Callback completed with result {}".format(self.result))


@@ -4,11 +4,12 @@
# license information.
# --------------------------------------------------------------------------
import logging
import traceback
logger = logging.getLogger(__name__)
def exception_caught_in_background_thread(e):
def handle_background_exception(e):
"""
Function which handles exceptions that are caught in a background thread. This is
typically called from the callback thread inside the pipeline. These exceptions
@@ -24,4 +25,32 @@ def exception_caught_in_background_thread(e):
# @FUTURE: We should add a mechanism which allows applications to receive these
# exceptions so they can respond accordingly
logger.error(msg="Exception caught in background thread. Unable to handle.", exc_info=e)
logger.error(msg="Exception caught in background thread. Unable to handle.")
logger.error(traceback.format_exception_only(type(e), e))
def swallow_unraised_exception(e, log_msg=None, log_lvl="warning"):
    """Swallow and log an exception object.

    Convenience function for logging, as exceptions can only be logged correctly from within an
    except block.

    :param Exception e: Exception object to be swallowed.
    :param str log_msg: Optional message to use when logging.
    :param str log_lvl: The log level to use for logging. Default "warning".
    """
    try:
        raise e
    except Exception:
        if log_lvl == "warning":
            logger.warning(log_msg)
            logger.warning(traceback.format_exc())
        elif log_lvl == "error":
            logger.error(log_msg)
            logger.error(traceback.format_exc())
        elif log_lvl == "info":
            logger.info(log_msg)
            logger.info(traceback.format_exc())
        else:
            logger.debug(log_msg)
            logger.debug(traceback.format_exc())
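The re-raise trick above is what lets `traceback.format_exc()` work on an exception object that was never in flight. A minimal sketch, using an illustrative list-backed handler and logger name (not part of the SDK) so the logged output is observable:

```python
import logging
import traceback

records = []

class ListHandler(logging.Handler):
    # Illustrative handler that records every log message in a list
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger("swallow_demo")
logger.addHandler(ListHandler())
logger.setLevel(logging.DEBUG)

def swallow_unraised_exception(e, log_msg=None, log_lvl="warning"):
    # Condensed from the function above: re-raising inside try/except gives
    # traceback.format_exc() a live traceback for the exception object
    try:
        raise e
    except Exception:
        log = getattr(logger, log_lvl, logger.debug)
        log(log_msg)
        log(traceback.format_exc())

# An exception created (or caught earlier) is logged without propagating
swallow_unraised_exception(ValueError("transient glitch"), log_msg="ignoring transient error")
```
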


@@ -0,0 +1,108 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import uuid
import threading
import json
import ssl
from . import transport_exceptions as exceptions
from .pipeline import pipeline_thread
from six.moves import http_client
logger = logging.getLogger(__name__)
class HTTPTransport(object):
"""
A wrapper class that provides an implementation-agnostic HTTP interface.
"""
def __init__(self, hostname, server_verification_cert=None, x509_cert=None, cipher=None):
"""
Constructor to instantiate an HTTP protocol wrapper.
:param str hostname: Hostname or IP address of the remote host.
:param str server_verification_cert: Certificate which can be used to validate a server-side TLS connection (optional).
:param x509_cert: Certificate which can be used to authenticate connection to a server in lieu of a password (optional).
"""
self._hostname = hostname
self._server_verification_cert = server_verification_cert
self._x509_cert = x509_cert
self._ssl_context = self._create_ssl_context()
def _create_ssl_context(self):
"""
This method creates the SSLContext object used to authenticate the connection. The generated context is used by the http_client and is necessary when authenticating with a self-signed or CA-signed X509 certificate.
"""
logger.debug("creating a SSL context")
ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLSv1_2)
if self._server_verification_cert:
ssl_context.load_verify_locations(cadata=self._server_verification_cert)
else:
ssl_context.load_default_certs()
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
if self._x509_cert is not None:
logger.debug("configuring SSL context with client-side certificate and key")
ssl_context.load_cert_chain(
self._x509_cert.certificate_file,
self._x509_cert.key_file,
self._x509_cert.pass_phrase,
)
return ssl_context
@pipeline_thread.invoke_on_http_thread_nowait
def request(self, method, path, callback, body="", headers={}, query_params=""):
"""
This method creates a connection to a remote host, sends a request to that host, and then waits for and reads the response from that request.
:param str method: The request method (e.g. "POST")
:param str path: The path for the URL
:param Function callback: The function that gets called when this operation is complete or has failed. The callback function must accept an error and a response dictionary, where the response dictionary contains a status code, a reason, and a response string.
:param str body: The body of the HTTP request to be sent following the headers.
:param dict headers: A dictionary that provides extra HTTP headers to be sent with the request.
:param str query_params: The optional query parameters to be appended at the end of the URL.
"""
# Sends a complete request to the server
logger.info("sending https request.")
try:
logger.debug("creating an https connection")
connection = http_client.HTTPSConnection(self._hostname, context=self._ssl_context)
logger.debug("connecting to host tcp socket")
connection.connect()
logger.debug("connection succeeded")
url = "https://{hostname}/{path}{query_params}".format(
hostname=self._hostname,
path=path,
query_params="?" + query_params if query_params else "",
)
logger.debug("Sending Request to HTTP URL: {}".format(url))
logger.debug("HTTP Headers: {}".format(headers))
logger.debug("HTTP Body: {}".format(body))
connection.request(method, url, body=body, headers=headers)
response = connection.getresponse()
status_code = response.status
reason = response.reason
response_string = response.read()
logger.debug("response received")
logger.debug("closing connection to https host")
connection.close()
logger.debug("connection closed")
logger.info("https request sent, and response received.")
response_obj = {"status_code": status_code, "reason": reason, "resp": response_string}
callback(response=response_obj)
except Exception as e:
logger.error("Error in HTTP Transport: {}".format(e))
callback(
error=exceptions.ProtocolClientError(
message="Unexpected HTTPS failure during connect", cause=e
)
)
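The URL assembly in `request()` prefixes the query string with `?` only when one was supplied. A small standalone sketch of that logic (the `build_url` helper is hypothetical, extracted here for illustration):

```python
def build_url(hostname, path, query_params=""):
    # Prefix the query string with "?" only when one was supplied,
    # mirroring the URL assembly inside HTTPTransport.request().
    return "https://{hostname}/{path}{query_params}".format(
        hostname=hostname,
        path=path,
        query_params="?" + query_params if query_params else "",
    )

url = build_url("contoso.azure-devices.net", "devices/d1/messages", "api-version=2019-03-30")
```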


@ -4,3 +4,4 @@ This package provides object models for use within the Azure Provisioning Device
"""
from .x509 import X509
from .proxy_options import ProxyOptions


@ -0,0 +1,53 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""
This module represents proxy options to enable sending traffic through proxy servers.
"""
class ProxyOptions(object):
"""
A class containing various options to send traffic through proxy servers by enabling
proxying of the MQTT connection.
"""
def __init__(
self, proxy_type, proxy_addr, proxy_port, proxy_username=None, proxy_password=None
):
"""
Initializer for proxy options.
:param proxy_type: The type of the proxy server. This can be one of three possible choices: socks.HTTP, socks.SOCKS4, or socks.SOCKS5
:param proxy_addr: IP address or DNS name of proxy server
:param proxy_port: The port of the proxy server. Defaults to 1080 for socks and 8080 for http.
:param proxy_username: (optional) username for SOCKS5 proxy, or userid for SOCKS4 proxy. This parameter is ignored if an HTTP server is being used.
If it is not provided, authentication will not be used (servers may accept unauthenticated requests).
:param proxy_password: (optional) This parameter is valid only for SOCKS5 servers and specifies the respective password for the username provided.
"""
self._proxy_type = proxy_type
self._proxy_addr = proxy_addr
self._proxy_port = proxy_port
self._proxy_username = proxy_username
self._proxy_password = proxy_password
@property
def proxy_type(self):
return self._proxy_type
@property
def proxy_address(self):
return self._proxy_addr
@property
def proxy_port(self):
return self._proxy_port
@property
def proxy_username(self):
return self._proxy_username
@property
def proxy_password(self):
return self._proxy_password
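For illustration, a stripped-down stand-in for `ProxyOptions` showing the read-only property pattern: values are captured once at construction so a config object cannot be mutated after it has been handed to the transport. The `SOCKS5` constant value mirrors PySocks' `socks.SOCKS5`, hardcoded here only to avoid the dependency:

```python
SOCKS5 = 2  # mirrors PySocks' socks.SOCKS5; hardcoded to avoid the import

class SimpleProxyOptions(object):
    # Stripped-down stand-in for the ProxyOptions class above; values are
    # stored on private attributes and exposed through read-only properties.
    def __init__(self, proxy_type, proxy_addr, proxy_port, proxy_username=None):
        self._proxy_type = proxy_type
        self._proxy_addr = proxy_addr
        self._proxy_port = proxy_port
        self._proxy_username = proxy_username

    @property
    def proxy_address(self):
        return self._proxy_addr

    @property
    def proxy_port(self):
        return self._proxy_port

proxy = SimpleProxyOptions(SOCKS5, "proxy.internal.example", 1080, proxy_username="user")
```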


@ -7,61 +7,74 @@
import paho.mqtt.client as mqtt
import logging
import ssl
import sys
import threading
import traceback
from . import errors
import weakref
import socket
from . import transport_exceptions as exceptions
import socks
logger = logging.getLogger(__name__)
# mapping of Paho conack rc codes to Error object classes
paho_conack_rc_to_error = {
mqtt.CONNACK_REFUSED_PROTOCOL_VERSION: errors.ProtocolClientError,
mqtt.CONNACK_REFUSED_IDENTIFIER_REJECTED: errors.ProtocolClientError,
mqtt.CONNACK_REFUSED_SERVER_UNAVAILABLE: errors.ConnectionFailedError,
mqtt.CONNACK_REFUSED_BAD_USERNAME_PASSWORD: errors.UnauthorizedError,
mqtt.CONNACK_REFUSED_NOT_AUTHORIZED: errors.UnauthorizedError,
# Mapping of Paho CONNACK rc codes to Error object classes
# Used for connection callbacks
paho_connack_rc_to_error = {
mqtt.CONNACK_REFUSED_PROTOCOL_VERSION: exceptions.ProtocolClientError,
mqtt.CONNACK_REFUSED_IDENTIFIER_REJECTED: exceptions.ProtocolClientError,
mqtt.CONNACK_REFUSED_SERVER_UNAVAILABLE: exceptions.ConnectionFailedError,
mqtt.CONNACK_REFUSED_BAD_USERNAME_PASSWORD: exceptions.UnauthorizedError,
mqtt.CONNACK_REFUSED_NOT_AUTHORIZED: exceptions.UnauthorizedError,
}
# mapping of Paho rc codes to Error object classes
# Mapping of Paho rc codes to Error object classes
# Used for responses to Paho APIs and non-connection callbacks
paho_rc_to_error = {
mqtt.MQTT_ERR_NOMEM: errors.ProtocolClientError,
mqtt.MQTT_ERR_PROTOCOL: errors.ProtocolClientError,
mqtt.MQTT_ERR_INVAL: errors.ArgumentError,
mqtt.MQTT_ERR_NO_CONN: errors.ConnectionDroppedError,
mqtt.MQTT_ERR_CONN_REFUSED: errors.ConnectionFailedError,
mqtt.MQTT_ERR_NOT_FOUND: errors.ConnectionFailedError,
mqtt.MQTT_ERR_CONN_LOST: errors.ConnectionDroppedError,
mqtt.MQTT_ERR_TLS: errors.UnauthorizedError,
mqtt.MQTT_ERR_PAYLOAD_SIZE: errors.ProtocolClientError,
mqtt.MQTT_ERR_NOT_SUPPORTED: errors.ProtocolClientError,
mqtt.MQTT_ERR_AUTH: errors.UnauthorizedError,
mqtt.MQTT_ERR_ACL_DENIED: errors.UnauthorizedError,
mqtt.MQTT_ERR_UNKNOWN: errors.ProtocolClientError,
mqtt.MQTT_ERR_ERRNO: errors.ProtocolClientError,
mqtt.MQTT_ERR_QUEUE_SIZE: errors.ProtocolClientError,
mqtt.MQTT_ERR_NOMEM: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_PROTOCOL: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_INVAL: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_NO_CONN: exceptions.ConnectionDroppedError,
mqtt.MQTT_ERR_CONN_REFUSED: exceptions.ConnectionFailedError,
mqtt.MQTT_ERR_NOT_FOUND: exceptions.ConnectionFailedError,
mqtt.MQTT_ERR_CONN_LOST: exceptions.ConnectionDroppedError,
mqtt.MQTT_ERR_TLS: exceptions.UnauthorizedError,
mqtt.MQTT_ERR_PAYLOAD_SIZE: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_NOT_SUPPORTED: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_AUTH: exceptions.UnauthorizedError,
mqtt.MQTT_ERR_ACL_DENIED: exceptions.UnauthorizedError,
mqtt.MQTT_ERR_UNKNOWN: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_ERRNO: exceptions.ProtocolClientError,
mqtt.MQTT_ERR_QUEUE_SIZE: exceptions.ProtocolClientError,
}
# Default keepalive. Paho sends a PINGREQ using this interval
# to make sure the connection is still open.
DEFAULT_KEEPALIVE = 60
def _create_error_from_conack_rc_code(rc):
def _create_error_from_connack_rc_code(rc):
"""
Given a paho CONACK rc code, return an Exception that can be raised
Given a paho CONNACK rc code, return an Exception that can be raised
"""
message = mqtt.connack_string(rc)
if rc in paho_conack_rc_to_error:
return paho_conack_rc_to_error[rc](message)
if rc in paho_connack_rc_to_error:
return paho_connack_rc_to_error[rc](message)
else:
return errors.ProtocolClientError("Unknown CONACK rc={}".format(rc))
return exceptions.ProtocolClientError("Unknown CONNACK rc={}".format(rc))
def _create_error_from_rc_code(rc):
"""
Given a paho rc code, return an Exception that can be raised
"""
if rc == 1:
# Paho returns rc=1 to mean "something went wrong. stop". We manually translate this to a ConnectionDroppedError.
return exceptions.ConnectionDroppedError("Paho returned rc==1")
elif rc in paho_rc_to_error:
message = mqtt.error_string(rc)
if rc in paho_rc_to_error:
return paho_rc_to_error[rc](message)
else:
return errors.ProtocolClientError("Unknown CONACK rc={}".format(rc))
return exceptions.ProtocolClientError("Unknown rc=={}".format(rc))
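The two helpers above share one pattern: a dict lookup from integer return code to exception class, with special-casing for `rc==1` and a catch-all for unknown codes. A toy sketch of the pattern (the codes and class names here are illustrative, not the real Paho `mqtt.MQTT_ERR_*` constants):

```python
class ProtocolClientError(Exception):
    pass

class ConnectionDroppedError(Exception):
    pass

# Illustrative codes only -- the real tables map Paho's mqtt.MQTT_ERR_* /
# mqtt.CONNACK_* constants to the SDK's transport exceptions.
RC_TO_ERROR = {2: ProtocolClientError, 7: ConnectionDroppedError}

def error_from_rc(rc):
    if rc == 1:
        # Paho uses rc==1 as a generic "something went wrong, stop" code
        return ConnectionDroppedError("rc==1")
    # Fall back to ProtocolClientError for any unrecognized code
    return RC_TO_ERROR.get(rc, ProtocolClientError)("rc=={}".format(rc))
```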
class MQTTTransport(object):
@ -78,21 +91,37 @@ class MQTTTransport(object):
:type on_mqtt_connection_failure_handler: Function
"""
def __init__(self, client_id, hostname, username, ca_cert=None, x509_cert=None):
def __init__(
self,
client_id,
hostname,
username,
server_verification_cert=None,
x509_cert=None,
websockets=False,
cipher=None,
proxy_options=None,
):
"""
Constructor to instantiate an MQTT protocol wrapper.
:param str client_id: The id of the client connecting to the broker.
:param str hostname: Hostname or IP address of the remote broker.
:param str username: Username for login to the remote broker.
:param str ca_cert: Certificate which can be used to validate a server-side TLS connection (optional).
:param str server_verification_cert: Certificate which can be used to validate a server-side TLS connection (optional).
:param x509_cert: Certificate which can be used to authenticate connection to a server in lieu of a password (optional).
:param bool websockets: Indicates whether or not to enable a websockets connection in the Transport.
:param str cipher: Cipher string in OpenSSL cipher list format
:param proxy_options: Options for sending traffic through proxy servers.
"""
self._client_id = client_id
self._hostname = hostname
self._username = username
self._mqtt_client = None
self._ca_cert = ca_cert
self._server_verification_cert = server_verification_cert
self._x509_cert = x509_cert
self._websockets = websockets
self._cipher = cipher
self._proxy_options = proxy_options
self.on_mqtt_connected_handler = None
self.on_mqtt_disconnected_handler = None
@ -109,25 +138,53 @@ class MQTTTransport(object):
"""
logger.info("creating mqtt client")
# Instantiate client
# Instantiate the client
if self._websockets:
logger.info("Creating client for connecting using MQTT over websockets")
mqtt_client = mqtt.Client(
client_id=self._client_id,
clean_session=False,
protocol=mqtt.MQTTv311,
transport="websockets",
)
mqtt_client.ws_set_options(path="/$iothub/websocket")
else:
logger.info("Creating client for connecting using MQTT over TCP")
mqtt_client = mqtt.Client(
client_id=self._client_id, clean_session=False, protocol=mqtt.MQTTv311
)
if self._proxy_options:
mqtt_client.proxy_set(
proxy_type=self._proxy_options.proxy_type,
proxy_addr=self._proxy_options.proxy_address,
proxy_port=self._proxy_options.proxy_port,
proxy_username=self._proxy_options.proxy_username,
proxy_password=self._proxy_options.proxy_password,
)
mqtt_client.enable_logger(logging.getLogger("paho"))
# Configure TLS/SSL
ssl_context = self._create_ssl_context()
mqtt_client.tls_set_context(context=ssl_context)
# Set event handlers
# Set event handlers. Use weak references back into this object to prevent
# leaks on Python 2.7. See callable_weak_method.py and PEP 442 for explanation.
#
# We don't use the CallableWeakMethod object here because these handlers
# are not methods.
self_weakref = weakref.ref(self)
def on_connect(client, userdata, flags, rc):
this = self_weakref()
logger.info("connected with result code: {}".format(rc))
if rc:
if self.on_mqtt_connection_failure_handler:
if rc: # i.e. if there is an error
if this.on_mqtt_connection_failure_handler:
try:
self.on_mqtt_connection_failure_handler(
_create_error_from_conack_rc_code(rc)
this.on_mqtt_connection_failure_handler(
_create_error_from_connack_rc_code(rc)
)
except Exception:
logger.error("Unexpected error calling on_mqtt_connection_failure_handler")
@ -136,9 +193,9 @@ class MQTTTransport(object):
logger.warning(
"connection failed, but no on_mqtt_connection_failure_handler handler callback provided"
)
elif self.on_mqtt_connected_handler:
elif this.on_mqtt_connected_handler:
try:
self.on_mqtt_connected_handler()
this.on_mqtt_connected_handler()
except Exception:
logger.error("Unexpected error calling on_mqtt_connected_handler")
logger.error(traceback.format_exc())
@ -146,15 +203,18 @@ class MQTTTransport(object):
logger.warning("No event handler callback set for on_mqtt_connected_handler")
def on_disconnect(client, userdata, rc):
this = self_weakref()
logger.info("disconnected with result code: {}".format(rc))
cause = None
if rc:
if rc: # i.e. if there is an error
logger.debug("".join(traceback.format_stack()))
cause = _create_error_from_rc_code(rc)
this._stop_automatic_reconnect()
if self.on_mqtt_disconnected_handler:
if this.on_mqtt_disconnected_handler:
try:
self.on_mqtt_disconnected_handler(cause)
this.on_mqtt_disconnected_handler(cause)
except Exception:
logger.error("Unexpected error calling on_mqtt_disconnected_handler")
logger.error(traceback.format_exc())
@ -162,29 +222,33 @@ class MQTTTransport(object):
logger.warning("No event handler callback set for on_mqtt_disconnected_handler")
def on_subscribe(client, userdata, mid, granted_qos):
this = self_weakref()
logger.info("suback received for {}".format(mid))
# subscribe failures are returned from the subscribe() call. This is just
# a notification that a SUBACK was received, so there is no failure case here
self._op_manager.complete_operation(mid)
this._op_manager.complete_operation(mid)
def on_unsubscribe(client, userdata, mid):
this = self_weakref()
logger.info("UNSUBACK received for {}".format(mid))
# unsubscribe failures are returned from the unsubscribe() call. This is just
# a notification that an UNSUBACK was received, so there is no failure case here
self._op_manager.complete_operation(mid)
this._op_manager.complete_operation(mid)
def on_publish(client, userdata, mid):
this = self_weakref()
logger.info("payload published for {}".format(mid))
# publish failures are returned from the publish() call. This is just
# a notification that a PUBACK was received, so there is no failure case here
self._op_manager.complete_operation(mid)
this._op_manager.complete_operation(mid)
def on_message(client, userdata, mqtt_message):
this = self_weakref()
logger.info("message received on {}".format(mqtt_message.topic))
if self.on_mqtt_message_received_handler:
if this.on_mqtt_message_received_handler:
try:
self.on_mqtt_message_received_handler(mqtt_message.topic, mqtt_message.payload)
this.on_mqtt_message_received_handler(mqtt_message.topic, mqtt_message.payload)
except Exception:
logger.error("Unexpected error calling on_mqtt_message_received_handler")
logger.error(traceback.format_exc())
@ -203,6 +267,40 @@ class MQTTTransport(object):
logger.debug("Created MQTT protocol client, assigned callbacks")
return mqtt_client
def _stop_automatic_reconnect(self):
"""
After disconnecting because of an error, Paho will attempt to reconnect (some of the time --
this isn't 100% reliable). We don't want Paho to reconnect because we want to control the
timing of the reconnect, so we force the connection closed.
We are relying on intimate knowledge of Paho behavior here. If this becomes a problem,
it may be necessary to write our own Paho thread and stop using thread_start()/thread_stop().
This is certainly supported by Paho, but the thread that Paho provides works well enough
(so far) and making our own would be more complex than is currently justified.
"""
logger.info("Forcing paho disconnect to prevent it from automatically reconnecting")
# Note: We are calling this inside our on_disconnect() handler, so we are inside the
# Paho thread at this point. This is perfectly valid. Comments in Paho's client.py
# loop_forever() function recommend calling disconnect() from a callback to exit the
# Paho thread/loop.
self._mqtt_client.disconnect()
# Calling disconnect() isn't enough. We also need to call loop_stop to make sure
# Paho is as clean as possible. Our call to disconnect() above is enough to stop the
# loop and exit the thread, but the call to loop_stop() is necessary to complete the cleanup.
self._mqtt_client.loop_stop()
# Finally, because of a bug in Paho, we need to null out the _thread pointer. This
# is necessary because the code that sets _thread to None only gets called if you
# call loop_stop from an external thread (and we're still inside the Paho thread here).
self._mqtt_client._thread = None
logger.debug("Done forcing paho disconnect")
def _create_ssl_context(self):
"""
This method creates the SSLContext object used by Paho to authenticate the connection.
@ -210,12 +308,17 @@ class MQTTTransport(object):
logger.debug("creating a SSL context")
ssl_context = ssl.SSLContext(protocol=ssl.PROTOCOL_TLSv1_2)
if self._ca_cert:
ssl_context.load_verify_locations(cadata=self._ca_cert)
if self._server_verification_cert:
ssl_context.load_verify_locations(cadata=self._server_verification_cert)
else:
ssl_context.load_default_certs()
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
if self._cipher:
try:
ssl_context.set_ciphers(self._cipher)
except ssl.SSLError as e:
# TODO: custom error with more detail?
raise e
if self._x509_cert is not None:
logger.debug("configuring SSL context with client-side certificate and key")
@ -225,6 +328,9 @@ class MQTTTransport(object):
self._x509_cert.pass_phrase,
)
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.check_hostname = True
return ssl_context
def connect(self, password=None):
@ -235,45 +341,118 @@ class MQTTTransport(object):
The password is not required if the transport was instantiated with an x509 certificate.
If MQTT connection has been proxied, connection will take a bit longer to allow negotiation
with the proxy server. Any errors in the proxy connection process will trigger exceptions.
:param str password: The password for connecting with the MQTT broker (Optional).
:raises: ConnectionFailedError if connection could not be established.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: UnauthorizedError if there is an error authenticating.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("connecting to mqtt broker")
self._mqtt_client.username_pw_set(username=self._username, password=password)
rc = self._mqtt_client.connect(host=self._hostname, port=8883)
try:
if self._websockets:
logger.info("Connect using port 443 (websockets)")
rc = self._mqtt_client.connect(
host=self._hostname, port=443, keepalive=DEFAULT_KEEPALIVE
)
else:
logger.info("Connect using port 8883 (TCP)")
rc = self._mqtt_client.connect(
host=self._hostname, port=8883, keepalive=DEFAULT_KEEPALIVE
)
except socket.error as e:
# Certain errors are translated to more specific exception types
# to stop the retry logic from retrying them.
if (
isinstance(e, ssl.SSLError)
and e.strerror is not None
and "CERTIFICATE_VERIFY_FAILED" in e.strerror
):
raise exceptions.TlsExchangeAuthError(cause=e)
elif isinstance(e, socks.ProxyError):
if isinstance(e, socks.SOCKS5AuthError):
# TODO This is the only one I felt like specializing
raise exceptions.UnauthorizedError(cause=e)
else:
raise exceptions.ProtocolProxyError(cause=e)
else:
# If the socket can't open (e.g. using iptables REJECT), we get a
# socket.error. Convert this into ConnectionFailedError so we can retry
raise exceptions.ConnectionFailedError(cause=e)
except socks.ProxyError as pe:
if isinstance(pe, socks.SOCKS5AuthError):
raise exceptions.UnauthorizedError(cause=pe)
else:
raise exceptions.ProtocolProxyError(cause=pe)
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during connect", cause=e
)
logger.debug("_mqtt_client.connect returned rc={}".format(rc))
if rc:
raise _create_error_from_rc_code(rc)
self._mqtt_client.loop_start()
def reconnect(self, password=None):
def reauthorize_connection(self, password=None):
"""
Reconnect to the MQTT broker, using username set at instantiation.
Reauthorize with the MQTT broker, using username set at instantiation.
Connect should have previously been called in order to use this function.
The password is not required if the transport was instantiated with an x509 certificate.
:param str password: The password for reconnecting with the MQTT broker (Optional).
:param str password: The password for reauthorizing with the MQTT broker (Optional).
:raises: ConnectionFailedError if connection could not be established.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: UnauthorizedError if there is an error authenticating.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("reconnecting MQTT client")
logger.info("reauthorizing MQTT client")
self._mqtt_client.username_pw_set(username=self._username, password=password)
try:
rc = self._mqtt_client.reconnect()
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during reconnect", cause=e
)
logger.debug("_mqtt_client.reconnect returned rc={}".format(rc))
if rc:
# This could result in ConnectionFailedError, ConnectionDroppedError, UnauthorizedError
# or ProtocolClientError
raise _create_error_from_rc_code(rc)
def disconnect(self):
"""
Disconnect from the MQTT broker.
:raises: ProtocolClientError if there is some client error.
"""
logger.info("disconnecting MQTT client")
try:
rc = self._mqtt_client.disconnect()
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during disconnect", cause=e
)
logger.debug("_mqtt_client.disconnect returned rc={}".format(rc))
self._mqtt_client.loop_stop()
if rc:
raise _create_error_from_rc_code(rc)
# This could result in ConnectionDroppedError or ProtocolClientError
err = _create_error_from_rc_code(rc)
# If we get a ConnectionDroppedError, swallow it, because we have successfully disconnected!
if type(err) is exceptions.ConnectionDroppedError:
logger.warning("Dropped connection while disconnecting - swallowing error")
pass
else:
raise err
def subscribe(self, topic, qos=1, callback=None):
"""
@ -283,14 +462,25 @@ class MQTTTransport(object):
:param int qos: the desired quality of service level for the subscription. Defaults to 1.
:param callback: A callback to be triggered upon completion (Optional).
:return: message ID for the subscribe request
:raises: ValueError if qos is not 0, 1 or 2
:raises: ValueError if topic is None or has zero string length
:return: message ID for the subscribe request.
:raises: ValueError if qos is not 0, 1 or 2.
:raises: ValueError if topic is None or has zero string length.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("subscribing to {} with qos {}".format(topic, qos))
try:
(rc, mid) = self._mqtt_client.subscribe(topic, qos=qos)
except ValueError:
raise
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during subscribe", cause=e
)
logger.debug("_mqtt_client.subscribe returned rc={}".format(rc))
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
raise _create_error_from_rc_code(rc)
self._op_manager.establish_operation(mid, callback)
@ -301,12 +491,22 @@ class MQTTTransport(object):
:param str topic: a single string which is the subscription topic to unsubscribe from.
:param callback: A callback to be triggered upon completion (Optional).
:raises: ValueError if topic is None or has zero string length
:raises: ValueError if topic is None or has zero string length.
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("unsubscribing from {}".format(topic))
try:
(rc, mid) = self._mqtt_client.unsubscribe(topic)
except ValueError:
raise
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during unsubscribe", cause=e
)
logger.debug("_mqtt_client.unsubscribe returned rc={}".format(rc))
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
raise _create_error_from_rc_code(rc)
self._op_manager.establish_operation(mid, callback)
@ -315,7 +515,8 @@ class MQTTTransport(object):
Send a message via the MQTT broker.
:param str topic: topic: The topic that the message should be published on.
:param str payload: The actual message to send.
:param payload: The actual message to send.
:type payload: str, bytes, int, float or None
:param int qos: the desired quality of service level for the subscription. Defaults to 1.
:param callback: A callback to be triggered upon completion (Optional).
@ -323,11 +524,24 @@ class MQTTTransport(object):
:raises: ValueError if topic is None or has zero string length
:raises: ValueError if topic contains a wildcard ("+")
:raises: ValueError if the length of the payload is greater than 268435455 bytes
:raises: TypeError if payload is not a valid type
:raises: ConnectionDroppedError if connection is dropped during execution.
:raises: ProtocolClientError if there is some other client error.
"""
logger.info("publishing on {}".format(topic))
try:
(rc, mid) = self._mqtt_client.publish(topic=topic, payload=payload, qos=qos)
except ValueError:
raise
except TypeError:
raise
except Exception as e:
raise exceptions.ProtocolClientError(
message="Unexpected Paho failure during publish", cause=e
)
logger.debug("_mqtt_client.publish returned rc={}".format(rc))
if rc:
# This could result in ConnectionDroppedError or ProtocolClientError
raise _create_error_from_rc_code(rc)
self._op_manager.establish_operation(mid, callback)
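Each of `subscribe`, `unsubscribe`, and `publish` ends by handing the Paho MID to `self._op_manager.establish_operation(mid, callback)`; the matching `on_*` callback later calls `complete_operation(mid)` to fire the stored callback. A minimal sketch of that MID-to-callback bookkeeping (an assumed simplified shape, not the real `OperationManager`):

```python
import threading

class SimpleOperationManager(object):
    # Assumed simplified shape: map each in-flight MID to its completion
    # callback under a lock, then pop and fire the callback when the
    # broker acknowledges that MID.
    def __init__(self):
        self._lock = threading.Lock()
        self._callbacks = {}

    def establish_operation(self, mid, callback=None):
        with self._lock:
            if callback:
                self._callbacks[mid] = callback

    def complete_operation(self, mid):
        with self._lock:
            callback = self._callbacks.pop(mid, None)
        if callback:
            callback()  # fire outside the lock to avoid deadlocks

results = []
mgr = SimpleOperationManager()
mgr.establish_operation(42, lambda: results.append(42))
mgr.complete_operation(42)
```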
@ -385,7 +599,7 @@ class OperationManager(object):
logger.error("Unexpected error calling callback for MID: {}".format(mid))
logger.error(traceback.format_exc())
else:
logger.warning("No callback for MID: {}".format(mid))
logger.exception("No callback for MID: {}".format(mid))
def complete_operation(self, mid):
"""Complete an operation identified by MID and trigger the associated completion callback.


@ -7,3 +7,4 @@ INTERNAL USAGE ONLY
from .pipeline_events_base import PipelineEvent
from .pipeline_ops_base import PipelineOperation
from .pipeline_stages_base import PipelineStage
from .pipeline_exceptions import OperationCancelled


@ -0,0 +1,47 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import six
import abc
logger = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class BasePipelineConfig(object):
"""A base class for storing all configurations/options shared across the Azure IoT Python Device Client Library.
More specific configurations such as those that only apply to the IoT Hub Client will be found in the respective
config files.
"""
def __init__(self, websockets=False, cipher="", proxy_options=None):
"""Initializer for BasePipelineConfig
:param bool websockets: Enabling/disabling websockets in MQTT. This feature is relevant
if a firewall blocks port 8883 from use.
:param cipher: Optional cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
"""
self.websockets = websockets
self.cipher = self._sanitize_cipher(cipher)
self.proxy_options = proxy_options
@staticmethod
def _sanitize_cipher(cipher):
"""Sanitize the cipher input and convert to a string in OpenSSL list format
"""
if isinstance(cipher, list):
cipher = ":".join(cipher)
if isinstance(cipher, str):
cipher = cipher.upper()
cipher = cipher.replace("_", "-")
else:
raise TypeError("Invalid type for 'cipher'")
return cipher
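`_sanitize_cipher` normalizes either a list or a string into the OpenSSL cipher-list format that `ssl_context.set_ciphers()` expects. A standalone copy of the normalization, for illustration:

```python
def sanitize_cipher(cipher):
    # Same normalization as BasePipelineConfig._sanitize_cipher: join list
    # entries with ":", uppercase, and turn IANA-style underscores into the
    # dashes that OpenSSL cipher-list strings use.
    if isinstance(cipher, list):
        cipher = ":".join(cipher)
    if isinstance(cipher, str):
        cipher = cipher.upper()
        cipher = cipher.replace("_", "-")
    else:
        raise TypeError("Invalid type for 'cipher'")
    return cipher

normalized = sanitize_cipher(["tls_aes_128_gcm_sha256", "ecdhe-rsa-aes128-gcm-sha256"])
```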


@ -1,137 +0,0 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import sys
from . import pipeline_thread
from azure.iot.device.common import unhandled_exceptions
from six.moves import queue
logger = logging.getLogger(__name__)
@pipeline_thread.runs_on_pipeline_thread
def delegate_to_different_op(stage, original_op, new_op):
"""
Continue an operation using a new operation. This means that the new operation
will be passed down the pipeline (starting at the next stage). When that new
operation completes, the original operation will also complete. In this way,
a stage can accept one type of operation and, effectively, change that operation
into a different type of operation before passing it to the next stage.
This is useful when a generic operation (such as "enable feature") needs to be
converted into a more specific operation (such as "subscribe to mqtt topic").
In that case, a stage's _execute_op function would call this function passing in
the original "enable feature" op and the new "subscribe to mqtt topic"
op. This function will pass the "subscribe" down. When the "subscribe" op
is completed, this function will cause the original op to complete.
This function is only really useful if there is no data returned in the
new_op that needs to be copied back into the original_op before
completing it. If data needs to be copied this way, some other method needs
to be used. (or a "copy data back" function needs to be added to this function
as an optional parameter.)
:param PipelineStage stage: stage to delegate the operation to
:param PipelineOperation original_op: Operation that is being continued using a
different op. This is most likely the operation that is currently being handled
by the stage. This operation is not actually continued, in that it is not
actually passed down the pipeline. Instead, the original_op operation is
effectively paused while we wait for the new_op operation to complete. When
the new_op operation completes, the original_op operation will also be completed.
:param PipelineOperation new_op: Operation that is being passed down the pipeline
to effectively continue the work represented by original_op. This is most likely
a different type of operation that is able to accomplish the intention of the
original_op in a way that is more specific than the original_op.
"""
logger.debug("{}({}): continuing with {} op".format(stage.name, original_op.name, new_op.name))
@pipeline_thread.runs_on_pipeline_thread
def new_op_complete(op):
logger.debug(
"{}({}): completing with result from {}".format(
stage.name, original_op.name, new_op.name
)
)
original_op.error = new_op.error
complete_op(stage, original_op)
new_op.callback = new_op_complete
pass_op_to_next_stage(stage, new_op)
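The delegation flow above can be sketched with stand-in objects. The `Op` class and the `run_next` callable below are illustrative placeholders, not the SDK's actual classes:

```python
# Minimal sketch of delegating an operation to a new operation.
# `Op` and `run_next` are illustrative stand-ins, not the SDK's API.
class Op:
    def __init__(self, name, callback=None):
        self.name = name
        self.callback = callback
        self.error = None

def delegate(original_op, new_op, run_next):
    # When the new op completes, copy its error and complete the original op
    def new_op_complete(op):
        original_op.error = op.error
        if original_op.callback:
            original_op.callback(original_op)
    new_op.callback = new_op_complete
    run_next(new_op)  # pass the new op down to the next stage

completed = []
original = Op("EnableFeature", callback=lambda op: completed.append(op.name))
subscribe = Op("MQTTSubscribe")

# Here the "next stage" simply completes whatever op it receives
delegate(original, subscribe, run_next=lambda op: op.callback(op))
```

Completing the stand-in "subscribe" op is what completes the original "enable feature" op, mirroring the behavior described in the docstring.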
@pipeline_thread.runs_on_pipeline_thread
def pass_op_to_next_stage(stage, op):
"""
Helper function to continue a given operation by passing it to the next stage
in the pipeline. If there is no next stage in the pipeline, this function
will fail the operation and call complete_op to return the failure back up the
pipeline. If the operation is already in an error state, this function will
complete the operation in order to return that error to the caller.
:param PipelineStage stage: stage that the operation is being passed from
:param PipelineOperation op: Operation which is being passed on
"""
if op.error:
logger.error("{}({}): op has error. completing.".format(stage.name, op.name))
complete_op(stage, op)
elif not stage.next:
logger.error("{}({}): no next stage. completing with error".format(stage.name, op.name))
op.error = NotImplementedError(
"{} not handled after {} stage with no next stage".format(op.name, stage.name)
)
complete_op(stage, op)
else:
logger.debug("{}({}): passing to next stage.".format(stage.name, op.name))
stage.next.run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def complete_op(stage, op):
"""
Helper function to complete an operation by calling its callback function thus
returning the result of the operation back up the pipeline. This is preferred to
calling the operation's callback directly as it provides several layers of protection
(such as a try/except wrapper) which are strongly advised.
"""
if op.error:
logger.error("{}({}): completing with error {}".format(stage.name, op.name, op.error))
else:
logger.debug("{}({}): completing without error".format(stage.name, op.name))
try:
op.callback(op)
except Exception as e:
_, e, _ = sys.exc_info()
logger.error(
msg="Unhandled error calling back inside {}.complete_op() after {} complete".format(
stage.name, op.name
),
exc_info=e,
)
unhandled_exceptions.exception_caught_in_background_thread(e)
@pipeline_thread.runs_on_pipeline_thread
def pass_event_to_previous_stage(stage, event):
"""
Helper function to pass an event to the previous stage of the pipeline. This is the default
behavior of events while traveling through the pipeline. They start somewhere (maybe the
bottom) and move up the pipeline until they're handled or until they error out.
"""
if stage.previous:
logger.debug(
"{}({}): pushing event up to {}".format(stage.name, event.name, stage.previous.name)
)
stage.previous.handle_pipeline_event(event)
else:
logger.error("{}({}): Error: unhandled event".format(stage.name, event.name))
error = NotImplementedError(
"{} unhandled at {} stage with no previous stage".format(event.name, stage.name)
)
unhandled_exceptions.exception_caught_in_background_thread(error)


@ -33,34 +33,49 @@ class PipelineEvent(object):
self.name = self.__class__.__name__
class ResponseEvent(PipelineEvent):
"""
A PipelineEvent object which is the second part of a RequestAndResponseOperation operation
(the response). The RequestAndResponseOperation represents the common operation of sending
a request to iothub with a request_id ($rid) value and waiting for a response with
the same $rid value. This convention is used by both Twin and Provisioning features.
The response represented by this event has not yet been matched to the corresponding
RequestOperation operation. That matching is done by the CoordinateRequestAndResponseStage
stage which takes the contents of this event and puts it into the RequestAndResponseOperation
operation with the matching $rid value.
:ivar request_id: The request ID which will eventually be used to match a RequestOperation
operation to this event.
:type request_id: str
:ivar status_code: The status code returned by the response. Any value under 300 is
considered success.
:type status_code: int
:ivar response_body: The body of the response.
:type response_body: str
:ivar retry_after: A retry interval value that was extracted from the topic.
:type retry_after: int
"""
def __init__(self, request_id, status_code, response_body, retry_after=None):
super(ResponseEvent, self).__init__()
self.request_id = request_id
self.status_code = status_code
self.response_body = response_body
self.retry_after = retry_after
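The $rid matching performed by CoordinateRequestAndResponseStage can be pictured as a dictionary of pending callbacks keyed by request ID. The sketch below is a simplified illustration, not the SDK's actual stage:

```python
# Simplified sketch of request/response correlation by $rid; not the SDK's stage.
import uuid

pending = {}  # request_id -> callback waiting for the matching response

def send_request(on_response):
    request_id = str(uuid.uuid4())
    pending[request_id] = on_response
    return request_id

def handle_response_event(request_id, status_code, response_body):
    # Match the incoming response back to the request that issued it
    on_response = pending.pop(request_id)
    on_response(status_code, response_body)

results = []
rid = send_request(lambda status, body: results.append((status, body)))
handle_response_event(rid, 200, '{"desired": {}}')
```

Once a response is matched, its pending entry is removed, so each $rid is resolved at most once.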
class ConnectedEvent(PipelineEvent):
"""
A PipelineEvent object indicating a connection has been established.
"""
pass
class DisconnectedEvent(PipelineEvent):
"""
A PipelineEvent object indicating a connection has been dropped.
"""
pass


@ -0,0 +1,40 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines exceptions that may be raised from a pipeline"""
from azure.iot.device.common.chainable_exception import ChainableException
class PipelineException(ChainableException):
"""Generic pipeline exception"""
pass
class OperationCancelled(PipelineException):
"""Operation was cancelled"""
pass
class OperationError(PipelineException):
"""Error while executing an Operation"""
pass
class PipelineTimeoutError(PipelineException):
"""
Pipeline operation timed out
"""
pass
class PipelineError(PipelineException):
"""Error caused by incorrect pipeline configuration"""
pass


@ -3,6 +3,14 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import sys
import logging
import traceback
from . import pipeline_exceptions
from . import pipeline_thread
from azure.iot.device.common import handle_exceptions
logger = logging.getLogger(__name__)
class PipelineOperation(object):
@ -22,7 +30,7 @@ class PipelineOperation(object):
successfully or with a failure.
:type callback: Function
:ivar needs_connection: This is an attribute that indicates whether a particular operation
requires a connection to operate. This is currently used by the EnsureConnectionStage
requires a connection to operate. This is currently used by the AutoConnectStage
stage, but this functionality will be revamped shortly.
:type needs_connection: Boolean
:ivar error: The presence of a value in the error attribute indicates that the operation failed,
@ -30,7 +38,7 @@ class PipelineOperation(object):
:type error: Error
"""
def __init__(self, callback):
"""
Initializer for PipelineOperation objects.
@ -43,10 +51,171 @@ class PipelineOperation(object):
"Cannot instantiate PipelineOperation object. You need to use a derived class"
)
self.name = self.__class__.__name__
self.callback_stack = []
self.needs_connection = False
self.completed = False # Operation has been fully completed
self.completing = False # Operation is in the process of completing
self.error = None # Error associated with Operation completion
self.add_callback(callback)
def add_callback(self, callback):
"""Adds a callback to the Operation that will be triggered upon Operation completion.
When an Operation is completed, all callbacks will be resolved in LIFO order.
Callbacks cannot be added to an already completed operation, or an operation that is
currently undergoing a completion process.
:param callback: The callback to add to the operation.
:raises: OperationError if the operation is already completed, or is in the process of
completing.
"""
if self.completed:
raise pipeline_exceptions.OperationError(
"{}: Attempting to add a callback to an already-completed operation!".format(
self.name
)
)
if self.completing:
raise pipeline_exceptions.OperationError(
"{}: Attempting to add a callback to an operation with completion in progress!".format(
self.name
)
)
else:
self.callback_stack.append(callback)
@pipeline_thread.runs_on_pipeline_thread
def complete(self, error=None):
""" Complete the operation, and trigger all callbacks in LIFO order.
The operation is completed successfully by default, or completed unsuccessfully if an error
is provided.
An operation that is already fully completed, or in the process of completion cannot be
completed again.
This process can be halted if a callback for the operation invokes the .halt_completion()
method on this Operation.
:param error: Optionally provide an Exception object indicating the error that caused
the completion. Providing an error indicates that the operation was unsuccessful.
"""
if error:
logger.error("{}: completing with error {}".format(self.name, error))
else:
logger.debug("{}: completing without error".format(self.name))
if self.completed or self.completing:
logger.error("{}: has already been completed!".format(self.name))
e = pipeline_exceptions.OperationError(
"Attempting to complete an already-completed operation: {}".format(self.name)
)
# This could happen in a foreground or background thread, so err on the side of caution
# and send it to the background handler.
handle_exceptions.handle_background_exception(e)
else:
# Operation is now in the process of completing
self.completing = True
self.error = error
while self.callback_stack:
if not self.completing:
logger.debug("{}: Completion halted!".format(self.name))
break
if self.completed:
# This block should never be reached - this is an invalid state.
# If this block is reached, there is a bug in the code.
logger.error(
"{}: Invalid State! Operation completed while resolving completion".format(
self.name
)
)
e = pipeline_exceptions.OperationError(
"Operation reached fully completed state while still resolving completion: {}".format(
self.name
)
)
handle_exceptions.handle_background_exception(e)
break
callback = self.callback_stack.pop()
try:
callback(op=self, error=error)
except Exception as e:
logger.error(
"Unhandled error while triggering callback for {}".format(self.name)
)
logger.error(traceback.format_exc())
# This could happen in a foreground or background thread, so err on the side of caution
# and send it to the background handler.
handle_exceptions.handle_background_exception(e)
if self.completing:
# Operation is now completed, no longer in the process of completing
self.completing = False
self.completed = True
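The LIFO resolution described above can be demonstrated with a reduced stand-in class (not the SDK's `PipelineOperation`; error handling and threading are omitted):

```python
# Reduced stand-in showing LIFO callback resolution; not the SDK's class.
class Operation:
    def __init__(self, callback):
        self.callback_stack = [callback]
        self.completing = False
        self.completed = False
        self.error = None

    def add_callback(self, callback):
        if self.completed or self.completing:
            raise RuntimeError("cannot add a callback once completion has started")
        self.callback_stack.append(callback)

    def complete(self, error=None):
        self.completing = True
        self.error = error
        while self.callback_stack and self.completing:
            callback = self.callback_stack.pop()  # LIFO: last added runs first
            callback(op=self, error=error)
        if self.completing:
            self.completing = False
            self.completed = True

order = []
op = Operation(callback=lambda op, error: order.append("added first"))
op.add_callback(lambda op, error: order.append("added second"))
op.complete()
```

The callback added last fires first, so `order` ends up as `["added second", "added first"]`.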
@pipeline_thread.runs_on_pipeline_thread
def halt_completion(self):
"""Halt the completion of an operation that is currently undergoing a completion process
as a result of a call to .complete().
Completion cannot be halted if there is no currently ongoing completion process. The only
way to successfully invoke this method is from within a callback on the Operation in
question.
This method will leave any yet-untriggered callbacks on the Operation to be triggered upon
a later completion.
This method will clear any error associated with the currently ongoing completion process
from the Operation.
"""
if not self.completing:
logger.error("{}: is not currently in the process of completion!".format(self.name))
e = pipeline_exceptions.OperationError(
"Attempting to halt completion of an operation not in the process of completion: {}".format(
self.name
)
)
handle_exceptions.handle_background_exception(e)
else:
logger.debug("{}: Halting completion...".format(self.name))
self.completing = False
self.error = None
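Halting can only happen from inside a callback, and it leaves any remaining callbacks to fire on a later completion. A reduced sketch (stand-in class, not the SDK's):

```python
# Stand-in sketch of halting an in-progress completion; not the SDK's class.
class Operation:
    def __init__(self):
        self.callback_stack = []
        self.completing = False
        self.completed = False
        self.error = None

    def complete(self, error=None):
        self.completing = True
        self.error = error
        while self.callback_stack:
            if not self.completing:
                break  # a callback halted the completion
            self.callback_stack.pop()(self, error)
        if self.completing:
            self.completing = False
            self.completed = True

    def halt_completion(self):
        self.completing = False
        self.error = None  # clear the error from the halted completion

fired = []
op = Operation()
op.callback_stack.append(lambda op, error: fired.append("outer"))
op.callback_stack.append(lambda op, error: (fired.append("inner"), op.halt_completion()))

op.complete(error=ValueError("boom"))  # inner callback halts; outer is deferred
op.complete()                          # later completion triggers the outer callback
```

The first completion stops after the inner callback halts it, and the error is cleared; the second completion resumes with the callbacks that were left on the stack.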
@pipeline_thread.runs_on_pipeline_thread
def spawn_worker_op(self, worker_op_type, **kwargs):
"""Create and return a new operation, which, when completed, will complete the operation
it was spawned from.
:param worker_op_type: The type (class) of the new worker operation.
:param **kwargs: The arguments to instantiate the new worker operation with. Note that a
callback is not required, but if provided, will be triggered prior to completing the
operation that spawned the worker operation.
:returns: A new worker operation of the type specified in the worker_op_type parameter.
"""
logger.debug("{}: creating worker op of type {}".format(self.name, worker_op_type.__name__))
@pipeline_thread.runs_on_pipeline_thread
def on_worker_op_complete(op, error):
logger.debug("{}: Worker op ({}) has been completed".format(self.name, op.name))
self.complete(error=error)
if "callback" in kwargs:
provided_callback = kwargs["callback"]
kwargs["callback"] = on_worker_op_complete
worker_op = worker_op_type(**kwargs)
worker_op.add_callback(provided_callback)
else:
kwargs["callback"] = on_worker_op_complete
worker_op = worker_op_type(**kwargs)
return worker_op
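The worker-op relationship can be sketched as follows (stand-in classes; the real method also guards against thread misuse):

```python
# Stand-in sketch of spawn_worker_op: completing the worker completes its parent.
class Op:
    def __init__(self, callback):
        self.callbacks = [callback]
        self.error = None

    def complete(self, error=None):
        self.error = error
        while self.callbacks:
            self.callbacks.pop()(self, error)  # LIFO order

    def spawn_worker_op(self, worker_op_type, callback=None):
        def on_worker_op_complete(op, error):
            self.complete(error=error)
        worker = worker_op_type(callback=on_worker_op_complete)
        if callback:
            # A provided callback runs before the parent op is completed
            worker.callbacks.append(callback)
        return worker

class WorkerOp(Op):
    pass

events = []
parent = Op(callback=lambda op, error: events.append("parent completed"))
worker = parent.spawn_worker_op(WorkerOp, callback=lambda op, error: events.append("worker callback"))
worker.complete()
```

Because callbacks resolve in LIFO order, the caller-provided callback fires first, then the parent operation is completed, as the docstring describes.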
class ConnectOperation(PipelineOperation):
"""
@ -57,17 +226,19 @@ class ConnectOperation(PipelineOperation):
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
def __init__(self, callback):
self.retry_timer = None
super(ConnectOperation, self).__init__(callback)
class ReauthorizeConnectionOperation(PipelineOperation):
"""
A PipelineOperation object which tells the pipeline to reauthorize the connection to whatever service it is connected to.
Clients will most-likely submit a ReauthorizeConnectionOperation when some credential (such as a sas token) has changed and the protocol client
needs to re-establish the connection to refresh the credentials.
This operation is in the group of base operations because reauthorizing is a common operation that many clients might need to do.
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
@ -101,7 +272,7 @@ class EnableFeatureOperation(PipelineOperation):
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
def __init__(self, feature_name, callback):
"""
Initializer for EnableFeatureOperation objects.
@ -129,7 +300,7 @@ class DisableFeatureOperation(PipelineOperation):
Even though this is a base operation, it will most likely be handled by a more specific stage (such as an IoTHub or MQTT stage).
"""
def __init__(self, feature_name, callback):
"""
Initializer for DisableFeatureOperation objects.
@ -154,7 +325,7 @@ class UpdateSasTokenOperation(PipelineOperation):
(such as IoTHub or MQTT stages).
"""
def __init__(self, sas_token, callback):
"""
Initializer for UpdateSasTokenOperation objects.
@ -168,7 +339,7 @@ class UpdateSasTokenOperation(PipelineOperation):
self.sas_token = sas_token
class RequestAndResponseOperation(PipelineOperation):
"""
A PipelineOperation object which wraps the common operation of sending a request to iothub with a request_id ($rid)
value and waiting for a response with the same $rid value. This convention is used by both Twin and Provisioning
@ -185,11 +356,15 @@ class SendIotRequestAndWaitForResponseOperation(PipelineOperation):
:type status_code: int
:ivar response_body: The body of the response.
:type response_body: Undefined
:ivar query_params: Any query parameters that need to be sent with the request.
Example is the id of the operation as returned by the initial provisioning request.
"""
def __init__(
self, request_type, method, resource_location, request_body, callback, query_params=None
):
"""
Initializer for RequestAndResponseOperation objects
:param str request_type: The type of request. This is a string which is used by protocol-specific stages to
generate the actual request. For example, if request_type is "twin", then the iothub_mqtt stage will convert
@ -204,29 +379,37 @@ class SendIotRequestAndWaitForResponseOperation(PipelineOperation):
failed. The callback function must accept a PipelineOperation object which indicates
the specific operation which has completed or failed.
"""
super(RequestAndResponseOperation, self).__init__(callback=callback)
self.request_type = request_type
self.method = method
self.resource_location = resource_location
self.request_body = request_body
self.status_code = None
self.response_body = None
self.query_params = query_params
class RequestOperation(PipelineOperation):
"""
A PipelineOperation object which is the first part of a RequestAndResponseOperation operation (the request). The second
part of the RequestAndResponseOperation operation (the response) is returned via a ResponseEvent event.
Even though this is a base operation, it will most likely be generated and also handled by more specific stages
(such as IoTHub or MQTT stages).
"""
def __init__(
self,
request_type,
method,
resource_location,
request_body,
request_id,
callback,
query_params=None,
):
"""
Initializer for RequestOperation objects
:param str request_type: The type of request. This is a string which is used by protocol-specific stages to
generate the actual request. For example, if request_type is "twin", then the iothub_mqtt stage will convert
@ -240,10 +423,13 @@ class SendIotRequestOperation(PipelineOperation):
:param Function callback: The function that gets called when this operation is complete or has
failed. The callback function must accept a PipelineOperation object which indicates
the specific operation which has completed or failed.
:param query_params: Any query parameters that need to be sent with the request.
Example is the id of the operation as returned by the initial provisioning request.
"""
super(RequestOperation, self).__init__(callback=callback)
self.method = method
self.resource_location = resource_location
self.request_type = request_type
self.request_body = request_body
self.request_id = request_id
self.query_params = query_params


@ -0,0 +1,65 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from . import PipelineOperation
class SetHTTPConnectionArgsOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to connect to a server using the HTTP protocol.
This operation is in the group of HTTP operations because its attributes are very specific to the HTTP protocol.
"""
def __init__(
self, hostname, callback, server_verification_cert=None, client_cert=None, sas_token=None
):
"""
Initializer for SetHTTPConnectionArgsOperation objects.
:param str hostname: The hostname of the HTTP server we will eventually connect to
:param str server_verification_cert: (Optional) The server verification certificate to use
if the HTTP server that we're going to connect to uses server-side TLS
:param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect
to the HTTP service
:param str sas_token: The token string which will be used to authenticate with the service
:param Function callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
"""
super(SetHTTPConnectionArgsOperation, self).__init__(callback=callback)
self.hostname = hostname
self.server_verification_cert = server_verification_cert
self.client_cert = client_cert
self.sas_token = sas_token
class HTTPRequestAndResponseOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to execute an HTTP request against a server and receive its response.
This operation is in the group of HTTP operations because its attributes are very specific to the HTTP protocol.
"""
def __init__(self, method, path, headers, body, query_params, callback):
"""
Initializer for HTTPRequestAndResponseOperation objects.
:param str method: The HTTP method used in the request
:param str path: The path to be used in the request url
:param dict headers: The headers to be used in the HTTP request
:param str body: The body to be provided with the HTTP request
:param str query_params: The query parameters to be used in the request url
:param Function callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
"""
super(HTTPRequestAndResponseOperation, self).__init__(callback=callback)
self.method = method
self.path = path
self.headers = headers
self.body = body
self.query_params = query_params
self.status_code = None
self.response_body = None
self.reason = None


@ -18,10 +18,10 @@ class SetMQTTConnectionArgsOperation(PipelineOperation):
client_id,
hostname,
username,
callback,
server_verification_cert=None,
client_cert=None,
sas_token=None,
):
"""
Initializer for SetMQTTConnectionArgsOperation objects.
@ -29,8 +29,8 @@ class SetMQTTConnectionArgsOperation(PipelineOperation):
:param str client_id: The client identifier to use when connecting to the MQTT server
:param str hostname: The hostname of the MQTT server we will eventually connect to
:param str username: The username to use when connecting to the MQTT server
:param str server_verification_cert: (Optional) The server verification certificate to use
if the MQTT server that we're going to connect to uses server-side TLS
:param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect
to the MQTT service
:param str sas_token: The token string which will be used to authenticate with the service
@ -42,7 +42,7 @@ class SetMQTTConnectionArgsOperation(PipelineOperation):
self.client_id = client_id
self.hostname = hostname
self.username = username
self.server_verification_cert = server_verification_cert
self.client_cert = client_cert
self.sas_token = sas_token
@ -54,7 +54,7 @@ class MQTTPublishOperation(PipelineOperation):
This operation is in the group of MQTT operations because its attributes are very specific to the MQTT protocol.
"""
def __init__(self, topic, payload, callback):
"""
Initializer for MQTTPublishOperation objects.
@ -68,6 +68,7 @@ class MQTTPublishOperation(PipelineOperation):
self.topic = topic
self.payload = payload
self.needs_connection = True
self.retry_timer = None
class MQTTSubscribeOperation(PipelineOperation):
@ -77,7 +78,7 @@ class MQTTSubscribeOperation(PipelineOperation):
This operation is in the group of MQTT operations because its attributes are very specific to the MQTT protocol.
"""
def __init__(self, topic, callback):
"""
Initializer for MQTTSubscribeOperation objects.
@ -89,6 +90,8 @@ class MQTTSubscribeOperation(PipelineOperation):
super(MQTTSubscribeOperation, self).__init__(callback=callback)
self.topic = topic
self.needs_connection = True
self.timeout_timer = None
self.retry_timer = None
class MQTTUnsubscribeOperation(PipelineOperation):
@ -98,7 +101,7 @@ class MQTTUnsubscribeOperation(PipelineOperation):
This operation is in the group of MQTT operations because its attributes are very specific to the MQTT protocol.
"""
def __init__(self, topic, callback):
"""
Initializer for MQTTUnsubscribeOperation objects.
@ -110,3 +113,5 @@ class MQTTUnsubscribeOperation(PipelineOperation):
super(MQTTUnsubscribeOperation, self).__init__(callback=callback)
self.topic = topic
self.needs_connection = True
self.timeout_timer = None
self.retry_timer = None


@ -7,13 +7,19 @@
import logging
import abc
import six
import sys
import time
import traceback
import uuid
import weakref
from six.moves import queue
import threading
from . import pipeline_events_base
from . import pipeline_ops_base
from . import operation_flow
from . import pipeline_ops_base, pipeline_ops_mqtt
from . import pipeline_thread
from azure.iot.device.common import unhandled_exceptions
from . import pipeline_exceptions
from azure.iot.device.common import handle_exceptions, transport_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
logger = logging.getLogger(__name__)
@ -43,11 +49,11 @@ class PipelineStage(object):
(use an auth provider) and converts it into something more generic (here is your device_id, etc, and use
this SAS token when connecting).
An example of a generic-to-specific stage is IoTHubMQTTTranslationStage which converts IoTHub operations
(such as SendD2CMessageOperation) to MQTT operations (such as Publish).
Each stage should also work in the broadest domain possible. For example a generic stage (say
"AutoConnectStage") that initiates a connection if any arbitrary operation needs a connection is more useful
than having some MQTT-specific code that re-connects to the MQTT broker if the user calls Publish and
there's no connection.
@ -81,7 +87,7 @@ class PipelineStage(object):
def run_op(self, op):
"""
Run the given operation. This is the public function that outside callers would call to run an
operation. Derived classes should override the private _execute_op function to implement
operation. Derived classes should override the private _run_op function to implement
stage-specific behavior. When run_op returns, that doesn't mean that the operation has executed
to completion. Rather, it means that the pipeline has done something that will cause the
operation to eventually execute to completion. That might mean that something was sent over
@ -92,29 +98,29 @@ class PipelineStage(object):
:param PipelineOperation op: The operation to run.
"""
logger.debug("{}({}): running".format(self.name, op.name))
try:
self._run_op(op)
except Exception as e:
# This path is ONLY for unexpected errors. Expected errors should cause a fail completion
# within ._run_op()
# Do not use exc_info parameter on logger.error. This causes pytest to save the traceback which saves stack frames which shows up as a leak
logger.error(msg="Unexpected error in {}._run_op() call".format(self))
logger.error(traceback.format_exc())
op.complete(error=e)
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
"""
Implementation of the stage-specific function of .run_op(). Override this method instead of
.run_op() in child classes in order to change how a stage behaves when running an operation.
See the description of the .run_op() method for more discussion on what it means to "run"
an operation.
:param PipelineOperation op: The operation to run.
"""
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def handle_pipeline_event(self, event):
@ -129,10 +135,10 @@ class PipelineStage(object):
try:
self._handle_pipeline_event(event)
except Exception as e:
# Do not use exc_info parameter on logger.error. This causes pytest to save the traceback which saves stack frames which shows up as a leak
logger.error(msg="Unexpected error in {}._handle_pipeline_event() call".format(self))
logger.error(traceback.format_exc())
handle_exceptions.handle_background_exception(e)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
@ -143,23 +149,42 @@ class PipelineStage(object):
:param PipelineEvent event: The event that is being passed back up the pipeline
"""
self.send_event_up(event)
@pipeline_thread.runs_on_pipeline_thread
def send_op_down(self, op):
"""
Helper function to continue a given operation by passing it to the next stage
in the pipeline. If there is no next stage in the pipeline, this function
will fail the operation and call complete_op to return the failure back up the
pipeline.
:param PipelineOperation op: Operation which is being passed on
"""
if not self.next:
logger.error("{}({}): no next stage. completing with error".format(self.name, op.name))
error = pipeline_exceptions.PipelineError(
"{} not handled after {} stage with no next stage".format(op.name, self.name)
)
op.complete(error=error)
else:
self.next.run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def on_disconnected(self):
def send_event_up(self, event):
"""
Called by lower layers when the protocol client disconnects
Helper function to pass an event to the previous stage of the pipeline. This is the default
behavior of events while traveling through the pipeline. They start somewhere (maybe the
bottom) and move up the pipeline until they're handled or until they error out.
"""
if self.previous:
self.previous.on_disconnected()
self.previous.handle_pipeline_event(event)
else:
logger.error("{}({}): Error: unhandled event".format(self.name, event.name))
error = pipeline_exceptions.PipelineError(
"{} unhandled at {} stage with no previous stage".format(event.name, self.name)
)
handle_exceptions.handle_background_exception(error)
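The `send_op_down` / `send_event_up` helpers above implement a doubly-linked chain of stages: operations travel down via `next`, events travel back up via `previous`. A minimal standalone sketch of that traversal (simplified invented names, no threading decorators; not the SDK's actual classes):

```python
class Stage:
    """Minimal sketch of a pipeline stage: ops go down, events go up."""

    def __init__(self, name):
        self.name = name
        self.next = None        # stage below us
        self.previous = None    # stage above us
        self.log = []

    def run_op(self, op):
        self.log.append(("op", op))
        if self.next:
            self.next.run_op(op)               # like send_op_down
        # A real bottom stage would talk to the transport here.

    def handle_event(self, event):
        self.log.append(("event", event))
        if self.previous:
            self.previous.handle_event(event)  # like send_event_up


def chain(*stages):
    # Link stages top-to-bottom, mirroring PipelineRootStage.append_stage
    for upper, lower in zip(stages, stages[1:]):
        upper.next = lower
        lower.previous = upper
    return stages[0]


root = chain(Stage("root"), Stage("middle"), Stage("transport"))
root.run_op("connect")                      # flows down through every stage
root.next.next.handle_event("connected")    # flows back up from the bottom
```

Each stage sees the op on the way down and the event on the way up, which is why `run_op` and `handle_pipeline_event` are the two entry points every stage must implement.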
class PipelineRootStage(PipelineStage):
@@ -181,42 +206,36 @@ class PipelineRootStage(PipelineStage):
:type on_disconnected_handler: Function
"""
def __init__(self):
def __init__(self, pipeline_configuration):
super(PipelineRootStage, self).__init__()
self.on_pipeline_event_handler = None
self.on_connected_handler = None
self.on_disconnected_handler = None
self.connected = False
self.pipeline_configuration = pipeline_configuration
def run_op(self, op):
op.callback = pipeline_thread.invoke_on_callback_thread_nowait(op.callback)
# CT-TODO: make this more elegant
op.callback_stack[0] = pipeline_thread.invoke_on_callback_thread_nowait(
op.callback_stack[0]
)
pipeline_thread.invoke_on_pipeline_thread(super(PipelineRootStage, self).run_op)(op)
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
"""
run the operation. At the root, the only thing to do is to pass the operation
to the next stage.
:param PipelineOperation op: Operation to run.
"""
operation_flow.pass_op_to_next_stage(self, op)
def append_stage(self, new_next_stage):
def append_stage(self, new_stage):
"""
Add the next stage to the end of the pipeline. This is the function that callers
use to build the pipeline by appending stages. This function returns the root of
the pipeline so that calls to this function can be chained together.
:param PipelineStage new_next_stage: Stage to add to the end of the pipeline
:param PipelineStage new_stage: Stage to add to the end of the pipeline
:returns: The root of the pipeline.
"""
old_tail = self
while old_tail.next:
old_tail = old_tail.next
old_tail.next = new_next_stage
new_next_stage.previous = old_tail
new_next_stage.pipeline_root = self
old_tail.next = new_stage
new_stage.previous = old_tail
new_stage.pipeline_root = self
return self
@pipeline_thread.runs_on_pipeline_thread
@@ -229,42 +248,39 @@ class PipelineRootStage(PipelineStage):
:param PipelineEvent event: Event to be handled, i.e. returned to the caller
through the handle_pipeline_event (if provided).
"""
if self.on_pipeline_event_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_pipeline_event_handler)(event)
else:
logger.warning("incoming pipeline event with no handler. dropping.")
@pipeline_thread.runs_on_pipeline_thread
def on_connected(self):
if isinstance(event, pipeline_events_base.ConnectedEvent):
logger.debug(
"{}: on_connected. on_connected_handler={}".format(
self.name, self.on_connected_handler
)
"{}: ConnectedEvent received. Calling on_connected_handler".format(self.name)
)
self.connected = True
if self.on_connected_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_connected_handler)()
@pipeline_thread.runs_on_pipeline_thread
def on_disconnected(self):
elif isinstance(event, pipeline_events_base.DisconnectedEvent):
logger.debug(
"{}: on_disconnected. on_disconnected_handler={}".format(
self.name, self.on_disconnected_handler
)
"{}: DisconnectedEvent received. Calling on_disconnected_handler".format(self.name)
)
self.connected = False
if self.on_disconnected_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_disconnected_handler)()
else:
if self.on_pipeline_event_handler:
pipeline_thread.invoke_on_callback_thread_nowait(self.on_pipeline_event_handler)(
event
)
else:
logger.warning("incoming pipeline event with no handler. dropping.")
class EnsureConnectionStage(PipelineStage):
class AutoConnectStage(PipelineStage):
"""
This stage is responsible for ensuring that the protocol is connected when
it needs to be connected.
"""
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
# Any operation that requires a connection can trigger a connection if
# we're not connected.
if op.needs_connection and not self.pipeline_root.connected:
@@ -278,90 +294,95 @@ class EnsureConnectionStage(PipelineStage):
# Finally, if this stage doesn't need to do anything else with this operation,
# it just passes it down.
else:
operation_flow.pass_op_to_next_stage(self, op)
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _do_connect(self, op):
"""
Start connecting the transport in response to some operation
"""
# Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_needs_complete = op
# function that gets called after we're connected.
@pipeline_thread.runs_on_pipeline_thread
def on_connect_op_complete(op_connect):
if op_connect.error:
def on_connect_op_complete(op, error):
if error:
logger.error(
"{}({}): Connection failed. Completing with failure because of connection failure: {}".format(
self.name, op.name, op_connect.error
self.name, op_needs_complete.name, error
)
)
op.error = op_connect.error
operation_flow.complete_op(stage=self, op=op)
op_needs_complete.complete(error=error)
else:
logger.debug(
"{}({}): connection is complete. Continuing with op".format(self.name, op.name)
"{}({}): connection is complete. Continuing with op".format(
self.name, op_needs_complete.name
)
operation_flow.pass_op_to_next_stage(stage=self, op=op)
)
self.send_op_down(op_needs_complete)
# call down to the next stage to connect.
logger.debug("{}({}): calling down with Connect operation".format(self.name, op.name))
operation_flow.pass_op_to_next_stage(
self, pipeline_ops_base.ConnectOperation(callback=on_connect_op_complete)
)
self.send_op_down(pipeline_ops_base.ConnectOperation(callback=on_connect_op_complete))
class SerializeConnectOpsStage(PipelineStage):
class ConnectionLockStage(PipelineStage):
"""
This stage is responsible for serializing connect, disconnect, and reconnect ops on
This stage is responsible for serializing connect, disconnect, and reauthorize ops on
the pipeline, such that only a single one of these ops can go past this stage at a
time. This way, we don't have to worry about cases like "what happens if we try to
disconnect if we're in the middle of reconnecting." This stage will wait for the
reconnect to complete before letting the disconnect past.
disconnect if we're in the middle of reauthorizing." This stage will wait for the
reauthorize to complete before letting the disconnect past.
"""
def __init__(self):
super(SerializeConnectOpsStage, self).__init__()
super(ConnectionLockStage, self).__init__()
self.queue = queue.Queue()
self.blocked = False
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
# If this stage is currently blocked (because we're waiting for a connection, etc,
# to complete), we queue up all operations until after the connect completes.
if self.blocked:
logger.info(
"{}({}): pipeline is blocked waiting for a prior connect/disconnect/reconnect to complete. queueing.".format(
"{}({}): pipeline is blocked waiting for a prior connect/disconnect/reauthorize to complete. queueing.".format(
self.name, op.name
)
)
self.queue.put_nowait(op)
elif isinstance(op, pipeline_ops_base.ConnectOperation) and self.pipeline_root.connected:
logger.info("{}({}): Transport is connected. Completing.".format(self.name, op.name))
operation_flow.complete_op(stage=self, op=op)
logger.info(
"{}({}): Transport is already connected. Completing.".format(self.name, op.name)
)
op.complete()
elif (
isinstance(op, pipeline_ops_base.DisconnectOperation)
and not self.pipeline_root.connected
):
logger.info(
"{}({}): Transport is disconnected. Completing.".format(self.name, op.name)
"{}({}): Transport is already disconnected. Completing.".format(self.name, op.name)
)
operation_flow.complete_op(stage=self, op=op)
op.complete()
elif (
isinstance(op, pipeline_ops_base.DisconnectOperation)
or isinstance(op, pipeline_ops_base.ConnectOperation)
or isinstance(op, pipeline_ops_base.ReconnectOperation)
or isinstance(op, pipeline_ops_base.ReauthorizeConnectionOperation)
):
self._block(op)
old_callback = op.callback
@pipeline_thread.runs_on_pipeline_thread
def on_operation_complete(op):
if op.error:
def on_operation_complete(op, error):
if error:
logger.error(
"{}({}): op failed. Unblocking queue with error: {}".format(
self.name, op.name, op.error
self.name, op.name, error
)
)
else:
@@ -369,25 +390,18 @@ class SerializeConnectOpsStage(PipelineStage):
"{}({}): op succeeded. Unblocking queue".format(self.name, op.name)
)
op.callback = old_callback
self._unblock(op, op.error)
logger.debug(
"{}({}): unblock is complete. completing op that caused unblock".format(
self.name, op.name
)
)
operation_flow.complete_op(stage=self, op=op)
self._unblock(op, error)
op.callback = on_operation_complete
operation_flow.pass_op_to_next_stage(stage=self, op=op)
op.add_callback(on_operation_complete)
self.send_op_down(op)
else:
operation_flow.pass_op_to_next_stage(stage=self, op=op)
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _block(self, op):
"""
block this stage while we're waiting for the connect/disconnect/reconnect operation to complete.
block this stage while we're waiting for the connect/disconnect/reauthorize operation to complete.
"""
logger.debug("{}({}): blocking".format(self.name, op.name))
self.blocked = True
@@ -395,7 +409,7 @@ class SerializeConnectOpsStage(PipelineStage):
@pipeline_thread.runs_on_pipeline_thread
def _unblock(self, op, error):
"""
Unblock this stage after the connect/disconnect/reconnect operation is complete. This also means
Unblock this stage after the connect/disconnect/reauthorize operation is complete. This also means
releasing all the operations that were queued up.
"""
logger.debug("{}({}): unblocking and releasing queued ops.".format(self.name, op.name))
@@ -418,21 +432,20 @@ class SerializeConnectOpsStage(PipelineStage):
self.name, op.name, op_to_release.name
)
)
op_to_release.error = error
operation_flow.complete_op(self, op_to_release)
op_to_release.complete(error=error)
else:
logger.debug(
"{}({}): releasing {} op.".format(self.name, op.name, op_to_release.name)
)
# call run_op directly here so operations go through this stage again (especiall connect/disconnect ops)
# call run_op directly here so operations go through this stage again (especially connect/disconnect ops)
self.run_op(op_to_release)
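The blocking behavior above reduces to a small pattern: while a serialized connect/disconnect op is in flight, every later op is parked in a `queue.Queue`; when it completes, the queue is drained back through `run_op` so released ops re-serialize correctly. A hedged, standalone sketch with invented names (strings stand in for pipeline operations):

```python
import queue


class ConnectionLock:
    """Sketch: only one connect/disconnect job may be past this stage at a time."""

    def __init__(self):
        self.blocked = False
        self.pending = queue.Queue()
        self.sent_down = []

    def run(self, job):
        if self.blocked:
            self.pending.put_nowait(job)   # park it until the in-flight job finishes
        else:
            if job in ("connect", "disconnect"):
                self.blocked = True        # serialize these jobs
            self.sent_down.append(job)

    def complete_current(self):
        """Called when the in-flight connect/disconnect finishes."""
        self.blocked = False
        while not self.pending.empty() and not self.blocked:
            # Re-enter run() so a released connect/disconnect blocks the stage again
            self.run(self.pending.get_nowait())


lock = ConnectionLock()
lock.run("connect")      # goes down and blocks the stage
lock.run("telemetry")    # queued behind the connect
lock.run("disconnect")   # queued too
lock.complete_current()  # telemetry is released; disconnect re-blocks the stage
```

Re-entering `run()` during the drain is the same design choice the real stage makes: it guarantees a queued disconnect cannot slip past while another connection op is still pending.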
class CoordinateRequestAndResponseStage(PipelineStage):
"""
Pipeline stage which is responsible for coordinating SendIotRequestAndWaitForResponseOperation operations. For each
SendIotRequestAndWaitForResponseOperation operation, this stage passes down a SendIotRequestOperation operation and waits for
an IotResponseEvent event. All other events are passed down unmodified.
Pipeline stage which is responsible for coordinating RequestAndResponseOperation operations. For each
RequestAndResponseOperation operation, this stage passes down a RequestOperation operation and waits for
a ResponseEvent event. All other events are passed up unmodified.
"""
def __init__(self):
@@ -440,31 +453,38 @@ class CoordinateRequestAndResponseStage(PipelineStage):
self.pending_responses = {}
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
if isinstance(op, pipeline_ops_base.SendIotRequestAndWaitForResponseOperation):
# Convert SendIotRequestAndWaitForResponseOperation operation into a SendIotRequestOperation operation
# and send it down. A lower level will convert the SendIotRequestOperation into an
# actual protocol client operation. The SendIotRequestAndWaitForResponseOperation operation will be
def _run_op(self, op):
if isinstance(op, pipeline_ops_base.RequestAndResponseOperation):
# Convert RequestAndResponseOperation operation into a RequestOperation operation
# and send it down. A lower level will convert the RequestOperation into an
# actual protocol client operation. The RequestAndResponseOperation operation will be
# completed when the corresponding IotResponse event is received in this stage.
request_id = str(uuid.uuid4())
# Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_waiting_for_response = op
@pipeline_thread.runs_on_pipeline_thread
def on_send_request_done(send_request_op):
def on_send_request_done(op, error):
logger.debug(
"{}({}): Finished sending {} request to {} resource {}".format(
self.name, op.name, op.request_type, op.method, op.resource_location
self.name,
op_waiting_for_response.name,
op_waiting_for_response.request_type,
op_waiting_for_response.method,
op_waiting_for_response.resource_location,
)
)
if send_request_op.error:
op.error = send_request_op.error
if error:
logger.debug(
"{}({}): removing request {} from pending list".format(
self.name, op.name, request_id
self.name, op_waiting_for_response.name, request_id
)
)
del (self.pending_responses[request_id])
operation_flow.complete_op(self, op)
op_waiting_for_response.complete(error=error)
else:
# request sent. Nothing to do except wait for the response
pass
@@ -480,23 +500,24 @@ class CoordinateRequestAndResponseStage(PipelineStage):
)
self.pending_responses[request_id] = op
new_op = pipeline_ops_base.SendIotRequestOperation(
new_op = pipeline_ops_base.RequestOperation(
method=op.method,
resource_location=op.resource_location,
request_body=op.request_body,
request_id=request_id,
request_type=op.request_type,
callback=on_send_request_done,
query_params=op.query_params,
)
operation_flow.pass_op_to_next_stage(self, new_op)
self.send_op_down(new_op)
else:
operation_flow.pass_op_to_next_stage(self, op)
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
if isinstance(event, pipeline_events_base.IotResponseEvent):
# match IotResponseEvent events to the saved dictionary of SendIotRequestAndWaitForResponseOperation
if isinstance(event, pipeline_events_base.ResponseEvent):
# match ResponseEvent events to the saved dictionary of RequestAndResponseOperation
# operations which have not received responses yet. If the operation is found,
# complete it.
@@ -510,6 +531,7 @@ class CoordinateRequestAndResponseStage(PipelineStage):
del (self.pending_responses[event.request_id])
op.status_code = event.status_code
op.response_body = event.response_body
op.retry_after = event.retry_after
logger.debug(
"{}({}): Completing {} request to {} resource {} with status {}".format(
self.name,
@@ -520,7 +542,7 @@ class CoordinateRequestAndResponseStage(PipelineStage):
op.status_code,
)
)
operation_flow.complete_op(self, op)
op.complete()
else:
logger.warning(
"{}({}): request_id {} not found in pending list. Nothing to do. Dropping".format(
@@ -528,4 +550,399 @@ class CoordinateRequestAndResponseStage(PipelineStage):
)
)
else:
operation_flow.pass_event_to_previous_stage(self, event)
self.send_event_up(event)
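The stage above correlates each request with its eventual response through a generated `request_id` held in a pending dict, and drops responses whose id is unknown. A minimal sketch of that correlation pattern (hypothetical names, no pipeline machinery):

```python
import uuid


class RequestResponseMatcher:
    """Sketch: map request_ids to callbacks until a matching response arrives."""

    def __init__(self):
        self.pending = {}   # request_id -> callback awaiting the response

    def send_request(self, body, callback):
        request_id = str(uuid.uuid4())
        self.pending[request_id] = callback
        # A real stage would now pass a RequestOperation down the pipeline.
        return request_id

    def handle_response(self, request_id, status_code, response_body):
        callback = self.pending.pop(request_id, None)
        if callback is None:
            return False    # unknown id: drop it, as the stage logs and drops
        callback(status_code, response_body)
        return True


results = []
matcher = RequestResponseMatcher()
rid = matcher.send_request(b"get twin", lambda status, body: results.append((status, body)))
matcher.handle_response(rid, 200, b"{}")
```

Popping the id on both the failure path and the response path is what keeps the pending dict from leaking entries for requests that will never be answered.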
class OpTimeoutStage(PipelineStage):
"""
The purpose of the timeout stage is to add timeout errors to select operations.
The timeout_intervals attribute contains a list of operations to track along with
their timeout values. Right now this list is hard-coded but the operations and
intervals will eventually become a parameter.
For each operation that needs a timeout check, this stage will add a timer to
the operation. If the timer elapses, this stage will fail the operation with
a PipelineTimeoutError. The intention is that a higher stage will know what to
do with that error and act accordingly (either return the error to the user or
retry).
This stage currently assumes that all timed-out operations are just "lost".
It does not attempt to cancel the operation, as Paho doesn't have a way to
cancel an operation, and with QOS=1, sending a pub or sub twice is not
catastrophic.
Also, as a long-term plan, the operations that need to be watched for timeout
will become an initialization parameter for this stage so that different
instances of this stage can watch for timeouts on different operations.
This will be done because we want a lower-level timeout stage which can watch
for timeouts at the MQTT level, and we want a higher-level timeout stage which
can watch for timeouts at the iothub level. In this way, an MQTT operation that
times out can be retried as an MQTT operation and a higher-level IoTHub operation
which times out can be retried as an IoTHub operation (which might necessitate
redoing multiple MQTT operations).
"""
def __init__(self):
super(OpTimeoutStage, self).__init__()
# use a fixed list and fixed intervals for now. Later, this info will come in
# as an init param or a retry policy
self.timeout_intervals = {
pipeline_ops_mqtt.MQTTSubscribeOperation: 10,
pipeline_ops_mqtt.MQTTUnsubscribeOperation: 10,
}
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if type(op) in self.timeout_intervals:
# Create a timer to watch for operation timeout on this op and attach it
# to the op.
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_timeout():
this = self_weakref()
logger.info("{}({}): returning timeout error".format(this.name, op.name))
op.complete(
error=pipeline_exceptions.PipelineTimeoutError(
"operation timed out before protocol client could respond"
)
)
logger.debug("{}({}): Creating timer".format(self.name, op.name))
op.timeout_timer = threading.Timer(self.timeout_intervals[type(op)], on_timeout)
op.timeout_timer.start()
# Send the op down, but intercept the return of the op so we can
# remove the timer when the op is done
op.add_callback(self._clear_timer)
logger.debug("{}({}): Sending down".format(self.name, op.name))
self.send_op_down(op)
else:
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _clear_timer(self, op, error):
# When an op comes back, delete the timer and pass it right up.
if op.timeout_timer:
logger.debug("{}({}): Cancelling timer".format(self.name, op.name))
op.timeout_timer.cancel()
op.timeout_timer = None
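The timer-per-operation idea above is easy to show in isolation: start a `threading.Timer` when the op goes down, and cancel it if the op completes first, mirroring `_clear_timer`. A standalone sketch (invented op class; intervals shortened for demonstration, not the stage's real 10-second values):

```python
import threading


class TimedOp:
    """Sketch: an op that fails itself if a timer fires before completion."""

    def __init__(self, timeout):
        self.error = None
        self.completed = threading.Event()
        self.timer = threading.Timer(timeout, self._on_timeout)
        self.timer.start()

    def _on_timeout(self):
        # Timer won the race: fail the op, as OpTimeoutStage does with
        # a PipelineTimeoutError.
        self.error = TimeoutError("no response from protocol client")
        self.completed.set()

    def complete(self):
        # Normal completion: cancel the timer, mirroring _clear_timer.
        self.timer.cancel()
        self.completed.set()


fast = TimedOp(timeout=5.0)
fast.complete()                  # completes well before its timer fires
slow = TimedOp(timeout=0.01)     # nobody completes it; the timer wins
slow.completed.wait(timeout=1.0)
```

Cancelling on the completion path is essential: without it, every successful subscribe or unsubscribe would leave a live timer behind that later fires against an already-completed op.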
class RetryStage(PipelineStage):
"""
The purpose of the retry stage is to watch specific operations for specific
errors and retry the operations as appropriate.
Unlike the OpTimeoutStage, this stage will never need to worry about cancelling
failed operations. When an operation is retried at this stage, it is already
considered "failed", so no cancellation needs to be done.
"""
def __init__(self):
super(RetryStage, self).__init__()
# Retry intervals are hardcoded for now. Later, they come in as an
# init param, probably via retry policy.
self.retry_intervals = {
pipeline_ops_mqtt.MQTTSubscribeOperation: 20,
pipeline_ops_mqtt.MQTTUnsubscribeOperation: 20,
pipeline_ops_mqtt.MQTTPublishOperation: 20,
}
self.ops_waiting_to_retry = []
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
"""
Send all ops down and intercept their return to "watch for retry"
"""
if self._should_watch_for_retry(op):
op.add_callback(self._do_retry_if_necessary)
self.send_op_down(op)
else:
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _should_watch_for_retry(self, op):
"""
Return True if this op needs to be watched for retry. This can be
called before the op runs.
"""
return type(op) in self.retry_intervals
@pipeline_thread.runs_on_pipeline_thread
def _should_retry(self, op, error):
"""
Return True if this op needs to be retried. This must be called after
the op completes.
"""
if error:
if self._should_watch_for_retry(op):
if isinstance(error, pipeline_exceptions.PipelineTimeoutError):
return True
return False
@pipeline_thread.runs_on_pipeline_thread
def _do_retry_if_necessary(self, op, error):
"""
Handler which gets called when operations are complete. This function
is where we check to see if a retry is necessary and set a "retry timer"
which can be used to send the op down again.
"""
if self._should_retry(op, error):
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_retry():
this = self_weakref()
logger.info("{}({}): retrying".format(this.name, op.name))
op.retry_timer.cancel()
op.retry_timer = None
this.ops_waiting_to_retry.remove(op)
# Don't just send it down directly. Instead, go through run_op so we get
# retry functionality this time too
this.run_op(op)
interval = self.retry_intervals[type(op)]
logger.warning(
"{}({}): Op needs retry with interval {} because of {}. Setting timer.".format(
self.name, op.name, interval, error
)
)
# if we don't keep track of this op, it might get collected.
op.halt_completion()
self.ops_waiting_to_retry.append(op)
op.retry_timer = threading.Timer(self.retry_intervals[type(op)], do_retry)
op.retry_timer.start()
else:
if op.retry_timer:
op.retry_timer.cancel()
op.retry_timer = None
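The retry flow above boils down to: on a retryable failure, hold a strong reference to the op (so it is not garbage-collected while waiting), start a timer, and feed the op back through `run_op` when the timer fires so retries of retries also work. A condensed sketch with a fake flaky job (short interval and invented names; the real stage keys intervals by operation type):

```python
import threading


class Retrier:
    """Sketch: re-run failed jobs after a delay, keeping them referenced."""

    def __init__(self, interval):
        self.interval = interval
        self.waiting = []               # strong refs, like ops_waiting_to_retry
        self.attempts = []
        self.finished = threading.Event()

    def run(self, job):
        self.attempts.append(job)
        if job():                       # job returns True on success
            self.finished.set()
        else:
            self.waiting.append(job)
            threading.Timer(self.interval, self._retry, args=(job,)).start()

    def _retry(self, job):
        self.waiting.remove(job)
        # Go back through run() so the retried job gets retry handling too
        self.run(job)


calls = {"n": 0}

def flaky():
    calls["n"] += 1
    return calls["n"] >= 3              # fails twice, succeeds on the third try


r = Retrier(interval=0.01)
r.run(flaky)
r.finished.wait(timeout=2.0)
```

The `waiting` list plays the same role as `ops_waiting_to_retry`: without that reference, nothing else holds the op alive between failure and retry.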
transient_connect_errors = [
pipeline_exceptions.OperationCancelled,
pipeline_exceptions.PipelineTimeoutError,
pipeline_exceptions.OperationError,
transport_exceptions.ConnectionFailedError,
transport_exceptions.ConnectionDroppedError,
]
class ReconnectState(object):
"""
Class which holds reconnect states as class variables. Created to make code that reads like an enum without using an enum.
NEVER_CONNECTED: Transport has never been connected. This state is necessary because some errors might be fatal or transient,
depending on whether the transport has been connected. For example, a failed connection is a transient error if we've connected
before, but it's fatal if we've never connected.
WAITING_TO_RECONNECT: This stage is in a waiting period before reconnecting.
CONNECTED_OR_DISCONNECTED: The transport is either connected or disconnected. This stage doesn't really care which one, so
it doesn't keep track.
"""
NEVER_CONNECTED = "NEVER_CONNECTED"
WAITING_TO_RECONNECT = "WAITING_TO_RECONNECT"
CONNECTED_OR_DISCONNECTED = "CONNECTED_OR_DISCONNECTED"
class ReconnectStage(PipelineStage):
def __init__(self):
super(ReconnectStage, self).__init__()
self.reconnect_timer = None
self.state = ReconnectState.NEVER_CONNECTED
# connect delay is hardcoded for now. Later, this comes from a retry policy
self.reconnect_delay = 10
self.waiting_connect_ops = []
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_base.ConnectOperation):
if self.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}({}): State is {}. Adding to wait list".format(
self.name, op.name, self.state
)
)
self.waiting_connect_ops.append(op)
else:
logger.info(
"{}({}): State is {}. Adding to wait list and sending new connect op down".format(
self.name, op.name, self.state
)
)
self.waiting_connect_ops.append(op)
self._send_new_connect_op_down()
elif isinstance(op, pipeline_ops_base.DisconnectOperation):
if self.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}({}): State is {}. Canceling waiting ops and sending disconnect down.".format(
self.name, op.name, self.state
)
)
self._clear_reconnect_timer()
self._complete_waiting_connect_ops(
pipeline_exceptions.OperationCancelled("Explicit disconnect invoked")
)
self.state = ReconnectState.CONNECTED_OR_DISCONNECTED
op.complete()
else:
logger.info(
"{}({}): State is {}. Sending op down.".format(self.name, op.name, self.state)
)
self.send_op_down(op)
else:
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
if isinstance(event, pipeline_events_base.DisconnectedEvent):
if self.pipeline_root.connected:
logger.info(
"{}({}): State is {}. Triggering reconnect timer".format(
self.name, event.name, self.state
)
)
self.state = ReconnectState.WAITING_TO_RECONNECT
self._start_reconnect_timer()
else:
logger.info(
"{}({}): State is {}. Doing nothing".format(self.name, event.name, self.state)
)
self.send_event_up(event)
else:
self.send_event_up(event)
@pipeline_thread.runs_on_pipeline_thread
def _send_new_connect_op_down(self):
self_weakref = weakref.ref(self)
@pipeline_thread.runs_on_pipeline_thread
def on_connect_complete(op, error):
this = self_weakref()
if this:
if error:
if this.state == ReconnectState.NEVER_CONNECTED:
logger.info(
"{}({}): error on first connection. Not triggering reconnection".format(
this.name, op.name
)
)
this._complete_waiting_connect_ops(error)
elif type(error) in transient_connect_errors:
logger.info(
"{}({}): State is {}. Connect failed with transient error. Triggering reconnect timer".format(
self.name, op.name, self.state
)
)
self.state = ReconnectState.WAITING_TO_RECONNECT
self._start_reconnect_timer()
elif this.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}({}): non-transient error. Failing all waiting ops.".format(
this.name, op.name
)
)
self.state = ReconnectState.CONNECTED_OR_DISCONNECTED
self._clear_reconnect_timer()
this._complete_waiting_connect_ops(error)
else:
logger.info(
"{}({}): State is {}. Connection failed. Not triggering reconnection".format(
this.name, op.name, this.state
)
)
this._complete_waiting_connect_ops(error)
else:
logger.info(
"{}({}): State is {}. Connection succeeded".format(
this.name, op.name, this.state
)
)
self.state = ReconnectState.CONNECTED_OR_DISCONNECTED
self._clear_reconnect_timer()
self._complete_waiting_connect_ops()
logger.info("{}: sending new connect op down".format(self.name))
op = pipeline_ops_base.ConnectOperation(callback=on_connect_complete)
self.send_op_down(op)
@pipeline_thread.runs_on_pipeline_thread
def _start_reconnect_timer(self):
"""
Set a timer to reconnect after some period of time
"""
logger.info("{}: State is {}. Starting reconnect timer".format(self.name, self.state))
self._clear_reconnect_timer()
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_reconnect_timer_expired():
this = self_weakref()
this.reconnect_timer = None
if this.state == ReconnectState.WAITING_TO_RECONNECT:
logger.info(
"{}: State is {}. Reconnect timer expired. Sending connect op down".format(
this.name, this.state
)
)
this.state = ReconnectState.CONNECTED_OR_DISCONNECTED
this._send_new_connect_op_down()
else:
logger.info(
"{}: State is {}. Reconnect timer expired. Doing nothing".format(
this.name, this.state
)
)
self.reconnect_timer = threading.Timer(self.reconnect_delay, on_reconnect_timer_expired)
self.reconnect_timer.start()
@pipeline_thread.runs_on_pipeline_thread
def _clear_reconnect_timer(self):
"""
Clear any previous reconnect timer
"""
if self.reconnect_timer:
logger.info("{}: clearing reconnect timer".format(self.name))
self.reconnect_timer.cancel()
self.reconnect_timer = None
@pipeline_thread.runs_on_pipeline_thread
def _complete_waiting_connect_ops(self, error=None):
"""
A note of explanation: when we are waiting to reconnect, we need to keep a list of
all connect ops that come through here. We do this for 2 reasons:
1. We don't want to pass them down immediately because we want to honor the waiting
period. If we passed them down immediately, we'd try to reconnect immediately
instead of waiting until reconnect_timer fires.
2. When we're retrying, there are new ConnectOperation ops sent down regularly.
Any of the ops could be the one that succeeds. When that happens, we need a
way to complete all of the ops that are patiently waiting for the connection.
Right now, we only need to do this with ConnectOperation ops, since they are the
only ops that need to wait: they alone cause a connection
to be established. Other ops pass through this stage, and might fail in later
stages, but that's OK. If they needed a connection, the AutoConnectStage before
this stage should be taking care of that.
"""
logger.info("{}: completing waiting ops with error={}".format(self.name, error))
list_copy = self.waiting_connect_ops
self.waiting_connect_ops = []
for op in list_copy:
op.complete(error)
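Taken together, ReconnectStage implements a small state machine: an unexpected DisconnectedEvent moves it to WAITING_TO_RECONNECT and starts a timer, the timer sends a fresh connect down, and errors on a first-ever connection are treated as fatal rather than retried. A heavily simplified sketch of those transitions (synchronous "timer" for clarity, invented names; not the SDK's actual class):

```python
NEVER_CONNECTED = "NEVER_CONNECTED"
WAITING = "WAITING_TO_RECONNECT"
STABLE = "CONNECTED_OR_DISCONNECTED"


class Reconnector:
    """Sketch: decide what a dropped connection should trigger."""

    def __init__(self):
        self.state = NEVER_CONNECTED
        self.actions = []

    def on_connected(self):
        self.state = STABLE
        self.actions.append("connected")

    def on_connection_dropped(self, expected):
        if expected or self.state == NEVER_CONNECTED:
            # Explicit disconnect, or we never connected: nothing to retry
            self.actions.append("stay disconnected")
        else:
            self.state = WAITING
            self.actions.append("start reconnect timer")

    def on_timer_fired(self):
        if self.state == WAITING:
            self.state = STABLE
            self.actions.append("send connect down")


r = Reconnector()
r.on_connected()
r.on_connection_dropped(expected=False)   # unexpected drop: schedule reconnect
r.on_timer_fired()                        # timer fires: try connecting again
```

The guard in `on_timer_fired` mirrors the real stage's check of `WAITING_TO_RECONNECT`: a timer that fires after an explicit disconnect cancelled the wait must do nothing.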


@@ -0,0 +1,102 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import six
import traceback
import copy
from . import (
pipeline_ops_base,
PipelineStage,
pipeline_ops_http,
pipeline_thread,
pipeline_exceptions,
)
from azure.iot.device.common.http_transport import HTTPTransport
from azure.iot.device.common import handle_exceptions, transport_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
logger = logging.getLogger(__name__)
class HTTPTransportStage(PipelineStage):
"""
PipelineStage object which is responsible for interfacing with the HTTP protocol wrapper object.
This stage handles all HTTP operations that are not specific to IoT Hub.
"""
def __init__(self):
super(HTTPTransportStage, self).__init__()
# The sas_token will be set when Connection Args are received
self.sas_token = None
# The transport will be instantiated when Connection Args are received
self.transport = None
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_http.SetHTTPConnectionArgsOperation):
# pipeline_ops_http.SetHTTPConnectionArgsOperation is used to create the HTTPTransport object and set all of its properties.
logger.debug("{}({}): got connection args".format(self.name, op.name))
self.sas_token = op.sas_token
self.transport = HTTPTransport(
hostname=op.hostname,
server_verification_cert=op.server_verification_cert,
x509_cert=op.client_cert,
)
self.pipeline_root.transport = self.transport
op.complete()
elif isinstance(op, pipeline_ops_base.UpdateSasTokenOperation):
logger.debug("{}({}): saving sas token and completing".format(self.name, op.name))
self.sas_token = op.sas_token
op.complete()
elif isinstance(op, pipeline_ops_http.HTTPRequestAndResponseOperation):
# This will call down to the HTTP Transport with a request and also create a request callback. Because the HTTP Transport runs on the HTTP transport thread, this call should be non-blocking for the pipeline thread.
logger.debug(
"{}({}): Generating HTTP request and setting callback before completing.".format(
self.name, op.name
)
)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_request_completed(error=None, response=None):
if error:
logger.error(
"{}({}): Error passed to on_request_completed. Error={}".format(
self.name, op.name, error
)
)
op.complete(error=error)
else:
logger.debug(
"{}({}): Request completed. Completing op.".format(self.name, op.name)
)
logger.debug("HTTP Response Status: {}".format(response["status_code"]))
logger.debug("HTTP Response: {}".format(response["resp"].decode("utf-8")))
op.response_body = response["resp"]
op.status_code = response["status_code"]
op.reason = response["reason"]
op.complete()
# A deepcopy is necessary here since otherwise the manipulation happening to http_headers will affect the op.headers, which would be an unintended side effect and not a good practice.
http_headers = copy.deepcopy(op.headers)
if self.sas_token:
http_headers["Authorization"] = self.sas_token
self.transport.request(
method=op.method,
path=op.path,
headers=http_headers,
query_params=op.query_params,
body=op.body,
callback=on_request_completed,
)
else:
self.send_op_down(op)
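The deepcopy of `op.headers` above exists so that adding the `Authorization` header never mutates the op's own dict as a side effect. That pattern is worth a tiny standalone sketch (invented helper name, placeholder token value):

```python
import copy


def build_headers(op_headers, sas_token=None):
    """Sketch: copy the caller's headers before adding auth, so the op's
    own headers dict is never mutated (the reason for the deepcopy above)."""
    headers = copy.deepcopy(op_headers)
    if sas_token:
        headers["Authorization"] = sas_token
    return headers


original = {"Accept": "application/json"}
sent = build_headers(original, sas_token="SharedAccessSignature sr=example")
```

A shallow copy would usually suffice for a flat dict of strings, but a deepcopy is the safer default when header values might themselves be mutable.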


@@ -6,16 +6,19 @@
import logging
import six
import traceback
from . import (
pipeline_ops_base,
PipelineStage,
pipeline_ops_mqtt,
pipeline_events_mqtt,
operation_flow,
pipeline_thread,
pipeline_exceptions,
pipeline_events_base,
)
from azure.iot.device.common.mqtt_transport import MQTTTransport
from azure.iot.device.common import unhandled_exceptions, errors
from azure.iot.device.common import handle_exceptions, transport_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
logger = logging.getLogger(__name__)
@@ -27,66 +30,83 @@ class MQTTTransportStage(PipelineStage):
is not in the MQTT group of operations, but can only be run at the protocol level.
"""
def __init__(self):
super(MQTTTransportStage, self).__init__()
# The sas_token will be set when Connection Args are received
self.sas_token = None
# The transport will be instantiated when Connection Args are received
self.transport = None
self._pending_connection_op = None
@pipeline_thread.runs_on_pipeline_thread
def _cancel_pending_connection_op(self):
"""
Cancel any running connect, disconnect or reconnect op. Since our ability to "cancel" is fairly limited,
Cancel any running connect, disconnect or reauthorize_connection op. Since our ability to "cancel" is fairly limited,
all this does (for now) is to fail the operation
"""
op = self._pending_connection_op
if op:
# TODO: should this actually run a cancel call on the op?
op.error = errors.PipelineError(
"Cancelling because new ConnectOperation, DisconnectOperation, or ReconnectOperation was issued"
)
operation_flow.complete_op(stage=self, op=op)
# NOTE: This code path should NOT execute in normal flow. There should never already be a pending
# connection op when another is added, due to the SerializeConnectOps stage.
# If this block does execute, there is a bug in the codebase.
error = pipeline_exceptions.OperationCancelled(
"Cancelling because new ConnectOperation, DisconnectOperation, or ReauthorizeConnectionOperation was issued"
) # TODO: should this actually somehow cancel the operation?
op.complete(error=error)
self._pending_connection_op = None
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
if isinstance(op, pipeline_ops_mqtt.SetMQTTConnectionArgsOperation):
# pipeline_ops_mqtt.SetMQTTConnectionArgsOperation is where we create our MQTTTransport object and set
# all of its properties.
logger.debug("{}({}): got connection args".format(self.name, op.name))
self.hostname = op.hostname
self.username = op.username
self.client_id = op.client_id
self.ca_cert = op.ca_cert
self.sas_token = op.sas_token
self.client_cert = op.client_cert
self.transport = MQTTTransport(
client_id=self.client_id,
hostname=self.hostname,
username=self.username,
ca_cert=self.ca_cert,
x509_cert=self.client_cert,
client_id=op.client_id,
hostname=op.hostname,
username=op.username,
server_verification_cert=op.server_verification_cert,
x509_cert=op.client_cert,
websockets=self.pipeline_root.pipeline_configuration.websockets,
cipher=self.pipeline_root.pipeline_configuration.cipher,
proxy_options=self.pipeline_root.pipeline_configuration.proxy_options,
)
self.transport.on_mqtt_connected_handler = CallableWeakMethod(
self, "_on_mqtt_connected"
)
self.transport.on_mqtt_connection_failure_handler = CallableWeakMethod(
self, "_on_mqtt_connection_failure"
)
self.transport.on_mqtt_disconnected_handler = CallableWeakMethod(
self, "_on_mqtt_disconnected"
)
self.transport.on_mqtt_message_received_handler = CallableWeakMethod(
self, "_on_mqtt_message_received"
)
self.transport.on_mqtt_connected_handler = self._on_mqtt_connected
self.transport.on_mqtt_connection_failure_handler = self._on_mqtt_connection_failure
self.transport.on_mqtt_disconnected_handler = self._on_mqtt_disconnected
self.transport.on_mqtt_message_received_handler = self._on_mqtt_message_received
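The `CallableWeakMethod` wiring above exists so that the transport's handler references do not keep the stage alive after it is otherwise discarded. A minimal illustrative version of that helper (the SDK's real implementation lives in `azure.iot.device.common.callable_weak_method` and may differ in detail):

```python
import gc
import weakref


class CallableWeakMethod(object):
    """Wrap a bound method without keeping its object alive (illustrative sketch)."""

    def __init__(self, obj, method_name):
        self._ref = weakref.ref(obj)
        self._method_name = method_name

    def __call__(self, *args, **kwargs):
        obj = self._ref()
        if obj is None:
            return None  # target was garbage collected; silently drop the call
        return getattr(obj, self._method_name)(*args, **kwargs)


class Stage(object):
    def _on_mqtt_connected(self):
        return "connected"


stage = Stage()
handler = CallableWeakMethod(stage, "_on_mqtt_connected")
assert handler() == "connected"
del stage
gc.collect()
assert handler() is None  # no dangling strong reference kept the stage alive
```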
# There can only be one pending connection operation (Connect, Reconnect, Disconnect)
# There can only be one pending connection operation (Connect, ReauthorizeConnection, Disconnect)
# at a time. The existing one must be completed or canceled before a new one is set.
# Currently, this means that if, say, a connect operation is the pending op and is executed
# but another connection op begins by the time the CONACK is received, the original
# operation will be cancelled, but the CONACK for it will still be received, and complete the
# but another connection op begins by the time the CONNACK is received, the original
# operation will be cancelled, but the CONNACK for it will still be received, and complete the
# NEW operation. This is not desirable, but it is how things currently work.
# We are however, checking the type, so the CONACK from a cancelled Connect, cannot successfully
# We are however, checking the type, so the CONNACK from a cancelled Connect, cannot successfully
# complete a Disconnect operation.
self._pending_connection_op = None
self.pipeline_root.transport = self.transport
operation_flow.complete_op(self, op)
op.complete()
elif isinstance(op, pipeline_ops_base.UpdateSasTokenOperation):
logger.debug("{}({}): saving sas token and completing".format(self.name, op.name))
self.sas_token = op.sas_token
operation_flow.complete_op(self, op)
op.complete()
elif isinstance(op, pipeline_ops_base.ConnectOperation):
logger.info("{}({}): connecting".format(self.name, op.name))
@@ -96,24 +116,24 @@ class MQTTTransportStage(PipelineStage):
try:
self.transport.connect(password=self.sas_token)
except Exception as e:
logger.error("transport.connect raised error", exc_info=True)
logger.error("transport.connect raised error")
logger.error(traceback.format_exc())
self._pending_connection_op = None
op.error = e
operation_flow.complete_op(self, op)
op.complete(error=e)
elif isinstance(op, pipeline_ops_base.ReconnectOperation):
logger.info("{}({}): reconnecting".format(self.name, op.name))
elif isinstance(op, pipeline_ops_base.ReauthorizeConnectionOperation):
logger.info("{}({}): reauthorizing".format(self.name, op.name))
# We set _pending_connection_op here because a reconnect is the same as a connect for "active operation" tracking purposes.
# We set _pending_connection_op here because reauthorizing the connection is the same as a connect for "active operation" tracking purposes.
self._cancel_pending_connection_op()
self._pending_connection_op = op
try:
self.transport.reconnect(password=self.sas_token)
self.transport.reauthorize_connection(password=self.sas_token)
except Exception as e:
logger.error("transport.reconnect raised error", exc_info=True)
logger.error("transport.reauthorize_connection raised error")
logger.error(traceback.format_exc())
self._pending_connection_op = None
op.error = e
operation_flow.complete_op(self, op)
op.complete(error=e)
elif isinstance(op, pipeline_ops_base.DisconnectOperation):
logger.info("{}({}): disconnecting".format(self.name, op.name))
@@ -123,10 +143,10 @@ class MQTTTransportStage(PipelineStage):
try:
self.transport.disconnect()
except Exception as e:
logger.error("transport.disconnect raised error", exc_info=True)
logger.error("transport.disconnect raised error")
logger.error(traceback.format_exc())
self._pending_connection_op = None
op.error = e
operation_flow.complete_op(self, op)
op.complete(error=e)
elif isinstance(op, pipeline_ops_mqtt.MQTTPublishOperation):
logger.info("{}({}): publishing on {}".format(self.name, op.name, op.topic))
@@ -134,7 +154,7 @@
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_published():
logger.debug("{}({}): PUBACK received. completing op.".format(self.name, op.name))
operation_flow.complete_op(self, op)
op.complete()
self.transport.publish(topic=op.topic, payload=op.payload, callback=on_published)
@@ -144,7 +164,7 @@
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_subscribed():
logger.debug("{}({}): SUBACK received. completing op.".format(self.name, op.name))
operation_flow.complete_op(self, op)
op.complete()
self.transport.subscribe(topic=op.topic, callback=on_subscribed)
@@ -156,12 +176,14 @@
logger.debug(
"{}({}): UNSUBACK received. completing op.".format(self.name, op.name)
)
operation_flow.complete_op(self, op)
op.complete()
self.transport.unsubscribe(topic=op.topic, callback=on_unsubscribed)
else:
operation_flow.pass_op_to_next_stage(self, op)
# This code block should not be reached in correct program flow.
# This will raise an error when executed.
self.send_op_down(op)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def _on_mqtt_message_received(self, topic, payload):
@@ -169,9 +191,8 @@
Handler that gets called by the protocol library when an incoming message arrives.
Convert that message into a pipeline event and pass it up for someone to handle.
"""
operation_flow.pass_event_to_previous_stage(
stage=self,
event=pipeline_events_mqtt.IncomingMQTTMessageEvent(topic=topic, payload=payload),
self.send_event_up(
pipeline_events_mqtt.IncomingMQTTMessageEvent(topic=topic, payload=payload)
)
@pipeline_thread.invoke_on_pipeline_thread_nowait
@@ -180,22 +201,24 @@
Handler that gets called by the transport when it connects.
"""
logger.info("_on_mqtt_connected called")
# self.on_connected() tells other pipeline stages that we're connected. Do this before
# Send an event to tell other pipeline stages that we're connected. Do this before
# we do anything else (in case upper stages have any "are we connected" logic.)
self.on_connected()
self.send_event_up(pipeline_events_base.ConnectedEvent())
if isinstance(
self._pending_connection_op, pipeline_ops_base.ConnectOperation
) or isinstance(self._pending_connection_op, pipeline_ops_base.ReconnectOperation):
) or isinstance(
self._pending_connection_op, pipeline_ops_base.ReauthorizeConnectionOperation
):
logger.debug("completing connect op")
op = self._pending_connection_op
self._pending_connection_op = None
operation_flow.complete_op(stage=self, op=op)
op.complete()
else:
# This should indicate something odd is going on.
# If this occurs, either a connect was completed while there was no pending op,
# OR that a connect was completed while a disconnect op was pending
logger.warning("Connection was unexpected")
logger.info("Connection was unexpected")
@pipeline_thread.invoke_on_pipeline_thread_nowait
def _on_mqtt_connection_failure(self, cause):
@@ -205,19 +228,22 @@
:param Exception cause: The Exception that caused the connection failure.
"""
logger.error("{}: _on_mqtt_connection_failure called: {}".format(self.name, cause))
logger.info("{}: _on_mqtt_connection_failure called: {}".format(self.name, cause))
if isinstance(
self._pending_connection_op, pipeline_ops_base.ConnectOperation
) or isinstance(self._pending_connection_op, pipeline_ops_base.ReconnectOperation):
) or isinstance(
self._pending_connection_op, pipeline_ops_base.ReauthorizeConnectionOperation
):
logger.debug("{}: failing connect op".format(self.name))
op = self._pending_connection_op
self._pending_connection_op = None
op.error = cause
operation_flow.complete_op(stage=self, op=op)
op.complete(error=cause)
else:
logger.warning("{}: Connection failure was unexpected".format(self.name))
unhandled_exceptions.exception_caught_in_background_thread(cause)
logger.info("{}: Connection failure was unexpected".format(self.name))
handle_exceptions.swallow_unraised_exception(
cause, log_msg="Unexpected connection failure. Safe to ignore.", log_lvl="info"
)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def _on_mqtt_disconnected(self, cause=None):
@@ -227,31 +253,47 @@
:param Exception cause: The Exception that caused the disconnection, if any (optional)
"""
if cause:
logger.error("{}: _on_mqtt_disconnect called: {}".format(self.name, cause))
logger.info("{}: _on_mqtt_disconnect called: {}".format(self.name, cause))
else:
logger.info("{}: _on_mqtt_disconnect called".format(self.name))
# self.on_disconnected() tells other pipeline stages that we're disconnected. Do this before
# we do anything else (in case upper stages have any "are we connected" logic.)
self.on_disconnected()
# Send an event to tell other pipeline stages that we're disconnected. Do this before
# we do anything else (in case upper stages have any "are we connected" logic.)
self.send_event_up(pipeline_events_base.DisconnectedEvent())
if isinstance(self._pending_connection_op, pipeline_ops_base.DisconnectOperation):
logger.debug("{}: completing disconnect op".format(self.name))
if self._pending_connection_op:
# on_mqtt_disconnected will cause any pending connect op to complete. This is how Paho
# behaves when there is a connection error, and it also makes sense that on_mqtt_disconnected
# would cause a pending connection op to fail.
logger.debug(
"{}: completing pending {} op".format(self.name, self._pending_connection_op.name)
)
op = self._pending_connection_op
self._pending_connection_op = None
if isinstance(op, pipeline_ops_base.DisconnectOperation):
# Swallow any errors if we intended to disconnect - even if something went wrong, we
# got to the state we wanted to be in!
if cause:
# Only create a ConnectionDroppedError if there is a cause,
# i.e. unexpected disconnect.
try:
six.raise_from(errors.ConnectionDroppedError, cause)
except errors.ConnectionDroppedError as e:
op.error = e
operation_flow.complete_op(stage=self, op=op)
handle_exceptions.swallow_unraised_exception(
cause,
log_msg="Unexpected disconnect with error while disconnecting - swallowing error",
)
op.complete()
else:
logger.warning("{}: disconnection was unexpected".format(self.name))
# Regardless of cause, it is now a ConnectionDroppedError
try:
six.raise_from(errors.ConnectionDroppedError, cause)
except errors.ConnectionDroppedError as e:
unhandled_exceptions.exception_caught_in_background_thread(e)
if cause:
op.complete(error=cause)
else:
op.complete(
error=transport_exceptions.ConnectionDroppedError("transport disconnected")
)
else:
logger.info("{}: disconnection was unexpected".format(self.name))
# Regardless of cause, it is now a ConnectionDroppedError. Log it and swallow it.
# Higher layers will see that we're disconnected and reconnect as necessary.
e = transport_exceptions.ConnectionDroppedError(cause=cause)
handle_exceptions.swallow_unraised_exception(
e,
log_msg="Unexpected disconnection. Safe to ignore since other stages will reconnect.",
log_lvl="info",
)
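The disconnect branching above reduces to a small decision: an intentional `DisconnectOperation` always completes cleanly (any error is swallowed, since the desired state was reached), while any other pending op fails with the cause, or with a `ConnectionDroppedError` when the transport gave no cause. A hypothetical pure-function sketch of that rule (`resolve_disconnect` is not an SDK name):

```python
class ConnectionDroppedError(Exception):
    """Stand-in for the SDK's transport_exceptions.ConnectionDroppedError."""


def resolve_disconnect(pending_is_disconnect, cause):
    # Returns the error (or None) that the pending connection op should
    # complete with, mirroring the stage logic sketched above.
    if pending_is_disconnect:
        # Intentional disconnect: swallow any error; we got where we wanted
        return None
    if cause is not None:
        return cause
    return ConnectionDroppedError("transport disconnected")


assert resolve_disconnect(True, OSError("socket closed")) is None
err = OSError("boom")
assert resolve_disconnect(False, err) is err
assert isinstance(resolve_disconnect(False, None), ConnectionDroppedError)
```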


@@ -9,7 +9,7 @@ import threading
import traceback
from multiprocessing.pool import ThreadPool
from concurrent.futures import ThreadPoolExecutor
from azure.iot.device.common import unhandled_exceptions
from azure.iot.device.common import handle_exceptions
logger = logging.getLogger(__name__)
@@ -113,7 +113,7 @@ def _invoke_on_executor_thread(func, thread_name, block=True):
return func(*args, **kwargs)
except Exception as e:
if not block:
unhandled_exceptions.exception_caught_in_background_thread(e)
handle_exceptions.handle_background_exception(e)
else:
raise
except BaseException:
@@ -166,6 +166,15 @@ def invoke_on_callback_thread_nowait(func):
return _invoke_on_executor_thread(func=func, thread_name="callback", block=False)
def invoke_on_http_thread_nowait(func):
"""
Run the decorated function on the http thread, but don't wait for it to complete
"""
# TODO: Refactor this since this is not in the pipeline thread anymore, so we need to pull this into common.
# Also, the max workers eventually needs to be a bigger number; that needs to be fixed to allow more than one HTTP request at a time.
return _invoke_on_executor_thread(func=func, thread_name="azure_iot_http", block=False)
def _assert_executor_thread(func, thread_name):
"""
Decorator which asserts that the given function only gets called inside the given
@@ -196,3 +205,10 @@ def runs_on_pipeline_thread(func):
Decorator which marks a function as only running inside the pipeline thread.
"""
return _assert_executor_thread(func=func, thread_name="pipeline")
def runs_on_http_thread(func):
"""
Decorator which marks a function as only running inside the http thread.
"""
return _assert_executor_thread(func=func, thread_name="azure_iot_http")
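The decorators above pin work to named single-worker executor threads. A simplified illustrative version of that pattern using `concurrent.futures` (the real `_invoke_on_executor_thread` also handles `BaseException` and background-exception reporting, omitted here; `invoke_on_thread` is not an SDK name):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

_executors = {}


def _get_executor(thread_name):
    # One single-worker executor per logical thread name, created lazily
    if thread_name not in _executors:
        _executors[thread_name] = ThreadPoolExecutor(
            max_workers=1, thread_name_prefix=thread_name
        )
    return _executors[thread_name]


def invoke_on_thread(thread_name, block=True):
    """Decorator factory: run the wrapped function on the named executor thread."""

    def decorator(func):
        def wrapper(*args, **kwargs):
            future = _get_executor(thread_name).submit(func, *args, **kwargs)
            if block:
                return future.result()  # wait; propagates return value and exceptions
            return future  # fire-and-forget; caller may inspect the Future

        return wrapper

    return decorator


@invoke_on_thread("azure_iot_http")
def which_thread():
    return threading.current_thread().name


assert which_thread().startswith("azure_iot_http")
```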


@@ -0,0 +1,58 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines errors that may be raised from a transport"""
from .chainable_exception import ChainableException
class ConnectionFailedError(ChainableException):
"""
Connection failed to be established
"""
pass
class ConnectionDroppedError(ChainableException):
"""
Previously established connection was dropped
"""
pass
class UnauthorizedError(ChainableException):
"""
Authorization was rejected
"""
pass
class ProtocolClientError(ChainableException):
"""
Error returned from protocol client library
"""
pass
class TlsExchangeAuthError(ChainableException):
"""
Error returned when transport layer exchanges
result in an SSLCertVerification error.
"""
pass
class ProtocolProxyError(ChainableException):
"""
All proxy-related errors.
TODO: Not sure what to name this; PySocks already has a class called ProxyError.
"""
pass
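All of these exceptions derive from `ChainableException`, which carries an explicit cause so a low-level error (for example a socket error raised inside the transport) survives translation into a transport error. A guessed minimal sketch of the base class; the real one lives in `azure.iot.device.common.chainable_exception` and may differ:

```python
class ChainableException(Exception):
    """Exception that records the lower-level exception that caused it (sketch)."""

    def __init__(self, message=None, cause=None):
        super(ChainableException, self).__init__(message)
        self.__cause__ = cause  # feeds Python 3's native exception chaining


class ConnectionDroppedError(ChainableException):
    """Previously established connection was dropped"""


# Wrapping a low-level socket error so it is not lost:
low_level = OSError("socket closed")
err = ConnectionDroppedError("transport disconnected", cause=low_level)
assert err.__cause__ is low_level
```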


@@ -6,8 +6,10 @@
"""This module defines constants for use across the azure-iot-device package
"""
VERSION = "2.0.0-preview.10"
USER_AGENT = "py-azure-iot-device/{version}".format(version=VERSION)
VERSION = "2.1.0"
IOTHUB_IDENTIFIER = "azure-iot-device-iothub-py"
PROVISIONING_IDENTIFIER = "azure-iot-device-provisioning-py"
IOTHUB_API_VERSION = "2018-06-30"
PROVISIONING_API_VERSION = "2019-03-31"
SECURITY_MESSAGE_INTERFACE_ID = "urn:azureiot:Security:SecurityAgent:1"
TELEMETRY_MESSAGE_SIZE_LIMIT = 262144


@@ -0,0 +1,175 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines an exception surface, exposed as part of the azure.iot.device library API"""
from azure.iot.device.common.chainable_exception import ChainableException
# Currently, we are redefining many lower level exceptions in this file, in order to present an API
# surface that will be consistent and unchanging (even though lower level exceptions may change).
# Potentially, this could be somewhat relaxed in the future as the design solidifies.
# ~~~ EXCEPTIONS ~~~
class OperationCancelled(ChainableException):
"""An operation was cancelled"""
pass
# ~~~ CLIENT ERRORS ~~~
class ClientError(ChainableException):
"""Generic error for a client"""
pass
class ConnectionFailedError(ClientError):
"""Failed to establish a connection"""
pass
class ConnectionDroppedError(ClientError):
"""Lost connection while executing operation"""
pass
class CredentialError(ClientError):
"""Could not connect client using given credentials"""
pass
# ~~~ SERVICE ERRORS ~~~
class ServiceError(ChainableException):
"""Error received from an Azure IoT service"""
pass
# NOTE: These are not (yet) in use.
# Because of this they have been commented out to prevent confusion.
# class ArgumentError(ServiceError):
# """Service returned 400"""
# pass
# class UnauthorizedError(ServiceError):
# """Service returned 401"""
# pass
# class QuotaExceededError(ServiceError):
# """Service returned 403"""
# pass
# class NotFoundError(ServiceError):
# """Service returned 404"""
# pass
# class DeviceTimeoutError(ServiceError):
# """Service returned 408"""
# # TODO: is this a method call error? If so, do we retry?
# pass
# class DeviceAlreadyExistsError(ServiceError):
# """Service returned 409"""
# pass
# class InvalidEtagError(ServiceError):
# """Service returned 412"""
# pass
# class MessageTooLargeError(ServiceError):
# """Service returned 413"""
# pass
# class ThrottlingError(ServiceError):
# """Service returned 429"""
# pass
# class InternalServiceError(ServiceError):
# """Service returned 500"""
# pass
# class BadDeviceResponseError(ServiceError):
# """Service returned 502"""
# # TODO: is this a method invoke thing?
# pass
# class ServiceUnavailableError(ServiceError):
# """Service returned 503"""
# pass
# class ServiceTimeoutError(ServiceError):
# """Service returned 504"""
# pass
# class FailedStatusCodeError(ServiceError):
# """Service returned unknown status code"""
# pass
# status_code_to_error = {
# 400: ArgumentError,
# 401: UnauthorizedError,
# 403: QuotaExceededError,
# 404: NotFoundError,
# 408: DeviceTimeoutError,
# 409: DeviceAlreadyExistsError,
# 412: InvalidEtagError,
# 413: MessageTooLargeError,
# 429: ThrottlingError,
# 500: InternalServiceError,
# 502: BadDeviceResponseError,
# 503: ServiceUnavailableError,
# 504: ServiceTimeoutError,
# }
# def error_from_status_code(status_code, message=None):
# """
# Return an Error object from a failed status code
# :param int status_code: Status code returned from failed operation
# :returns: Error object
# """
# if status_code in status_code_to_error:
# return status_code_to_error[status_code](message)
# else:
# return FailedStatusCodeError(message)
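When the status-code mapping above is eventually enabled, it amounts to a dict lookup with a fallback error class. An uncommented, trimmed sketch (only two codes shown; class names mirror the commented-out ones, but this is illustrative, not shipped SDK behavior):

```python
class ServiceError(Exception):
    """Error received from an Azure IoT service"""


class UnauthorizedError(ServiceError):
    """Service returned 401"""


class NotFoundError(ServiceError):
    """Service returned 404"""


class FailedStatusCodeError(ServiceError):
    """Service returned an unknown failing status code"""


STATUS_CODE_TO_ERROR = {401: UnauthorizedError, 404: NotFoundError}


def error_from_status_code(status_code, message=None):
    # Look up the specific error class, falling back to the generic one
    return STATUS_CODE_TO_ERROR.get(status_code, FailedStatusCodeError)(message)


assert isinstance(error_from_status_code(404, "device not found"), NotFoundError)
assert isinstance(error_from_status_code(599), FailedStatusCodeError)
```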


@@ -5,7 +5,6 @@ as a Device or Module.
"""
from .sync_clients import IoTHubDeviceClient, IoTHubModuleClient
from .sync_inbox import InboxEmpty
from .models import Message, MethodResponse
from .models import Message, MethodRequest, MethodResponse
__all__ = ["IoTHubDeviceClient", "IoTHubModuleClient", "Message", "InboxEmpty", "MethodResponse"]
__all__ = ["IoTHubDeviceClient", "IoTHubModuleClient", "Message", "MethodRequest", "MethodResponse"]


@@ -14,7 +14,6 @@ import io
from . import auth
from . import pipeline
logger = logging.getLogger(__name__)
# A note on implementation:
@@ -23,57 +22,102 @@ logger = logging.getLogger(__name__)
# pipeline configuration to be specifically tailored to the method of instantiation.
# For instance, .create_from_connection_string and .create_from_edge_environment both can use
# SymmetricKeyAuthenticationProviders to instantiate pipeline(s), but only .create_from_edge_environment
# should use it to instantiate an EdgePipeline. If the initializer accepted an auth provider, and then
# should use it to instantiate an HTTPPipeline. If the initializer accepted an auth provider, and then
# used it to create pipelines, this detail would be lost, as there would be no way to tell if a
# SymmetricKeyAuthenticationProvider was intended to be part of an Edge scenario or not.
def _validate_kwargs(**kwargs):
"""Helper function to validate user provided kwargs.
Raises TypeError if an invalid option has been provided"""
valid_kwargs = [
"product_info",
"websockets",
"cipher",
"server_verification_cert",
"proxy_options",
]
for kwarg in kwargs:
if kwarg not in valid_kwargs:
raise TypeError("Got an unexpected keyword argument '{}'".format(kwarg))
def _get_pipeline_config_kwargs(**kwargs):
"""Helper function to get a subset of user provided kwargs relevant to IoTHubPipelineConfig"""
new_kwargs = {}
if "product_info" in kwargs:
new_kwargs["product_info"] = kwargs["product_info"]
if "websockets" in kwargs:
new_kwargs["websockets"] = kwargs["websockets"]
if "cipher" in kwargs:
new_kwargs["cipher"] = kwargs["cipher"]
if "proxy_options" in kwargs:
new_kwargs["proxy_options"] = kwargs["proxy_options"]
return new_kwargs
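`_get_pipeline_config_kwargs` above filters the user kwargs key by key; the same filter can be written as a single comprehension (illustrative alternative, not a proposed SDK change):

```python
PIPELINE_CONFIG_KEYS = ("product_info", "websockets", "cipher", "proxy_options")


def get_pipeline_config_kwargs(**kwargs):
    # Keep only the kwargs that IoTHubPipelineConfig understands
    return {k: v for k, v in kwargs.items() if k in PIPELINE_CONFIG_KEYS}


assert get_pipeline_config_kwargs(websockets=True, server_verification_cert="...") == {
    "websockets": True
}
```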
@six.add_metaclass(abc.ABCMeta)
class AbstractIoTHubClient(object):
"""A superclass representing a generic client. This class needs to be extended for specific clients."""
""" A superclass representing a generic IoTHub client.
This class needs to be extended for specific clients.
"""
def __init__(self, iothub_pipeline):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a generic client.
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
self._iothub_pipeline = iothub_pipeline
self._edge_pipeline = None
self._http_pipeline = http_pipeline
@classmethod
def create_from_connection_string(cls, connection_string, ca_cert=None):
def create_from_connection_string(cls, connection_string, **kwargs):
"""
Instantiate the client from a IoTHub device or module connection string.
:param str connection_string: The connection string for the IoTHub you wish to connect to.
:param str ca_cert: (OPTIONAL) The trusted certificate chain. Necessary when using a
connection string with a GatewayHostName parameter.
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:param proxy_options: Options for sending traffic through proxy servers.
:type proxy_options: :class:`azure.iot.device.common.proxy_options`
:raises: ValueError if given an invalid connection_string.
:raises: TypeError if given an unrecognized parameter.
:returns: An instance of an IoTHub client that uses a connection string for authentication.
"""
# TODO: Make this device/module specific and reject non-matching connection strings.
# This will require refactoring of the auth package to use common objects (e.g. ConnectionString)
# in order to differentiate types of connection strings.
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
if cls.__name__ == "IoTHubDeviceClient":
pipeline_configuration.blob_upload = True
# Auth Provider setup
authentication_provider = auth.SymmetricKeyAuthenticationProvider.parse(connection_string)
authentication_provider.ca_cert = ca_cert # TODO: make this part of the instantiation
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider)
return cls(iothub_pipeline)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
@classmethod
def create_from_shared_access_signature(cls, sas_token):
"""
Instantiate the client from a Shared Access Signature (SAS) token.
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
This method of instantiation is not recommended for general usage.
:param str sas_token: The string representation of a SAS token.
:raises: ValueError if given an invalid sas_token
"""
authentication_provider = auth.SharedAccessSignatureAuthenticationProvider.parse(sas_token)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider)
return cls(iothub_pipeline)
return cls(iothub_pipeline, http_pipeline)
@abc.abstractmethod
def connect(self):
@@ -107,25 +151,109 @@ class AbstractIoTHubClient(object):
def receive_twin_desired_properties_patch(self):
pass
@property
def connected(self):
"""
Read-only property to indicate if the transport is connected or not.
"""
return self._iothub_pipeline.connected
@six.add_metaclass(abc.ABCMeta)
class AbstractIoTHubDeviceClient(AbstractIoTHubClient):
@classmethod
def create_from_x509_certificate(cls, x509, hostname, device_id):
def create_from_x509_certificate(cls, x509, hostname, device_id, **kwargs):
"""
Instantiate a client which using X509 certificate authentication.
:param hostname: Host running the IotHub. Can be found in the Azure portal in the Overview tab as the string hostname.
:param x509: The complete x509 certificate object, To use the certificate the enrollment object needs to contain cert (either the root certificate or one of the intermediate CA certificates).
:param str hostname: Host running the IotHub.
Can be found in the Azure portal in the Overview tab as the string hostname.
:param x509: The complete x509 certificate object.
To use the certificate the enrollment object needs to contain cert
(either the root certificate or one of the intermediate CA certificates).
If the cert comes from a CER file, it needs to be base64 encoded.
:type x509: X509
:param device_id: The ID is used to uniquely identify a device in the IoTHub
:return: An IoTHubClient which can use X509 authentication.
:type x509: :class:`azure.iot.device.X509`
:param str device_id: The ID used to uniquely identify a device in the IoTHub
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:param proxy_options: Options for sending traffic through proxy servers.
:type proxy_options: :class:`azure.iot.device.common.proxy_options`
:raises: TypeError if given an unrecognized parameter.
:returns: An instance of an IoTHub client that uses an X509 certificate for authentication.
"""
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
pipeline_configuration.blob_upload = True # Blob Upload is a feature on Device Clients
# Auth Provider setup
authentication_provider = auth.X509AuthenticationProvider(
x509=x509, hostname=hostname, device_id=device_id
)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider)
return cls(iothub_pipeline)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
@classmethod
def create_from_symmetric_key(cls, symmetric_key, hostname, device_id, **kwargs):
"""
Instantiate a client using symmetric key authentication.
:param symmetric_key: The symmetric key.
:param str hostname: Host running the IotHub.
Can be found in the Azure portal in the Overview tab as the string hostname.
:param device_id: The device ID
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:raises: TypeError if given an unrecognized parameter.
:return: An instance of an IoTHub client that uses a symmetric key for authentication.
"""
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
pipeline_configuration.blob_upload = True # Blob Upload is a feature on Device Clients
# Auth Provider setup
authentication_provider = auth.SymmetricKeyAuthenticationProvider(
hostname=hostname, device_id=device_id, module_id=None, shared_access_key=symmetric_key
)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
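The symmetric key supplied here is not sent to the service directly; the authentication provider uses it to sign a SAS token. A minimal sketch of that derivation, assuming the standard IoT Hub SAS format (the helper name and the URI/key values below are placeholders, not SDK API):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, b64_key, ttl_secs=3600):
    # Sign "<url-encoded uri>\n<expiry>" with the decoded symmetric key,
    # then assemble the SharedAccessSignature string IoTHub expects.
    expiry = int(time.time()) + ttl_secs
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = "{}\n{}".format(encoded_uri, expiry).encode("utf-8")
    sig = hmac.new(base64.b64decode(b64_key), to_sign, hashlib.sha256).digest()
    return "SharedAccessSignature sr={}&sig={}&se={}".format(
        encoded_uri, urllib.parse.quote_plus(base64.b64encode(sig)), expiry
    )
```

The resource URI is typically `<hostname>/devices/<device_id>`, matching the `hostname` and `device_id` parameters above.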
@abc.abstractmethod
def receive_message(self):
@ -134,28 +262,42 @@ class AbstractIoTHubDeviceClient(AbstractIoTHubClient):
@six.add_metaclass(abc.ABCMeta)
class AbstractIoTHubModuleClient(AbstractIoTHubClient):
def __init__(self, iothub_pipeline, edge_pipeline=None):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a module client.
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline
:param edge_pipeline: (OPTIONAL) The pipeline used to connect to the Edge endpoint.
:type edge_pipeline: EdgePipeline
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super(AbstractIoTHubModuleClient, self).__init__(iothub_pipeline)
self._edge_pipeline = edge_pipeline
super(AbstractIoTHubModuleClient, self).__init__(iothub_pipeline, http_pipeline)
@classmethod
def create_from_edge_environment(cls):
def create_from_edge_environment(cls, **kwargs):
"""
Instantiate the client from the IoT Edge environment.
This method can only be run from inside an IoT Edge container, or in a debugging
environment configured for Edge development (e.g. Visual Studio, Visual Studio Code)
:raises: IoTEdgeError if the IoT Edge container is not configured correctly.
:raises: ValueError if debug variables are invalid
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:raises: OSError if the IoT Edge container is not configured correctly.
:raises: ValueError if debug variables are invalid.
:returns: An instance of an IoTHub client that uses the IoT Edge environment for
authentication.
"""
_validate_kwargs(**kwargs)
if kwargs.get("server_verification_cert"):
raise TypeError(
"'server_verification_cert' is not supported by clients using an IoT Edge environment"
)
# First try the regular Edge container variables
try:
hostname = os.environ["IOTEDGE_IOTHUBHOSTNAME"]
@ -172,16 +314,16 @@ class AbstractIoTHubModuleClient(AbstractIoTHubClient):
try:
connection_string = os.environ["EdgeHubConnectionString"]
ca_cert_filepath = os.environ["EdgeModuleCACertificateFile"]
except KeyError:
# TODO: consider using a different error here. (OSError?)
raise auth.IoTEdgeError("IoT Edge environment not configured correctly")
# TODO: variant ca_cert file vs data object that would remove the need for this fopen
except KeyError as e:
new_err = OSError("IoT Edge environment not configured correctly")
new_err.__cause__ = e
raise new_err
# TODO: variant server_verification_cert file vs data object that would remove the need for this fopen
# Read the certificate file to pass it on as a string
try:
with io.open(ca_cert_filepath, mode="r") as ca_cert_file:
ca_cert = ca_cert_file.read()
except (OSError, IOError):
server_verification_cert = ca_cert_file.read()
except (OSError, IOError) as e:
# In Python 2, both a non-existent file and an invalid file raise IOError.
# In Python 3, a non-existent file raises FileNotFoundError, and an invalid file raises an OSError.
# However, FileNotFoundError inherits from OSError, and IOError has been turned into an alias for OSError,
@ -189,14 +331,20 @@ class AbstractIoTHubModuleClient(AbstractIoTHubClient):
# Unfortunately, we can't distinguish cause of error from error type, so the raised ValueError has a generic
# message. If, in the future, we want to add detail, this could be accomplished by inspecting the e.errno
# attribute
raise ValueError("Invalid CA certificate file")
new_err = ValueError("Invalid CA certificate file")
new_err.__cause__ = e
raise new_err
# Use Symmetric Key authentication for local dev experience.
try:
authentication_provider = auth.SymmetricKeyAuthenticationProvider.parse(
connection_string
)
authentication_provider.ca_cert = ca_cert
except ValueError:
raise
authentication_provider.server_verification_cert = server_verification_cert
else:
# Use an HSM for authentication in the general case
try:
authentication_provider = auth.IoTEdgeAuthenticationProvider(
hostname=hostname,
device_id=device_id,
@ -206,27 +354,70 @@ class AbstractIoTHubModuleClient(AbstractIoTHubClient):
workload_uri=workload_uri,
api_version=api_version,
)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider)
edge_pipeline = pipeline.EdgePipeline(authentication_provider)
return cls(iothub_pipeline, edge_pipeline=edge_pipeline)
except auth.IoTEdgeError as e:
new_err = OSError("Unexpected failure in IoTEdge")
new_err.__cause__ = e
raise new_err
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
pipeline_configuration.method_invoke = (
True
) # Method Invoke is allowed on modules created from edge environment
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
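The environment probing above follows one pattern throughout: read a set of required variables, and convert a missing one into an `OSError` with the original `KeyError` chained as the cause. A standalone sketch of that pattern (only `IOTEDGE_IOTHUBHOSTNAME` appears in the excerpt above; the other variable names are the standard IoT Edge container ones, listed here as an assumption):

```python
import os

REQUIRED_EDGE_VARS = ("IOTEDGE_IOTHUBHOSTNAME", "IOTEDGE_DEVICEID", "IOTEDGE_MODULEID")

def read_edge_settings(environ=None):
    environ = os.environ if environ is None else environ
    try:
        return {name: environ[name] for name in REQUIRED_EDGE_VARS}
    except KeyError as e:
        # Same chaining idiom as above: surface OSError, keep the KeyError as cause
        new_err = OSError("IoT Edge environment not configured correctly")
        new_err.__cause__ = e
        raise new_err
```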
@classmethod
def create_from_x509_certificate(cls, x509, hostname, device_id, module_id):
def create_from_x509_certificate(cls, x509, hostname, device_id, module_id, **kwargs):
"""
Instantiate a client using X509 certificate authentication.
:param hostname: Host running the IotHub. Can be found in the Azure portal in the Overview tab as the string hostname.
:param x509: The complete x509 certificate object, To use the certificate the enrollment object needs to contain cert (either the root certificate or one of the intermediate CA certificates).
:param str hostname: Host running the IoTHub.
Can be found in the Azure portal in the Overview tab as the string hostname.
:param x509: The complete x509 certificate object.
To use the certificate the enrollment object needs to contain cert
(either the root certificate or one of the intermediate CA certificates).
If the cert comes from a CER file, it needs to be base64 encoded.
:type x509: X509
:param device_id: The ID is used to uniquely identify a device in the IoTHub
:param module_id : The ID of the module to uniquely identify a module on a device on the IoTHub.
:return: A IoTHubClient which can use X509 authentication.
:type x509: :class:`azure.iot.device.X509`
:param str device_id: The ID used to uniquely identify a device in the IoTHub
:param str module_id: The ID used to uniquely identify a module on a device on the IoTHub.
:param str server_verification_cert: Configuration Option. The trusted certificate chain.
Necessary when connecting to an endpoint which has a non-standard root of trust,
such as a protocol gateway.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:param str product_info: Configuration Option. Default is empty string. The string contains
arbitrary product info which is appended to the user agent string.
:raises: TypeError if given an unrecognized parameter.
:returns: An instance of an IoTHub client that uses an X509 certificate for authentication.
"""
_validate_kwargs(**kwargs)
# Pipeline Config setup
pipeline_config_kwargs = _get_pipeline_config_kwargs(**kwargs)
pipeline_configuration = pipeline.IoTHubPipelineConfig(**pipeline_config_kwargs)
# Auth Provider setup
authentication_provider = auth.X509AuthenticationProvider(
x509=x509, hostname=hostname, device_id=device_id, module_id=module_id
)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider)
return cls(iothub_pipeline)
authentication_provider.server_verification_cert = kwargs.get("server_verification_cert")
# Pipeline setup
http_pipeline = pipeline.HTTPPipeline(authentication_provider, pipeline_configuration)
iothub_pipeline = pipeline.IoTHubPipeline(authentication_provider, pipeline_configuration)
return cls(iothub_pipeline, http_pipeline)
@abc.abstractmethod
def send_message_to_output(self, message, output_name):


@ -16,12 +16,38 @@ from azure.iot.device.iothub.abstract_clients import (
)
from azure.iot.device.iothub.models import Message
from azure.iot.device.iothub.pipeline import constant
from azure.iot.device.iothub.pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
from azure.iot.device.iothub.inbox_manager import InboxManager
from .async_inbox import AsyncClientInbox
from azure.iot.device import constant as device_constant
logger = logging.getLogger(__name__)
async def handle_result(callback):
try:
return await callback.completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except pipeline_exceptions.TlsExchangeAuthError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client due to TLS exchanges.", cause=e
)
except pipeline_exceptions.ProtocolProxyError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client raised due to proxy connections.", cause=e
)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
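`handle_result` centralizes a translation layer: internal pipeline exceptions are re-raised as the stable types in `azure.iot.device.exceptions`, with the original attached as the cause. The same pattern in isolation, using stand-in exception classes rather than the SDK's own:

```python
class PipelineConnectionDropped(Exception):
    """Stand-in for pipeline_exceptions.ConnectionDroppedError."""

class ConnectionDroppedError(Exception):
    """Stand-in for the public exceptions.ConnectionDroppedError."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.__cause__ = cause

class ClientError(Exception):
    """Stand-in catch-all, mirroring exceptions.ClientError."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.__cause__ = cause

def translate_errors(fn):
    # Known pipeline errors map to specific public types; anything else
    # becomes a generic ClientError, so callers only ever see public types.
    try:
        return fn()
    except PipelineConnectionDropped as e:
        raise ConnectionDroppedError("Lost connection to IoTHub", cause=e)
    except Exception as e:
        raise ClientError("Unexpected failure", cause=e)
```

Because the cause is chained rather than swallowed, the low-level traceback stays available for debugging while callers only need to handle the public types.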
class GenericIoTHubClient(AbstractIoTHubClient):
"""A super class representing a generic asynchronous client.
This class needs to be extended for specific clients.
@ -33,8 +59,10 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
TODO: How to document kwargs?
Possible values: iothub_pipeline, edge_pipeline
:param iothub_pipeline: The IoTHubPipeline used for the client
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
:param http_pipeline: The HTTPPipeline used for the client
:type http_pipeline: :class:`azure.iot.device.iothub.pipeline.HTTPPipeline`
"""
# Depending on the subclass calling this __init__, there could be different arguments,
# and the super() call could call a different class, due to the different MROs
@ -62,25 +90,37 @@ class GenericIoTHubClient(AbstractIoTHubClient):
The destination is chosen based on the credentials passed via the auth_provider parameter
that was provided when this object was initialized.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Connecting to Hub...")
connect_async = async_adapter.emulate_async(self._iothub_pipeline.connect)
callback = async_adapter.AwaitableCallback()
await connect_async(callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully connected to Hub")
async def disconnect(self):
"""Disconnect the client from the Azure IoT Hub or Azure IoT Edge Hub instance.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Disconnecting from Hub...")
disconnect_async = async_adapter.emulate_async(self._iothub_pipeline.disconnect)
callback = async_adapter.AwaitableCallback()
await disconnect_async(callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully disconnected from Hub")
@ -92,16 +132,30 @@ class GenericIoTHubClient(AbstractIoTHubClient):
:param message: The actual message to send. Anything passed that is not an instance of the
Message class will be converted to Message object.
:type message: :class:`azure.iot.device.Message` or str
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
"""
if not isinstance(message, Message):
message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of telemetry message can not exceed 256 KB.")
logger.info("Sending message to Hub...")
send_message_async = async_adapter.emulate_async(self._iothub_pipeline.send_message)
callback = async_adapter.AwaitableCallback()
await send_message_async(message, callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully sent message to Hub")
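The guard above compares the serialized message size against `device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT` before attempting the send. A minimal sketch of such a guard (the helper name and the exact limit value used here are assumptions for illustration, not the SDK's constant):

```python
TELEMETRY_MESSAGE_SIZE_LIMIT = 256 * 1024  # assumed value for illustration

def validate_telemetry_size(payload, limit=TELEMETRY_MESSAGE_SIZE_LIMIT):
    # Strings are measured by their UTF-8 encoding, bytes by length.
    size = len(payload.encode("utf-8")) if isinstance(payload, str) else len(payload)
    if size > limit:
        raise ValueError("Size of telemetry message can not exceed 256 KB.")
    return size
```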
@ -115,6 +169,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
a different call to receive_method will be received.
:returns: MethodRequest object representing the received method request.
:rtype: :class:`azure.iot.device.MethodRequest`
"""
if not self._iothub_pipeline.feature_enabled[constant.METHODS]:
await self._enable_feature(constant.METHODS)
@ -133,6 +188,16 @@ class GenericIoTHubClient(AbstractIoTHubClient):
function will open the connection before sending the event.
:param method_response: The MethodResponse to send
:type method_response: :class:`azure.iot.device.MethodResponse`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Sending method response to Hub...")
send_method_response_async = async_adapter.emulate_async(
@ -143,7 +208,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
# TODO: maybe consolidate method_request, result and status into a new object
await send_method_response_async(method_response, callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully sent method response to Hub")
@ -158,7 +223,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
callback = async_adapter.AwaitableCallback()
await enable_feature_async(feature_name, callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully enabled feature:" + feature_name)
@ -166,7 +231,17 @@ class GenericIoTHubClient(AbstractIoTHubClient):
"""
Gets the device or module twin from the Azure IoT Hub or Azure IoT Edge Hub service.
:returns: Twin object which was retrieved from the hub
:returns: Complete Twin as a JSON dict
:rtype: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Getting twin")
@ -177,7 +252,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
callback = async_adapter.AwaitableCallback(return_arg_name="twin")
await get_twin_async(callback=callback)
twin = await callback.completion()
twin = await handle_result(callback)
logger.info("Successfully retrieved twin")
return twin
@ -188,8 +263,17 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If the service returns an error on the patch operation, this function will raise the
appropriate error.
:param reported_properties_patch:
:type reported_properties_patch: dict, str, int, float, bool, or None (JSON compatible values)
:param reported_properties_patch: Twin Reported Properties patch as a JSON dict
:type reported_properties_patch: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Patching twin reported properties")
@ -202,7 +286,7 @@ class GenericIoTHubClient(AbstractIoTHubClient):
callback = async_adapter.AwaitableCallback()
await patch_twin_async(patch=reported_properties_patch, callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully sent twin patch")
@ -212,7 +296,8 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If no method request is yet available, will wait until it is available.
:returns: desired property patch. This can be dict, str, int, float, bool, or None (JSON compatible values)
:returns: Twin Desired Properties patch as a JSON dict
:rtype: dict
"""
if not self._iothub_pipeline.feature_enabled[constant.TWIN_PATCHES]:
await self._enable_feature(constant.TWIN_PATCHES)
@ -223,6 +308,48 @@ class GenericIoTHubClient(AbstractIoTHubClient):
logger.info("twin patch received")
return patch
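Both twin directions above traffic in plain JSON dicts. How a desired-properties patch merges into a local copy of the twin can be sketched as follows, assuming standard twin merge semantics (a null value deletes a key, nested objects merge recursively); the helper itself is illustrative, not part of the SDK:

```python
def apply_twin_patch(properties, patch):
    for key, value in patch.items():
        if value is None:
            properties.pop(key, None)  # null deletes the property
        elif isinstance(value, dict) and isinstance(properties.get(key), dict):
            apply_twin_patch(properties[key], value)  # nested objects merge
        else:
            properties[key] = value  # scalars and new keys overwrite
    return properties
```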
async def get_storage_info_for_blob(self, blob_name):
"""Sends a POST request over HTTP to an IoTHub endpoint that will return information for uploading via the Azure Storage Account linked to the IoTHub your device is connected to.
:param str blob_name: The name of the blob that will be uploaded using the storage API. This name will be used to generate the proper credentials for Storage, and needs to match what will be used with the Azure Storage SDK to perform the blob upload.
:returns: A JSON-like (dictionary) object from IoT Hub that will contain relevant information including: correlationId, hostName, containerName, blobName, sasToken.
"""
get_storage_info_for_blob_async = async_adapter.emulate_async(
self._http_pipeline.get_storage_info_for_blob
)
callback = async_adapter.AwaitableCallback(return_arg_name="storage_info")
await get_storage_info_for_blob_async(blob_name=blob_name, callback=callback)
storage_info = await handle_result(callback)
logger.info("Successfully retrieved storage_info")
return storage_info
async def notify_blob_upload_status(
self, correlation_id, is_success, status_code, status_description
):
"""When the upload is complete, the device sends a POST request to the IoT Hub endpoint with information on the status of an upload to blob attempt. This is used by IoT Hub to notify listening clients.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status of the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
"""
notify_blob_upload_status_async = async_adapter.emulate_async(
self._http_pipeline.notify_blob_upload_status
)
callback = async_adapter.AwaitableCallback()
await notify_blob_upload_status_async(
correlation_id=correlation_id,
is_success=is_success,
status_code=status_code,
status_description=status_description,
callback=callback,
)
await handle_result(callback)
logger.info("Successfully notified blob upload status")
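The dict returned by `get_storage_info_for_blob` carries everything needed to address the blob; the `correlationId` is held back and echoed later via `notify_blob_upload_status`. A sketch of composing the SAS upload URL from those fields (field names are the ones listed in the docstring above; the `sasToken` is assumed to already carry its leading `?`):

```python
def blob_upload_url(storage_info):
    # correlationId is not part of the URL; it is only needed later,
    # when reporting the upload result back to IoT Hub.
    return "https://{hostName}/{containerName}/{blobName}{sasToken}".format(
        hostName=storage_info["hostName"],
        containerName=storage_info["containerName"],
        blobName=storage_info["blobName"],
        sasToken=storage_info["sasToken"],
    )
```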
class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
"""An asynchronous device client that connects to an Azure IoT Hub instance.
@ -230,16 +357,16 @@ class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
Intended for usage with Python 3.5.3+
"""
def __init__(self, iothub_pipeline):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for an IoTHubDeviceClient.
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super().__init__(iothub_pipeline=iothub_pipeline)
super().__init__(iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline)
self._iothub_pipeline.on_c2d_message_received = self._inbox_manager.route_c2d_message
async def receive_message(self):
@ -248,6 +375,7 @@ class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
If no message is yet available, will wait until an item is available.
:returns: Message that was sent from the Azure IoT Hub.
:rtype: :class:`azure.iot.device.Message`
"""
if not self._iothub_pipeline.feature_enabled[constant.C2D_MSG]:
await self._enable_feature(constant.C2D_MSG)
@ -265,18 +393,16 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
Intended for usage with Python 3.5.3+
"""
def __init__(self, iothub_pipeline, edge_pipeline=None):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for an IoTHubModuleClient.
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline
:param edge_pipeline: (OPTIONAL) The pipeline used to connect to the Edge endpoint.
:type edge_pipeline: EdgePipeline
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super().__init__(iothub_pipeline=iothub_pipeline, edge_pipeline=edge_pipeline)
super().__init__(iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline)
self._iothub_pipeline.on_input_message_received = self._inbox_manager.route_input_message
async def send_message_to_output(self, message, output_name):
@ -287,13 +413,27 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
If the connection to the service has not previously been opened by a call to connect, this
function will open the connection before sending the event.
:param message: message to send to the given output. Anything passed that is not an instance of the
Message class will be converted to Message object.
:param output_name: Name of the output to send the event to.
:param message: Message to send to the given output. Anything passed that is not an
instance of the Message class will be converted to Message object.
:type message: :class:`azure.iot.device.Message` or str
:param str output_name: Name of the output to send the event to.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
"""
if not isinstance(message, Message):
message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of message can not exceed 256 KB.")
message.output_name = output_name
logger.info("Sending message to output:" + output_name + "...")
@ -303,7 +443,7 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
callback = async_adapter.AwaitableCallback()
await send_output_event_async(message, callback=callback)
await callback.completion()
await handle_result(callback)
logger.info("Successfully sent message to output: " + output_name)
@ -313,7 +453,9 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
If no message is yet available, will wait until an item is available.
:param str input_name: The input name to receive a message on.
:returns: Message that was sent to the specified input.
:rtype: :class:`azure.iot.device.Message`
"""
if not self._iothub_pipeline.feature_enabled[constant.INPUT_MSG]:
await self._enable_feature(constant.INPUT_MSG)
@ -323,3 +465,21 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
message = await inbox.get()
logger.info("Input message received on: " + input_name)
return message
async def invoke_method(self, method_params, device_id, module_id=None):
"""Invoke a method from your client onto a device or module client, and receive the response to the method call.
:param dict method_params: Should contain a method_name, payload, connect_timeout_in_seconds, response_timeout_in_seconds.
:param str device_id: Device ID of the target device where the method will be invoked.
:param str module_id: Module ID of the target module where the method will be invoked. (Optional)
:returns: method_result containing a status and a payload
:rtype: dict
"""
invoke_method_async = async_adapter.emulate_async(self._http_pipeline.invoke_method)
callback = async_adapter.AwaitableCallback(return_arg_name="invoke_method_response")
await invoke_method_async(device_id, method_params, callback=callback, module_id=module_id)
method_response = await handle_result(callback)
logger.info("Successfully invoked method")
return method_response
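A sketch of assembling the `method_params` dict that `invoke_method` expects. The key names are taken from the docstring above; the helper and its default timeouts are illustrative, not SDK API:

```python
def make_method_params(method_name, payload=None,
                       connect_timeout_in_seconds=10,
                       response_timeout_in_seconds=30):
    # Key names follow the invoke_method docstring; defaults are assumptions.
    return {
        "method_name": method_name,
        "payload": payload,
        "connect_timeout_in_seconds": connect_timeout_in_seconds,
        "response_timeout_in_seconds": response_timeout_in_seconds,
    }
```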


@ -10,6 +10,7 @@ import abc
import logging
import math
import six
import weakref
from threading import Timer
import six.moves.urllib as urllib
from .authentication_provider import AuthenticationProvider
@ -60,10 +61,9 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
self._token_update_timer = None
self.shared_access_key_name = None
self.sas_token_str = None
self.on_sas_token_updated_handler = None
self.on_sas_token_updated_handler_list = []
def disconnect(self):
"""Cancel updates to the SAS Token"""
def __del__(self):
self._cancel_token_update_timer()
def generate_new_sas_token(self):
@ -81,14 +81,14 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
If self.token_update_callback is set, this callback will be called to notify the
pipeline that a new token is available. The pipeline is responsible for doing
whatever is necessary to leverage the new token when the on_sas_token_updated_handler
whatever is necessary to leverage the new token when the on_sas_token_updated_handler_list
function is called.
The token that is generated expires at some point in the future, based on the token
renewal interval and the token renewal margin. When a token is first generated, the
authorization provider object will set a timer which will be responsible for renewing
the token before it expires. When this timer fires, it will automatically generate
a new sas token and notify the pipeline by calling self.on_sas_token_updated_handler.
a new sas token and notify the pipeline by calling self.on_sas_token_updated_handler_list.
The token update timer is set based on two numbers: self.token_validity_period and
self.token_renewal_margin
@ -144,7 +144,11 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
t = self._token_update_timer
self._token_update_timer = None
if t:
logger.debug("Canceling token update timer for (%s,%s)", self.device_id, self.module_id)
logger.debug(
"Canceling token update timer for (%s,%s)",
self.device_id,
self.module_id if self.module_id else "",
)
t.cancel()
def _schedule_token_update(self, seconds_until_update):
@ -160,9 +164,30 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
seconds_until_update,
)
# It's important to use a weak reference to self inside this timer function
# because we don't want the timer to prevent this object (`self`) from being collected.
#
# We want `self` to get collected when the pipeline gets collected, and
# we want the pipeline to get collected when the client object gets collected.
# This way, everything gets cleaned up when the user is done with the client object,
# as expected.
#
# If timerfunc used `self` directly, that would be a strong reference, and that strong
# reference would prevent `self` from being collected as long as the timer existed.
#
# If this isn't collected when the client is collected, then the object that implements the
# on_sas_token_updated_handler doesn't get collected. Since that object is part of the
# pipeline, a major part of the pipeline ends up staying around, probably orphaned from
# the client. Since that orphaned part of the pipeline contains Paho, bad things can happen
# if we don't clean up Paho correctly. This is especially noticeable if one process
# destroys a client object and creates a new one.
#
self_weakref = weakref.ref(self)
def timerfunc():
logger.debug("Timed SAS update for (%s,%s)", self.device_id, self.module_id)
self.generate_new_sas_token()
this = self_weakref()
logger.debug("Timed SAS update for (%s,%s)", this.device_id, this.module_id)
this.generate_new_sas_token()
self._token_update_timer = Timer(seconds_until_update, timerfunc)
self._token_update_timer.daemon = True
@ -173,14 +198,15 @@ class BaseRenewableTokenAuthenticationProvider(AuthenticationProvider):
In response to this event, clients should re-initiate their connection in order to use
the updated sas token.
"""
if self.on_sas_token_updated_handler:
if self.on_sas_token_updated_handler_list:
logger.debug(
"sending token update notification for (%s, %s)", self.device_id, self.module_id
)
self.on_sas_token_updated_handler()
for x in self.on_sas_token_updated_handler_list:
x()
else:
logger.warning(
"_notify_token_updated: on_sas_token_updated_handler not set. Doing nothing."
"_notify_token_updated: on_sas_token_updated_handler_list is empty. Doing nothing."
)
def get_current_sas_token(self):


@ -12,14 +12,15 @@ import requests
import requests_unixsocket
import logging
from .base_renewable_token_authentication_provider import BaseRenewableTokenAuthenticationProvider
from azure.iot.device import constant
from azure.iot.device.common.chainable_exception import ChainableException
from azure.iot.device.product_info import ProductInfo
requests_unixsocket.monkeypatch()
logger = logging.getLogger(__name__)
class IoTEdgeError(Exception):
class IoTEdgeError(ChainableException):
pass
@ -56,7 +57,7 @@ class IoTEdgeAuthenticationProvider(BaseRenewableTokenAuthenticationProvider):
workload_uri=workload_uri,
)
self.gateway_hostname = gateway_hostname
self.ca_cert = self.hsm.get_trust_bundle()
self.server_verification_cert = self.hsm.get_trust_bundle()
# TODO: reconsider this design when refactoring the BaseRenewableToken auth parent
# TODO: Consider handling the quoting within this function, and renaming quoted_resource_uri to resource_uri
@ -107,7 +108,7 @@ class IoTEdgeHsm(object):
Return the trust bundle that can be used to validate the server-side SSL
TLS connection that we use to talk to edgeHub.
:return: The CA certificate to use for connections to the Azure IoT Edge
:return: The server verification certificate to use for connections to the Azure IoT Edge
instance, as a PEM certificate in string form.
:raises: IoTEdgeError if unable to retrieve the certificate.
@@ -115,23 +116,23 @@
r = requests.get(
self.workload_uri + "trust-bundle",
params={"api-version": self.api_version},
headers={"User-Agent": urllib.parse.quote_plus(constant.USER_AGENT)},
headers={"User-Agent": urllib.parse.quote_plus(ProductInfo.get_iothub_user_agent())},
)
# Validate that the request was successful
try:
r.raise_for_status()
except requests.exceptions.HTTPError:
raise IoTEdgeError("Unable to get trust bundle from EdgeHub")
except requests.exceptions.HTTPError as e:
raise IoTEdgeError(message="Unable to get trust bundle from EdgeHub", cause=e)
# Decode the trust bundle
try:
bundle = r.json()
except ValueError:
raise IoTEdgeError("Unable to decode trust bundle")
except ValueError as e:
raise IoTEdgeError(message="Unable to decode trust bundle", cause=e)
# Retrieve the certificate
try:
cert = bundle["certificate"]
except KeyError:
raise IoTEdgeError("No certificate in trust bundle")
except KeyError as e:
raise IoTEdgeError(message="No certificate in trust bundle", cause=e)
return cert
def sign(self, data_str):
@@ -161,21 +162,21 @@ class IoTEdgeHsm(object):
r = requests.post( # TODO: can we use json field instead of data?
url=path,
params={"api-version": self.api_version},
headers={"User-Agent": urllib.parse.quote_plus(constant.USER_AGENT)},
headers={"User-Agent": urllib.parse.quote_plus(ProductInfo.get_iothub_user_agent())},
data=json.dumps(sign_request),
)
try:
r.raise_for_status()
except requests.exceptions.HTTPError:
raise IoTEdgeError("Unable to sign data")
except requests.exceptions.HTTPError as e:
raise IoTEdgeError(message="Unable to sign data", cause=e)
try:
sign_response = r.json()
except ValueError:
raise IoTEdgeError("Unable to decode signed data")
except ValueError as e:
raise IoTEdgeError(message="Unable to decode signed data", cause=e)
try:
signed_data_str = sign_response["digest"]
except KeyError:
raise IoTEdgeError("No signed data received")
except KeyError as e:
raise IoTEdgeError(message="No signed data received", cause=e)
return urllib.parse.quote(signed_data_str)
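The `cause=e` pattern used throughout relies on a chainable exception; a minimal sketch of the idea (the real `ChainableException` lives in `azure.iot.device.common` and may differ in detail):

```python
class ChainableException(Exception):
    """Exception that preserves the lower-level error that caused it."""

    def __init__(self, message=None, cause=None):
        super(ChainableException, self).__init__(message)
        self.cause = cause

    def __repr__(self):
        if self.cause:
            return "{}: {} (caused by {!r})".format(
                self.__class__.__name__, self.args[0], self.cause
            )
        return "{}: {}".format(self.__class__.__name__, self.args[0])


class IoTEdgeError(ChainableException):
    pass


def get_cert(fetch):
    """Illustrative wrapper: translate a low-level failure without losing it."""
    try:
        return fetch()
    except ValueError as e:
        raise IoTEdgeError(message="Unable to decode trust bundle", cause=e)
```

Keeping the original exception on `cause` lets higher layers log the root failure instead of only the translated message.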

View file

@@ -64,7 +64,7 @@ class SymmetricKeyAuthenticationProvider(BaseRenewableTokenAuthenticationProvide
self.shared_access_key = shared_access_key
self.shared_access_key_name = shared_access_key_name
self.gateway_hostname = gateway_hostname
self.ca_cert = None
self.server_verification_cert = None
@staticmethod
def parse(connection_string):

View file

@@ -6,6 +6,7 @@
"""This module contains a class representing messages that are sent or received.
"""
from azure.iot.device import constant
import sys
# TODO: Revise this class. Does all of this REALLY need to be here?
@@ -15,7 +16,7 @@ class Message(object):
:ivar data: The data that constitutes the payload
:ivar custom_properties: Dictionary of custom message properties
:ivar lock_token: Used by receiver to abandon, reject or complete the message
:ivar message id: A user-settlable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}
:ivar message id: A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}
:ivar sequence_number: A number (unique per device-queue) assigned by IoT Hub to each message
:ivar to: A destination specified for Cloud-to-Device (C2D) messages
:ivar expiry_time_utc: Date and time of message expiration in UTC format
@@ -36,8 +37,8 @@ class Message(object):
:param data: The data that constitutes the payload
:param str message_id: A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + {'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}
:param str content_encoding: Content encoding of the message data. Can be 'utf-8', 'utf-16' or 'utf-32'
:param str content_type: Content type property used to routes with the message body. Can be 'application/json'
:param str content_encoding: Content encoding of the message data. Other values can be 'utf-16' or 'utf-32'
:param str content_type: Content type property used for routing on the message body.
:param str output_name: Name of the output that the event is being sent to.
"""
self.data = data
@@ -70,3 +71,16 @@ class Message(object):
def __str__(self):
return str(self.data)
def get_size(self):
total = 0
total = total + sum(
sys.getsizeof(v)
for v in self.__dict__.values()
if v is not None and v is not self.custom_properties
)
if self.custom_properties:
total = total + sum(
sys.getsizeof(v) for v in self.custom_properties.values() if v is not None
)
return total
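`get_size` totals `sys.getsizeof` over the instance attributes (skipping the custom-properties dict itself) and then adds the custom-property values; note this measures in-memory Python object size, not wire size. A trimmed standalone version behaves like this:

```python
import sys

class Message(object):
    """Trimmed sketch of the size accounting shown above (attributes reduced)."""

    def __init__(self, data):
        self.data = data
        self.custom_properties = {}
        self.message_id = None

    def get_size(self):
        # Sum attribute sizes, excluding None values and the properties dict.
        total = sum(
            sys.getsizeof(v)
            for v in self.__dict__.values()
            if v is not None and v is not self.custom_properties
        )
        if self.custom_properties:
            # Add the sizes of the custom property values themselves.
            total += sum(
                sys.getsizeof(v) for v in self.custom_properties.values() if v is not None
            )
        return total
```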

View file

@@ -6,4 +6,5 @@ INTERNAL USAGE ONLY
"""
from .iothub_pipeline import IoTHubPipeline
from .edge_pipeline import EdgePipeline
from .http_pipeline import HTTPPipeline
from .config import IoTHubPipelineConfig

View file

@@ -0,0 +1,30 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
from azure.iot.device.common.pipeline.config import BasePipelineConfig
logger = logging.getLogger(__name__)
class IoTHubPipelineConfig(BasePipelineConfig):
"""A class for storing all configurations/options for IoTHub clients in the Azure IoT Python Device Client Library.
"""
def __init__(self, product_info="", **kwargs):
"""Initializer for IoTHubPipelineConfig which passes all unrecognized keyword-args down to BasePipelineConfig
to be evaluated. This stacked options setting is to allow for unique configuration options to exist between the
IoTHub Client and the Provisioning Client, while maintaining a base configuration class with shared config options.
:param str product_info: A custom identification string for the type of device connecting to Azure IoT Hub.
"""
super(IoTHubPipelineConfig, self).__init__(**kwargs)
self.product_info = product_info
# Now, the parameters below are not exposed to the user via kwargs. They need to be set by manipulating the IoTHubPipelineConfig object.
# They are not in the BasePipelineConfig because these do not apply to the provisioning client.
self.blob_upload = False
self.method_invoke = False
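The stacked-kwargs configuration described in the docstring — each subclass consumes its own options and forwards the rest to the base — can be sketched as follows (option names are illustrative, not the SDK's exact set):

```python
class BasePipelineConfig(object):
    """Options shared by all clients."""

    def __init__(self, websockets=False, **kwargs):
        if kwargs:
            # Anything left over was not recognized by any layer.
            raise TypeError("Unsupported keyword arguments: {}".format(list(kwargs)))
        self.websockets = websockets


class IoTHubPipelineConfig(BasePipelineConfig):
    """IoTHub-only options layered on top of the shared ones."""

    def __init__(self, product_info="", **kwargs):
        # Consume product_info here; pass everything else down.
        super(IoTHubPipelineConfig, self).__init__(**kwargs)
        self.product_info = product_info
        # Not user-settable via kwargs; toggled by the client internally.
        self.blob_upload = False
```

This lets the provisioning client share the base options without inheriting IoTHub-only ones.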

View file

@@ -0,0 +1,22 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines an exception surface, exposed as part of the pipeline API"""
# For now, present relevant transport errors as part of the Pipeline API surface
# so that they do not have to be duplicated at this layer.
from azure.iot.device.common.pipeline.pipeline_exceptions import *
from azure.iot.device.common.transport_exceptions import (
ConnectionFailedError,
ConnectionDroppedError,
# TODO: UnauthorizedError (the one from transport) should probably not surface out of
# the pipeline due to confusion with the higher level service UnauthorizedError. It
# should probably get turned into some other error instead (e.g. ConnectionFailedError).
# But for now, this is a stopgap.
UnauthorizedError,
ProtocolClientError,
TlsExchangeAuthError,
ProtocolProxyError,
)

View file

@@ -0,0 +1,67 @@
def translate_error(sc, reason):
"""
Codes_SRS_NODE_IOTHUB_REST_API_CLIENT_16_012: [Any error object returned by translate_error shall inherit from the generic Error Javascript object and have 3 properties:
- response shall contain the IncomingMessage object returned by the HTTP layer.
- responseBody shall contain the content of the HTTP response.
- message shall contain a human-readable error message.]
"""
message = "Error: {}".format(reason)
if sc == 400:
# translate_error shall return an ArgumentError if the HTTP response status code is 400.
error = "ArgumentError({})".format(message)
elif sc == 401:
# translate_error shall return an UnauthorizedError if the HTTP response status code is 401.
error = "UnauthorizedError({})".format(message)
elif sc == 403:
# translate_error shall return a TooManyDevicesError if the HTTP response status code is 403.
error = "TooManyDevicesError({})".format(message)
elif sc == 404:
if reason == "Device Not Found":
# translate_error shall return a DeviceNotFoundError if the HTTP response status code is 404 and if the error code within the body of the error response is DeviceNotFound.
error = "DeviceNotFoundError({})".format(message)
elif reason == "IoTHub Not Found":
# translate_error shall return an IotHubNotFoundError if the HTTP response status code is 404 and if the error code within the body of the error response is IotHubNotFound.
error = "IotHubNotFoundError({})".format(message)
else:
error = "Error('Not found')"
elif sc == 408:
# translate_error shall return a DeviceTimeoutError if the HTTP response status code is 408.
error = "DeviceTimeoutError({})".format(message)
elif sc == 409:
# translate_error shall return a DeviceAlreadyExistsError if the HTTP response status code is 409.
error = "DeviceAlreadyExistsError({})".format(message)
elif sc == 412:
# translate_error shall return an InvalidEtagError if the HTTP response status code is 412.
error = "InvalidEtagError({})".format(message)
elif sc == 429:
# translate_error shall return a ThrottlingError if the HTTP response status code is 429.
error = "ThrottlingError({})".format(message)
elif sc == 500:
# translate_error shall return an InternalServerError if the HTTP response status code is 500.
error = "InternalServerError({})".format(message)
elif sc == 502:
# translate_error shall return a BadDeviceResponseError if the HTTP response status code is 502.
error = "BadDeviceResponseError({})".format(message)
elif sc == 503:
# translate_error shall return a ServiceUnavailableError if the HTTP response status code is 503.
error = "ServiceUnavailableError({})".format(message)
elif sc == 504:
# translate_error shall return a GatewayTimeoutError if the HTTP response status code is 504.
error = "GatewayTimeoutError({})".format(message)
else:
# If the HTTP error code is unknown, translate_error should return a generic Javascript Error object.
error = "Error({})".format(message)
return error
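The `elif` ladder is equivalent to a lookup table plus a special-cased 404; a condensed rewrite with the same outputs, for comparison:

```python
# Condensed, table-driven equivalent of the status-code mapping above
# (404 handled separately because it depends on the reason text).
_ERROR_NAMES = {
    400: "ArgumentError", 401: "UnauthorizedError", 403: "TooManyDevicesError",
    408: "DeviceTimeoutError", 409: "DeviceAlreadyExistsError",
    412: "InvalidEtagError", 429: "ThrottlingError", 500: "InternalServerError",
    502: "BadDeviceResponseError", 503: "ServiceUnavailableError",
    504: "GatewayTimeoutError",
}

def translate_error(sc, reason):
    message = "Error: {}".format(reason)
    if sc == 404:
        if reason == "Device Not Found":
            return "DeviceNotFoundError({})".format(message)
        if reason == "IoTHub Not Found":
            return "IotHubNotFoundError({})".format(message)
        return "Error('Not found')"
    # Unknown status codes fall back to the generic Error form.
    name = _ERROR_NAMES.get(sc, "Error")
    return "{}({})".format(name, message)
```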

View file

@@ -0,0 +1,44 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import six.moves.urllib as urllib
logger = logging.getLogger(__name__)
def get_method_invoke_path(device_id, module_id=None):
"""
:return: The path for invoking methods from one module to a device or module. It is of the format
twins/uri_encode($device_id)/modules/uri_encode($module_id)/methods
"""
if module_id:
return "twins/{device_id}/modules/{module_id}/methods".format(
device_id=urllib.parse.quote_plus(device_id),
module_id=urllib.parse.quote_plus(module_id),
)
else:
return "twins/{device_id}/methods".format(device_id=urllib.parse.quote_plus(device_id))
def get_storage_info_for_blob_path(device_id):
"""
This does not take a module_id since get_storage_info_for_blob_path should only ever be invoked on device clients.
:return: The path for getting the storage sdk credential information from IoT Hub. It is of the format
devices/uri_encode($device_id)/files
"""
return "devices/{}/files".format(urllib.parse.quote_plus(device_id))
def get_notify_blob_upload_status_path(device_id):
"""
This does not take a module_id since get_notify_blob_upload_status_path should only ever be invoked on device clients.
:return: The path for notifying IoT Hub of the blob upload status. It is of the format
devices/uri_encode($device_id)/files/notifications
"""
return "devices/{}/files/notifications".format(urllib.parse.quote_plus(device_id))

View file

@@ -0,0 +1,170 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import sys
from azure.iot.device.common.evented_callback import EventedCallback
from azure.iot.device.common.pipeline import (
pipeline_stages_base,
pipeline_ops_base,
pipeline_stages_http,
)
from azure.iot.device.iothub.pipeline import exceptions as pipeline_exceptions
from . import (
constant,
pipeline_stages_iothub,
pipeline_ops_iothub,
pipeline_ops_iothub_http,
pipeline_stages_iothub_http,
)
from azure.iot.device.iothub.auth.x509_authentication_provider import X509AuthenticationProvider
logger = logging.getLogger(__name__)
class HTTPPipeline(object):
"""Pipeline to communicate with Edge.
Uses HTTP.
"""
def __init__(self, auth_provider, pipeline_configuration):
"""
Constructor for instantiating a pipeline adapter object.
:param auth_provider: The authentication provider
:param pipeline_configuration: The configuration generated based on user inputs
"""
self._pipeline = (
pipeline_stages_base.PipelineRootStage(pipeline_configuration=pipeline_configuration)
.append_stage(pipeline_stages_iothub.UseAuthProviderStage())
.append_stage(pipeline_stages_iothub_http.IoTHubHTTPTranslationStage())
.append_stage(pipeline_stages_http.HTTPTransportStage())
)
callback = EventedCallback()
if isinstance(auth_provider, X509AuthenticationProvider):
op = pipeline_ops_iothub.SetX509AuthProviderOperation(
auth_provider=auth_provider, callback=callback
)
else: # Currently everything else goes via this block.
op = pipeline_ops_iothub.SetAuthProviderOperation(
auth_provider=auth_provider, callback=callback
)
self._pipeline.run_op(op)
callback.wait_for_completion()
def invoke_method(self, device_id, method_params, callback, module_id=None):
"""
Send a request to the service to invoke a method on a target device or module.
:param device_id: The target device id
:param method_params: The method parameters to be invoked on the target client
:param callback: callback which is called when request has been fulfilled.
On success, this callback is called with the error=None.
On failure, this callback is called with error set to the cause of the failure.
:param module_id: The target module id
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline invoke_method called")
if not self._pipeline.pipeline_configuration.method_invoke:
# If this parameter is not set, that means that the pipeline was not generated by the edge environment. Method invoke only works for clients generated using the edge environment.
error = pipeline_exceptions.PipelineError(
"invoke_method called, but it is only supported on module clients generated from an edge environment. If you are not using a module generated from an edge environment, you cannot use invoke_method"
)
return callback(error=error)
def on_complete(op, error):
callback(error=error, invoke_method_response=op.method_response)
self._pipeline.run_op(
pipeline_ops_iothub_http.MethodInvokeOperation(
target_device_id=device_id,
target_module_id=module_id,
method_params=method_params,
callback=on_complete,
)
)
def get_storage_info_for_blob(self, blob_name, callback):
"""
Sends a POST request to the IoT Hub service endpoint to retrieve an object that contains information for uploading via the Storage SDK.
:param blob_name: The name of the blob that will be uploaded via the Azure Storage SDK.
:param callback: callback which is called when request has been fulfilled.
On success, this callback is called with the error=None, and the storage_info set to the information JSON received from the service.
On failure, this callback is called with error set to the cause of the failure, and the storage_info=None.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline get_storage_info_for_blob called")
if not self._pipeline.pipeline_configuration.blob_upload:
# If this parameter is not set, that means this is not a device client. Upload to blob is not supported on module clients.
error = pipeline_exceptions.PipelineError(
"get_storage_info_for_blob called, but it is only supported for use with device clients. Ensure you are using a device client."
)
return callback(error=error)
def on_complete(op, error):
callback(error=error, storage_info=op.storage_info)
self._pipeline.run_op(
pipeline_ops_iothub_http.GetStorageInfoOperation(
blob_name=blob_name, callback=on_complete
)
)
def notify_blob_upload_status(
self, correlation_id, is_success, status_code, status_description, callback
):
"""
Sends a POST request to an IoT Hub service endpoint to notify it of the status of the Storage SDK call for a blob upload.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
:param callback: callback which is called when request has been fulfilled.
On success, this callback is called with the error=None.
On failure, this callback is called with error set to the cause of the failure.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline notify_blob_upload_status called")
if not self._pipeline.pipeline_configuration.blob_upload:
# If this parameter is not set, that means this is not a device client. Upload to blob is not supported on module clients.
error = pipeline_exceptions.PipelineError(
"notify_blob_upload_status called, but it is only supported for use with device clients. Ensure you are using a device client."
)
return callback(error=error)
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_iothub_http.NotifyBlobUploadStatusOperation(
correlation_id=correlation_id,
is_success=is_success,
status_code=status_code,
status_description=status_description,
callback=on_complete,
)
)
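All three methods share a guard pattern: when the pipeline wasn't configured for the feature, the failure is reported through the callback's `error` parameter rather than raised. Isolated as a sketch (names illustrative):

```python
class PipelineError(Exception):
    pass

def guarded_call(feature_enabled, do_operation, callback):
    """Run do_operation only if the feature flag is set; otherwise report
    the failure through the callback's error parameter, never by raising."""
    if not feature_enabled:
        return callback(error=PipelineError("feature not supported on this client"))
    # Hand the operation a completion callable that forwards any error.
    do_operation(lambda error=None: callback(error=error))
```

Returning through the callback keeps the sync/async client wrappers uniform: every outcome, success or failure, arrives the same way.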

View file

@@ -25,11 +25,13 @@ logger = logging.getLogger(__name__)
class IoTHubPipeline(object):
def __init__(self, auth_provider):
def __init__(self, auth_provider, pipeline_configuration):
"""
Constructor for instantiating a pipeline adapter object
:param auth_provider: The authentication provider
:param pipeline_configuration: The configuration generated based on user inputs
"""
self.feature_enabled = {
constant.C2D_MSG: False,
constant.INPUT_MSG: False,
@@ -46,14 +48,70 @@ class IoTHubPipeline(object):
self.on_method_request_received = None
self.on_twin_patch_received = None
# Currently a single timeout stage and a single retry stage for MQTT retry only.
# Later, a higher level timeout and a higher level retry stage.
self._pipeline = (
pipeline_stages_base.PipelineRootStage()
#
# The root is always the root. By definition, it's the first stage in the pipeline.
#
pipeline_stages_base.PipelineRootStage(pipeline_configuration=pipeline_configuration)
#
# UseAuthProviderStage comes near the root by default because it doesn't need to be after
# anything, but it does need to be before IoTHubMQTTTranslationStage.
#
.append_stage(pipeline_stages_iothub.UseAuthProviderStage())
.append_stage(pipeline_stages_iothub.HandleTwinOperationsStage())
#
# TwinRequestResponseStage comes near the root by default because it doesn't need to be
# after anything
#
.append_stage(pipeline_stages_iothub.TwinRequestResponseStage())
#
# CoordinateRequestAndResponseStage needs to be after TwinRequestResponseStage because
# TwinRequestResponseStage creates the request ops that CoordinateRequestAndResponseStage
# is coordinating. It needs to be before IoTHubMQTTTranslationStage because that stage
# operates on ops that CoordinateRequestAndResponseStage produces
#
.append_stage(pipeline_stages_base.CoordinateRequestAndResponseStage())
.append_stage(pipeline_stages_iothub_mqtt.IoTHubMQTTConverterStage())
.append_stage(pipeline_stages_base.EnsureConnectionStage())
.append_stage(pipeline_stages_base.SerializeConnectOpsStage())
#
# IoTHubMQTTTranslationStage comes here because this is the point where we can translate
# all operations directly into MQTT. After this stage, only pipeline_stages_base stages
# are allowed because IoTHubMQTTTranslationStage removes all the IoTHub-ness from the ops
#
.append_stage(pipeline_stages_iothub_mqtt.IoTHubMQTTTranslationStage())
#
# AutoConnectStage comes here because only MQTT ops have the need_connection flag set
# and this is the first place in the pipeline where we can guarantee that all network
# ops are MQTT ops.
#
.append_stage(pipeline_stages_base.AutoConnectStage())
#
# ReconnectStage needs to be after AutoConnectStage because ReconnectStage sets/clears
# the virtually_connected flag and we want an automatic connection op to set this flag so
# we can reconnect autoconnect operations. This is important, for example, if a
# send_message causes the transport to automatically connect, but that connection fails.
# When that happens, the ReconnectStage will hold onto the ConnectOperation until it
# succeeds, and only then will return success to the AutoConnectStage which will
# allow the publish to continue.
#
.append_stage(pipeline_stages_base.ReconnectStage())
#
# ConnectionLockStage needs to be after ReconnectStage because we want any ops that
# ReconnectStage creates to go through the ConnectionLockStage gate
#
.append_stage(pipeline_stages_base.ConnectionLockStage())
#
# RetryStage needs to be near the end because it's retrying low-level MQTT operations.
#
.append_stage(pipeline_stages_base.RetryStage())
#
# OpTimeoutStage needs to be after RetryStage because OpTimeoutStage returns the timeout
# errors that RetryStage is watching for.
#
.append_stage(pipeline_stages_base.OpTimeoutStage())
#
# MQTTTransportStage needs to be at the very end of the pipeline because this is where
# operations turn into network traffic
#
.append_stage(pipeline_stages_mqtt.MQTTTransportStage())
)
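`append_stage` returns the root stage, which is what makes the long chained construction above possible. A toy version of the linked-stage structure (stage names illustrative):

```python
class PipelineStage(object):
    """Toy pipeline stage: operations flow down the `next` chain."""

    def __init__(self, name):
        self.name = name
        self.next = None

    def append_stage(self, stage):
        # Walk to the tail, attach, and return self so calls chain.
        tail = self
        while tail.next:
            tail = tail.next
        tail.next = stage
        return self

    def stage_names(self):
        stage, names = self, []
        while stage:
            names.append(stage.name)
            stage = stage.next
        return names

pipeline = (
    PipelineStage("root")
    .append_stage(PipelineStage("auto_connect"))
    .append_stage(PipelineStage("reconnect"))
    .append_stage(PipelineStage("mqtt_transport"))
)
```

Because each call returns the root, stage order in source code reads top-to-bottom in the same order operations traverse the pipeline.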
@@ -110,23 +168,25 @@ class IoTHubPipeline(object):
self._pipeline.run_op(op)
callback.wait_for_completion()
if op.error:
logger.error("{} failed: {}".format(op.name, op.error))
raise op.error
def connect(self, callback):
"""
Connect to the service.
:param callback: callback which is called when the connection to the service is complete.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("Starting ConnectOperation on the pipeline")
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(pipeline_ops_base.ConnectOperation(callback=on_complete))
@@ -135,14 +195,16 @@ class IoTHubPipeline(object):
Disconnect from the service.
:param callback: callback which is called when the connection to the service has been disconnected
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("Starting DisconnectOperation on the pipeline")
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(pipeline_ops_base.DisconnectOperation(callback=on_complete))
@@ -152,13 +214,18 @@ class IoTHubPipeline(object):
:param message: message to send.
:param callback: callback which is called when the message publish has been acknowledged by the service.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_iothub.SendD2CMessageOperation(message=message, callback=on_complete)
@@ -170,13 +237,18 @@ class IoTHubPipeline(object):
:param message: message to send.
:param callback: callback which is called when the message publish has been acknowledged by the service.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_iothub.SendOutputEventOperation(message=message, callback=on_complete)
@@ -188,14 +260,19 @@ class IoTHubPipeline(object):
:param method_response: the method response to send
:param callback: callback which is called when response has been acknowledged by the service
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("IoTHubPipeline send_method_response called")
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_iothub.SendMethodResponseOperation(
@@ -208,12 +285,22 @@ class IoTHubPipeline(object):
Send a request for a full twin to the service.
:param callback: callback which is called when request has been acknowledged by the service.
This callback should have one parameter, which will contain the requested twin when called.
This callback should have two parameters. On success, this callback is called with the
requested twin and error=None. On failure, this callback is called with None for the requested
twin and error set to the cause of the failure.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
def on_complete(op):
if op.error:
callback(error=op.error, twin=None)
def on_complete(op, error):
if error:
callback(error=error, twin=None)
else:
callback(twin=op.twin)
@@ -225,13 +312,18 @@ class IoTHubPipeline(object):
:param patch: the reported properties patch to send
:param callback: callback which is called when request has been acknowledged by the service.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_iothub.PatchTwinReportedPropertiesOperation(
@@ -253,11 +345,8 @@ class IoTHubPipeline(object):
raise ValueError("Invalid feature_name")
self.feature_enabled[feature_name] = True
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_base.EnableFeatureOperation(
@@ -279,14 +368,18 @@ class IoTHubPipeline(object):
raise ValueError("Invalid feature_name")
self.feature_enabled[feature_name] = False
def on_complete(op):
if op.error:
callback(error=op.error)
else:
callback()
def on_complete(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_base.DisableFeatureOperation(
feature_name=feature_name, callback=on_complete
)
)
@property
def connected(self):
"""
Read-only property to indicate if the transport is connected or not.
"""
return self._pipeline.connected
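The recurring refactor in this file replaces `on_complete(op)` plus an `op.error` check with an `on_complete(op, error)` signature, so forwarding failures becomes a one-liner. A toy illustration (the op runner here is a stand-in, not the SDK's):

```python
def run_op(op, on_complete):
    """Toy op runner: executes the op and reports (op, error) to the callback,
    matching the two-argument completion signature used above."""
    try:
        op["result"] = op["func"]()
        on_complete(op, None)
    except Exception as e:
        on_complete(op, e)

def connect(callback):
    # With (op, error) callbacks, forwarding errors is a one-liner.
    def on_complete(op, error):
        callback(error=error)
    run_op({"func": lambda: "connected"}, on_complete)
```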

View file

@@ -18,7 +18,7 @@ class SetX509AuthProviderOperation(PipelineOperation):
very IoTHub-specific
"""
def __init__(self, auth_provider, callback=None):
def __init__(self, auth_provider, callback):
"""
Initializer for SetAuthProviderOperation objects.
@@ -42,7 +42,7 @@ class SetAuthProviderOperation(PipelineOperation):
very IoTHub-specific
"""
def __init__(self, auth_provider, callback=None):
def __init__(self, auth_provider, callback):
"""
Initializer for SetAuthProviderOperation objects.
@@ -69,12 +69,12 @@ class SetIoTHubConnectionArgsOperation(PipelineOperation):
self,
device_id,
hostname,
callback,
module_id=None,
gateway_hostname=None,
ca_cert=None,
server_verification_cert=None,
client_cert=None,
sas_token=None,
callback=None,
):
"""
Initializer for SetIoTHubConnectionArgsOperation objects.
@@ -85,8 +85,8 @@ class SetIoTHubConnectionArgsOperation(PipelineOperation):
for the module we are connecting.
:param str gateway_hostname: (optional) If we are going through a gateway host, this is the
hostname for the gateway
:param str ca_cert: (Optional) The CA certificate to use if the server that we're going to
connect to uses server-side TLS
:param str server_verification_cert: (Optional) The server verification certificate to use
if the server that we're going to connect to uses server-side TLS
:param X509 client_cert: (Optional) The x509 object containing a client certificate and key used to connect
to the service
:param str sas_token: The token string which will be used to authenticate with the service
@@ -99,7 +99,7 @@ class SetIoTHubConnectionArgsOperation(PipelineOperation):
self.module_id = module_id
self.hostname = hostname
self.gateway_hostname = gateway_hostname
self.ca_cert = ca_cert
self.server_verification_cert = server_verification_cert
self.client_cert = client_cert
self.sas_token = sas_token
@@ -111,7 +111,7 @@ class SendD2CMessageOperation(PipelineOperation):
This operation is in the group of IoTHub operations because it is very specific to the IoTHub client
"""
def __init__(self, message, callback=None):
def __init__(self, message, callback):
"""
Initializer for SendD2CMessageOperation objects.
@@ -131,7 +131,7 @@ class SendOutputEventOperation(PipelineOperation):
This operation is in the group of IoTHub operations because it is very specific to the IoTHub client
"""
def __init__(self, message, callback=None):
def __init__(self, message, callback):
"""
Initializer for SendOutputEventOperation objects.
@@ -152,7 +152,7 @@ class SendMethodResponseOperation(PipelineOperation):
This operation is in the group of IoTHub operations because it is very specific to the IoTHub client.
"""
def __init__(self, method_response, callback=None):
def __init__(self, method_response, callback):
"""
Initializer for SendMethodResponseOperation objects.
@@ -176,7 +176,7 @@ class GetTwinOperation(PipelineOperation):
:type twin: Twin
"""
def __init__(self, callback=None):
def __init__(self, callback):
"""
Initializer for GetTwinOperation objects.
"""
@@ -190,7 +190,7 @@ class PatchTwinReportedPropertiesOperation(PipelineOperation):
IoT Hub or Azure IoT Edge Hub service.
"""
def __init__(self, patch, callback=None):
def __init__(self, patch, callback):
"""
Initializer for PatchTwinReportedPropertiesOperation object

View file

@@ -0,0 +1,79 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from azure.iot.device.common.pipeline import PipelineOperation
class MethodInvokeOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to send a method invoke request to an IoTHub or EdgeHub server.
This operation is in the group of EdgeHub operations because it is very specific to the EdgeHub client.
"""
def __init__(self, target_device_id, target_module_id, method_params, callback):
"""
Initializer for MethodInvokeOperation objects.
:param str target_device_id: The device id of the target device/module
:param str target_module_id: The module id of the target module
:param method_params: The parameters used to invoke the method, as defined by the IoT Hub specification.
:param callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
:type callback: Function/callable
"""
super(MethodInvokeOperation, self).__init__(callback=callback)
self.target_device_id = target_device_id
self.target_module_id = target_module_id
self.method_params = method_params
self.method_response = None
class GetStorageInfoOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to get the storage information from IoT Hub.
"""
def __init__(self, blob_name, callback):
"""
Initializer for GetStorageInfoOperation objects.
:param str blob_name: The name of the blob that will be created in Azure Storage
:param callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
:type callback: Function/callable
:ivar dict storage_info: Upon completion, this contains the storage information which was retrieved from the service.
"""
super(GetStorageInfoOperation, self).__init__(callback=callback)
self.blob_name = blob_name
self.storage_info = None
class NotifyBlobUploadStatusOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to notify IoT Hub of the status of a blob upload.
"""
def __init__(self, correlation_id, is_success, status_code, status_description, callback):
"""
Initializer for NotifyBlobUploadStatusOperation objects.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
:param callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
:type callback: Function/callable
"""
super(NotifyBlobUploadStatusOperation, self).__init__(callback=callback)
self.correlation_id = correlation_id
self.is_success = is_success
self.request_status_code = status_code
self.status_description = status_description
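The new operation classes above are plain parameter carriers. A hedged, self-contained sketch of constructing and completing one; the base class here is a stand-in for the SDK's `PipelineOperation`, and the completion step is simplified:

```python
# Self-contained sketch of the operation classes above; the base class
# is an illustrative stand-in for the SDK's PipelineOperation.

class PipelineOperationSketch(object):
    def __init__(self, callback):
        self.callback = callback

class NotifyBlobUploadStatusSketch(PipelineOperationSketch):
    def __init__(self, correlation_id, is_success, status_code,
                 status_description, callback):
        super(NotifyBlobUploadStatusSketch, self).__init__(callback)
        self.correlation_id = correlation_id
        self.is_success = is_success
        self.request_status_code = status_code
        self.status_description = status_description

seen = {}

def on_done(op, error):
    seen["status"] = op.request_status_code
    seen["error"] = error

op = NotifyBlobUploadStatusSketch(
    correlation_id="abc123",
    is_success=True,
    status_code=200,
    status_description="upload succeeded",
    callback=on_done,
)
op.callback(op, None)  # in the SDK, the pipeline performs this completion
```

Note how the constructor keyword `status_code` is stored as `request_status_code`, matching the attribute name the HTTP translation stage later reads.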

View file

@@ -6,13 +6,10 @@
import json
import logging
from azure.iot.device.common.pipeline import (
pipeline_ops_base,
PipelineStage,
operation_flow,
pipeline_thread,
)
from azure.iot.device.common import unhandled_exceptions
from azure.iot.device.common.pipeline import pipeline_ops_base, PipelineStage, pipeline_thread
from azure.iot.device import exceptions
from azure.iot.device.common import handle_exceptions
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
from . import pipeline_ops_iothub
from . import constant
@@ -32,138 +29,146 @@ class UseAuthProviderStage(PipelineStage):
"""
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
if isinstance(op, pipeline_ops_iothub.SetAuthProviderOperation):
self.auth_provider = op.auth_provider
self.auth_provider.on_sas_token_updated_handler = self.on_sas_token_updated
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation(
# Append to the handler list rather than assigning a single handler, because
# assigning would overwrite a handler set by another pipeline that might be
# using the same auth provider.
self.auth_provider.on_sas_token_updated_handler_list.append(
CallableWeakMethod(self, "_on_sas_token_updated")
)
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation,
device_id=self.auth_provider.device_id,
module_id=getattr(self.auth_provider, "module_id", None),
module_id=self.auth_provider.module_id,
hostname=self.auth_provider.hostname,
gateway_hostname=getattr(self.auth_provider, "gateway_hostname", None),
ca_cert=getattr(self.auth_provider, "ca_cert", None),
sas_token=self.auth_provider.get_current_sas_token(),
server_verification_cert=getattr(
self.auth_provider, "server_verification_cert", None
),
sas_token=self.auth_provider.get_current_sas_token(),
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_iothub.SetX509AuthProviderOperation):
self.auth_provider = op.auth_provider
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation(
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_iothub.SetIoTHubConnectionArgsOperation,
device_id=self.auth_provider.device_id,
module_id=getattr(self.auth_provider, "module_id", None),
module_id=self.auth_provider.module_id,
hostname=self.auth_provider.hostname,
gateway_hostname=getattr(self.auth_provider, "gateway_hostname", None),
ca_cert=getattr(self.auth_provider, "ca_cert", None),
client_cert=self.auth_provider.get_x509_certificate(),
server_verification_cert=getattr(
self.auth_provider, "server_verification_cert", None
),
client_cert=self.auth_provider.get_x509_certificate(),
)
self.send_op_down(worker_op)
else:
operation_flow.pass_op_to_next_stage(self, op)
super(UseAuthProviderStage, self)._run_op(op)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def on_sas_token_updated(self):
def _on_sas_token_updated(self):
logger.info(
"{}: New sas token received. Passing down UpdateSasTokenOperation.".format(self.name)
)
@pipeline_thread.runs_on_pipeline_thread
def on_token_update_complete(op):
if op.error:
def on_token_update_complete(op, error):
if error:
logger.error(
"{}({}): token update operation failed. Error={}".format(
self.name, op.name, op.error
self.name, op.name, error
)
)
unhandled_exceptions.exception_caught_in_background_thread(op.error)
handle_exceptions.handle_background_exception(error)
else:
logger.debug(
"{}({}): token update operation is complete".format(self.name, op.name)
)
operation_flow.pass_op_to_next_stage(
stage=self,
op=pipeline_ops_base.UpdateSasTokenOperation(
self.send_op_down(
pipeline_ops_base.UpdateSasTokenOperation(
sas_token=self.auth_provider.get_current_sas_token(),
callback=on_token_update_complete,
),
)
)
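Throughout this stage the old `delegate_to_different_op` calls are replaced by `op.spawn_worker_op(...)`. A simplified sketch of that chaining pattern (not the SDK's actual implementation): completing the worker op also completes the op that spawned it.

```python
# Simplified, illustrative sketch of the spawn_worker_op pattern:
# completing the worker op chains completion back to its parent op.

class OperationSketch(object):
    def __init__(self, callback):
        self.callback = callback
        self.completed = False

    def complete(self, error=None):
        self.completed = True
        self.callback(self, error)

    def spawn_worker_op(self, worker_op_type, **kwargs):
        parent = self

        def worker_done(op, error):
            # Chain completion (and any error) back up to the spawning op.
            parent.complete(error=error)

        return worker_op_type(callback=worker_done, **kwargs)

log = []
parent = OperationSketch(callback=lambda op, error: log.append(error))
worker = parent.spawn_worker_op(worker_op_type=OperationSketch)
worker.complete()  # completes the worker, which completes the parent
```

This is why the stages above can simply `send_op_down(worker_op)` and never touch the original op again: its completion is handled by the chain.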
class HandleTwinOperationsStage(PipelineStage):
class TwinRequestResponseStage(PipelineStage):
"""
PipelineStage which handles twin operations. In particular, it converts twin GET and PATCH
operations into SendIotRequestAndWaitForResponseOperation operations. This is done at the IoTHub level because
operations into RequestAndResponseOperation operations. This is done at the IoTHub level because
there is nothing protocol-specific about this code. The protocol-specific implementation
for twin requests and responses is handled inside IoTHubMQTTConverterStage, when it converts
the SendIotRequestOperation to a protocol-specific send operation and when it converts the
protocol-specific receive event into an IotResponseEvent event.
for twin requests and responses is handled inside IoTHubMQTTTranslationStage, when it converts
the RequestOperation to a protocol-specific send operation and when it converts the
protocol-specific receive event into a ResponseEvent event.
"""
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def map_twin_error(original_op, twin_op):
if twin_op.error:
original_op.error = twin_op.error
def _run_op(self, op):
def map_twin_error(error, twin_op):
if error:
return error
elif twin_op.status_code >= 300:
# TODO map error codes to correct exceptions
logger.error("Error {} received from twin operation".format(twin_op.status_code))
logger.error("response body: {}".format(twin_op.response_body))
original_op.error = Exception(
return exceptions.ServiceError(
"twin operation returned status {}".format(twin_op.status_code)
)
if isinstance(op, pipeline_ops_iothub.GetTwinOperation):
def on_twin_response(twin_op):
logger.debug("{}({}): Got response for GetTwinOperation".format(self.name, op.name))
map_twin_error(original_op=op, twin_op=twin_op)
if not twin_op.error:
op.twin = json.loads(twin_op.response_body.decode("utf-8"))
operation_flow.complete_op(self, op)
# Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_waiting_for_response = op
operation_flow.pass_op_to_next_stage(
self,
pipeline_ops_base.SendIotRequestAndWaitForResponseOperation(
def on_twin_response(op, error):
logger.debug("{}({}): Got response for GetTwinOperation".format(self.name, op.name))
error = map_twin_error(error=error, twin_op=op)
if not error:
op_waiting_for_response.twin = json.loads(op.response_body.decode("utf-8"))
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.TWIN,
method="GET",
resource_location="/",
request_body=" ",
callback=on_twin_response,
),
)
)
elif isinstance(op, pipeline_ops_iothub.PatchTwinReportedPropertiesOperation):
def on_twin_response(twin_op):
# Alias to avoid overload within the callback below
# CT-TODO: remove the need for this with better callback semantics
op_waiting_for_response = op
def on_twin_response(op, error):
logger.debug(
"{}({}): Got response for PatchTwinReportedPropertiesOperation operation".format(
self.name, op.name
)
)
map_twin_error(original_op=op, twin_op=twin_op)
operation_flow.complete_op(self, op)
error = map_twin_error(error=error, twin_op=op)
op_waiting_for_response.complete(error=error)
logger.debug(
"{}({}): Sending reported properties patch: {}".format(self.name, op.name, op.patch)
)
operation_flow.pass_op_to_next_stage(
self,
(
pipeline_ops_base.SendIotRequestAndWaitForResponseOperation(
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.TWIN,
method="PATCH",
resource_location="/properties/reported/",
request_body=json.dumps(op.patch),
callback=on_twin_response,
)
),
)
else:
operation_flow.pass_op_to_next_stage(self, op)
super(TwinRequestResponseStage, self)._run_op(op)
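The `map_twin_error` helper above turns a 3xx-or-higher twin status code into an error while passing real errors through. A self-contained sketch of that mapping, with a plain `Exception` standing in for `exceptions.ServiceError`:

```python
# Sketch of the status-code mapping in map_twin_error above, with a
# plain Exception standing in for the SDK's exceptions.ServiceError.

class TwinOpSketch(object):
    def __init__(self, status_code, response_body=b"{}"):
        self.status_code = status_code
        self.response_body = response_body

def map_twin_error(error, twin_op):
    if error:
        return error
    elif twin_op.status_code >= 300:
        return Exception(
            "twin operation returned status {}".format(twin_op.status_code)
        )
    return None  # 2xx responses map to no error

ok = map_twin_error(None, TwinOpSketch(200))
bad = map_twin_error(None, TwinOpSketch(404))
```

Returning (rather than raising) the error fits the pipeline's `(op, error)` callback convention: the caller passes the result straight to `op.complete(error=...)`.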

View file

@@ -0,0 +1,225 @@
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import json
import six.moves.urllib as urllib
from azure.iot.device.common.pipeline import (
pipeline_events_base,
pipeline_ops_base,
pipeline_ops_http,
PipelineStage,
pipeline_thread,
)
from . import pipeline_ops_iothub, pipeline_ops_iothub_http, http_path_iothub, http_map_error
from azure.iot.device import exceptions
from azure.iot.device import constant as pkg_constant
from azure.iot.device.product_info import ProductInfo
logger = logging.getLogger(__name__)
@pipeline_thread.runs_on_pipeline_thread
def map_http_error(error, http_op):
if error:
return error
elif http_op.status_code >= 300:
translated_error = http_map_error.translate_error(http_op.status_code, http_op.reason)
return exceptions.ServiceError(
"HTTP operation returned: {} {}".format(http_op.status_code, translated_error)
)
class IoTHubHTTPTranslationStage(PipelineStage):
"""
PipelineStage which converts other Iot and EdgeHub operations into HTTP operations. This stage also
converts http pipeline events into Iot and EdgeHub pipeline events.
"""
def __init__(self):
super(IoTHubHTTPTranslationStage, self).__init__()
self.device_id = None
self.module_id = None
self.hostname = None
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_iothub.SetIoTHubConnectionArgsOperation):
self.device_id = op.device_id
self.module_id = op.module_id
if op.gateway_hostname:
logger.debug(
"Gateway Hostname Present. Setting Hostname to: {}".format(op.gateway_hostname)
)
self.hostname = op.gateway_hostname
else:
logger.debug(
"Gateway Hostname not present. Setting Hostname to: {}".format(
op.hostname
)
)
self.hostname = op.hostname
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_http.SetHTTPConnectionArgsOperation,
hostname=self.hostname,
server_verification_cert=op.server_verification_cert,
client_cert=op.client_cert,
sas_token=op.sas_token,
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_iothub_http.MethodInvokeOperation):
logger.debug(
"{}({}): Translating Method Invoke Operation for HTTP.".format(self.name, op.name)
)
query_params = "api-version={apiVersion}".format(
apiVersion=pkg_constant.IOTHUB_API_VERSION
)
# if the target is a module.
body = json.dumps(op.method_params)
path = http_path_iothub.get_method_invoke_path(op.target_device_id, op.target_module_id)
# Note we do not add the sas Authorization header here. Instead we add it later on in the stage above
# the transport layer, since that stage stores the updated SAS and also X509 certs if that is what is
# being used.
x_ms_edge_string = "{deviceId}/{moduleId}".format(
deviceId=self.device_id, moduleId=self.module_id
) # these are the identifiers of the current module
user_agent = urllib.parse.quote_plus(
ProductInfo.get_iothub_user_agent()
+ str(self.pipeline_root.pipeline_configuration.product_info)
)
headers = {
"Host": self.hostname,
"Content-Type": "application/json",
"Content-Length": len(str(body)),
"x-ms-edge-moduleId": x_ms_edge_string,
"User-Agent": user_agent,
}
op_waiting_for_response = op
def on_request_response(op, error):
logger.debug(
"{}({}): Got response for MethodInvokeOperation".format(self.name, op.name)
)
error = map_http_error(error=error, http_op=op)
if not error:
op_waiting_for_response.method_response = json.loads(
op.response_body.decode("utf-8")
)
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_http.HTTPRequestAndResponseOperation(
method="POST",
path=path,
headers=headers,
body=body,
query_params=query_params,
callback=on_request_response,
)
)
elif isinstance(op, pipeline_ops_iothub_http.GetStorageInfoOperation):
logger.debug(
"{}({}): Translating Get Storage Info Operation to HTTP.".format(self.name, op.name)
)
query_params = "api-version={apiVersion}".format(
apiVersion=pkg_constant.IOTHUB_API_VERSION
)
path = http_path_iothub.get_storage_info_for_blob_path(self.device_id)
body = json.dumps({"blobName": op.blob_name})
user_agent = urllib.parse.quote_plus(
ProductInfo.get_iothub_user_agent()
+ str(self.pipeline_root.pipeline_configuration.product_info)
)
headers = {
"Host": self.hostname,
"Accept": "application/json",
"Content-Type": "application/json",
"Content-Length": len(str(body)),
"User-Agent": user_agent,
}
op_waiting_for_response = op
def on_request_response(op, error):
logger.debug(
"{}({}): Got response for GetStorageInfoOperation".format(self.name, op.name)
)
error = map_http_error(error=error, http_op=op)
if not error:
op_waiting_for_response.storage_info = json.loads(
op.response_body.decode("utf-8")
)
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_http.HTTPRequestAndResponseOperation(
method="POST",
path=path,
headers=headers,
body=body,
query_params=query_params,
callback=on_request_response,
)
)
elif isinstance(op, pipeline_ops_iothub_http.NotifyBlobUploadStatusOperation):
logger.debug(
"{}({}): Translating Notify Blob Upload Status Operation to HTTP.".format(self.name, op.name)
)
query_params = "api-version={apiVersion}".format(
apiVersion=pkg_constant.IOTHUB_API_VERSION
)
path = http_path_iothub.get_notify_blob_upload_status_path(self.device_id)
body = json.dumps(
{
"correlationId": op.correlation_id,
"isSuccess": op.is_success,
"statusCode": op.request_status_code,
"statusDescription": op.status_description,
}
)
user_agent = urllib.parse.quote_plus(
ProductInfo.get_iothub_user_agent()
+ str(self.pipeline_root.pipeline_configuration.product_info)
)
# Note we do not add the sas Authorization header here. Instead we add it later on in the stage above
# the transport layer, since that stage stores the updated SAS and also X509 certs if that is what is
# being used.
headers = {
"Host": self.hostname,
"Content-Type": "application/json; charset=utf-8",
"Content-Length": len(str(body)),
"User-Agent": user_agent,
}
op_waiting_for_response = op
def on_request_response(op, error):
logger.debug(
"{}({}): Got response for NotifyBlobUploadStatusOperation".format(self.name, op.name)
)
error = map_http_error(error=error, http_op=op)
op_waiting_for_response.complete(error=error)
self.send_op_down(
pipeline_ops_http.HTTPRequestAndResponseOperation(
method="POST",
path=path,
headers=headers,
body=body,
query_params=query_params,
callback=on_request_response,
)
)
else:
# All other operations get passed down
self.send_op_down(op)
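The notify-blob-upload-status branch above serializes the op's fields into a JSON body and sets the content headers from it. A small sketch of just that step; all values here are made up:

```python
import json

# Illustrative request body for the NotifyBlobUploadStatusOperation
# translation above; the values are made up, not real service data.
body = json.dumps(
    {
        "correlationId": "abc123",
        "isSuccess": True,
        "statusCode": 200,
        "statusDescription": "upload succeeded",
    }
)
headers = {
    "Content-Type": "application/json; charset=utf-8",
    "Content-Length": len(str(body)),
}
```

Since `body` is already a `str`, `len(str(body))` equals `len(body)`; the `str()` call in the stage is defensive.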

View file

@@ -13,29 +13,32 @@ from azure.iot.device.common.pipeline import (
pipeline_ops_mqtt,
pipeline_events_mqtt,
PipelineStage,
operation_flow,
pipeline_thread,
)
from azure.iot.device.iothub.models import Message, MethodRequest
from . import pipeline_ops_iothub, pipeline_events_iothub, mqtt_topic_iothub
from . import constant as pipeline_constant
from . import exceptions as pipeline_exceptions
from azure.iot.device import constant as pkg_constant
from azure.iot.device.product_info import ProductInfo
logger = logging.getLogger(__name__)
class IoTHubMQTTConverterStage(PipelineStage):
class IoTHubMQTTTranslationStage(PipelineStage):
"""
PipelineStage which converts other Iot and IoTHub operations into MQTT operations. This stage also
converts mqtt pipeline events into Iot and IoTHub pipeline events.
"""
def __init__(self):
super(IoTHubMQTTConverterStage, self).__init__()
super(IoTHubMQTTTranslationStage, self).__init__()
self.feature_to_topic = {}
self.device_id = None
self.module_id = None
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
if isinstance(op, pipeline_ops_iothub.SetIoTHubConnectionArgsOperation):
self.device_id = op.device_id
@@ -51,14 +54,21 @@ class IoTHubMQTTConverterStage(PipelineStage):
else:
client_id = op.device_id
# For MQTT, the entire user agent string should be appended to the username field in the connect packet
# For example, the username may look like this without custom parameters:
# yosephsandboxhub.azure-devices.net/alpha/?api-version=2018-06-30&DeviceClientType=py-azure-iot-device%2F2.0.0-preview.12
# The customer user agent string would simply be appended to the end of this username, in URL Encoded format.
query_param_seq = [
("api-version", pkg_constant.IOTHUB_API_VERSION),
("DeviceClientType", pkg_constant.USER_AGENT),
("DeviceClientType", ProductInfo.get_iothub_user_agent()),
]
username = "{hostname}/{client_id}/?{query_params}".format(
username = "{hostname}/{client_id}/?{query_params}{optional_product_info}".format(
hostname=op.hostname,
client_id=client_id,
query_params=urllib.parse.urlencode(query_param_seq),
optional_product_info=urllib.parse.quote(
str(self.pipeline_root.pipeline_configuration.product_info)
),
)
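The username format above can be reproduced in isolation. This sketch uses Python 3's `urllib.parse` directly instead of the `six.moves` shim, and all concrete values (hostname, client id, product info) are made up:

```python
import urllib.parse

# Reproducing the MQTT username format above with made-up values; the
# real ones come from the connection-args op and pipeline configuration.
hostname = "myhub.azure-devices.net"
client_id = "mydevice"
product_info = "myapp/1.0"

query_param_seq = [
    ("api-version", "2018-06-30"),
    ("DeviceClientType", "py-azure-iot-device/2.0.0"),
]
username = "{hostname}/{client_id}/?{query_params}{optional_product_info}".format(
    hostname=hostname,
    client_id=client_id,
    query_params=urllib.parse.urlencode(query_param_seq),
    optional_product_info=urllib.parse.quote(product_info),
)
```

Note the asymmetry: `urlencode` percent-encodes slashes in the query values (via `quote_plus`), while `quote` leaves `/` unescaped by default, so the appended product info keeps its slashes.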
if op.gateway_hostname:
@@ -67,91 +77,64 @@ class IoTHubMQTTConverterStage(PipelineStage):
hostname = op.hostname
# TODO: test to make sure client_cert and sas_token travel down correctly
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation(
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation,
client_id=client_id,
hostname=hostname,
username=username,
ca_cert=op.ca_cert,
server_verification_cert=op.server_verification_cert,
client_cert=op.client_cert,
sas_token=op.sas_token,
),
)
self.send_op_down(worker_op)
elif (
isinstance(op, pipeline_ops_base.UpdateSasTokenOperation)
and self.pipeline_root.connected
):
logger.debug(
"{}({}): Connected. Passing op down and reconnecting after token is updated.".format(
"{}({}): Connected. Passing op down and reauthorizing after token is updated.".format(
self.name, op.name
)
)
# make a callback that can call the user's callback after the reconnect is complete
def on_reconnect_complete(reconnect_op):
if reconnect_op.error:
op.error = reconnect_op.error
logger.error(
"{}({}) reconnection failed. returning error {}".format(
self.name, op.name, op.error
)
)
operation_flow.complete_op(stage=self, op=op)
else:
logger.debug(
"{}({}) reconnection succeeded. returning success.".format(
self.name, op.name
)
)
operation_flow.complete_op(stage=self, op=op)
# save the old user callback so we can call it later.
old_callback = op.callback
# make a callback that either fails the UpdateSasTokenOperation (if the lower level failed it),
# or issues a ReconnectOperation (if the lower level returned success for the UpdateSasTokenOperation)
def on_token_update_complete(op):
op.callback = old_callback
if op.error:
# or issues a ReauthorizeConnectionOperation (if the lower level returned success for the UpdateSasTokenOperation)
def on_token_update_complete(op, error):
if error:
logger.error(
"{}({}) token update failed. returning failure {}".format(
self.name, op.name, op.error
self.name, op.name, error
)
)
operation_flow.complete_op(stage=self, op=op)
else:
logger.debug(
"{}({}) token update succeeded. reconnecting".format(self.name, op.name)
"{}({}) token update succeeded. reauthorizing".format(self.name, op.name)
)
operation_flow.pass_op_to_next_stage(
stage=self,
op=pipeline_ops_base.ReconnectOperation(callback=on_reconnect_complete),
# Stop completion of Token Update op, and only continue upon completion of ReauthorizeConnectionOperation
op.halt_completion()
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_base.ReauthorizeConnectionOperation
)
logger.debug(
"{}({}): passing to next stage with updated callback.".format(
self.name, op.name
)
)
self.send_op_down(worker_op)
# now, pass the UpdateSasTokenOperation down with our new callback.
op.callback = on_token_update_complete
operation_flow.pass_op_to_next_stage(stage=self, op=op)
op.add_callback(on_token_update_complete)
self.send_op_down(op)
elif isinstance(op, pipeline_ops_iothub.SendD2CMessageOperation) or isinstance(
op, pipeline_ops_iothub.SendOutputEventOperation
):
# Convert SendD2CMessageOperation and SendOutputEventOperation operations into MQTT Publish operations
topic = mqtt_topic_iothub.encode_properties(op.message, self.telemetry_topic)
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(topic=topic, payload=op.message.data),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
topic=topic,
payload=op.message.data,
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_iothub.SendMethodResponseOperation):
# Sending a Method Response gets translated into an MQTT Publish operation
@@ -159,52 +142,48 @@ class IoTHubMQTTConverterStage(PipelineStage):
op.method_response.request_id, str(op.method_response.status)
)
payload = json.dumps(op.method_response.payload)
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(topic=topic, payload=payload),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation, topic=topic, payload=payload
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.EnableFeatureOperation):
# Enabling a feature gets translated into an MQTT subscribe operation
topic = self.feature_to_topic[op.feature_name]
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTSubscribeOperation(topic=topic),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTSubscribeOperation, topic=topic
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.DisableFeatureOperation):
# Disabling a feature gets turned into an MQTT unsubscribe operation
topic = self.feature_to_topic[op.feature_name]
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTUnsubscribeOperation(topic=topic),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTUnsubscribeOperation, topic=topic
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.SendIotRequestOperation):
elif isinstance(op, pipeline_ops_base.RequestOperation):
if op.request_type == pipeline_constant.TWIN:
topic = mqtt_topic_iothub.get_twin_topic_for_publish(
method=op.method,
resource_location=op.resource_location,
request_id=op.request_id,
)
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(
topic=topic, payload=op.request_body
),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
topic=topic,
payload=op.request_body,
)
self.send_op_down(worker_op)
else:
raise NotImplementedError(
"SendIotRequestOperation request_type {} not supported".format(op.request_type)
raise pipeline_exceptions.OperationError(
"RequestOperation request_type {} not supported".format(op.request_type)
)
else:
# All other operations get passed down
operation_flow.pass_op_to_next_stage(self, op)
super(IoTHubMQTTTranslationStage, self)._run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def _set_topic_names(self, device_id, module_id):
@@ -240,17 +219,13 @@ class IoTHubMQTTConverterStage(PipelineStage):
if mqtt_topic_iothub.is_c2d_topic(topic, self.device_id):
message = Message(event.payload)
mqtt_topic_iothub.extract_properties_from_topic(topic, message)
operation_flow.pass_event_to_previous_stage(
self, pipeline_events_iothub.C2DMessageEvent(message)
)
self.send_event_up(pipeline_events_iothub.C2DMessageEvent(message))
elif mqtt_topic_iothub.is_input_topic(topic, self.device_id, self.module_id):
message = Message(event.payload)
mqtt_topic_iothub.extract_properties_from_topic(topic, message)
input_name = mqtt_topic_iothub.get_input_name_from_topic(topic)
operation_flow.pass_event_to_previous_stage(
self, pipeline_events_iothub.InputMessageEvent(input_name, message)
)
self.send_event_up(pipeline_events_iothub.InputMessageEvent(input_name, message))
elif mqtt_topic_iothub.is_method_topic(topic):
request_id = mqtt_topic_iothub.get_method_request_id_from_topic(topic)
@@ -260,32 +235,28 @@ class IoTHubMQTTConverterStage(PipelineStage):
name=method_name,
payload=json.loads(event.payload.decode("utf-8")),
)
operation_flow.pass_event_to_previous_stage(
self, pipeline_events_iothub.MethodRequestEvent(method_received)
)
self.send_event_up(pipeline_events_iothub.MethodRequestEvent(method_received))
elif mqtt_topic_iothub.is_twin_response_topic(topic):
request_id = mqtt_topic_iothub.get_twin_request_id_from_topic(topic)
status_code = int(mqtt_topic_iothub.get_twin_status_code_from_topic(topic))
operation_flow.pass_event_to_previous_stage(
self,
pipeline_events_base.IotResponseEvent(
self.send_event_up(
pipeline_events_base.ResponseEvent(
request_id=request_id, status_code=status_code, response_body=event.payload
),
)
)
elif mqtt_topic_iothub.is_twin_desired_property_patch_topic(topic):
operation_flow.pass_event_to_previous_stage(
self,
self.send_event_up(
pipeline_events_iothub.TwinDesiredPropertiesPatchEvent(
patch=json.loads(event.payload.decode("utf-8"))
),
)
)
else:
logger.debug("Uunknown topic: {} passing up to next handler".format(topic))
operation_flow.pass_event_to_previous_stage(self, event)
logger.debug("Unknown topic: {} passing up to next handler".format(topic))
self.send_event_up(event)
else:
# all other messages get passed up
operation_flow.pass_event_to_previous_stage(self, event)
super(IoTHubMQTTTranslationStage, self)._handle_pipeline_event(event)

View file

@@ -15,13 +15,41 @@ from .abstract_clients import (
)
from .models import Message
from .inbox_manager import InboxManager
from .sync_inbox import SyncClientInbox
from .pipeline import constant
from .sync_inbox import SyncClientInbox, InboxEmpty
from .pipeline import constant as pipeline_constant
from .pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
from azure.iot.device.common.evented_callback import EventedCallback
from azure.iot.device.common.callable_weak_method import CallableWeakMethod
from azure.iot.device import constant as device_constant
logger = logging.getLogger(__name__)
def handle_result(callback):
try:
return callback.wait_for_completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except pipeline_exceptions.TlsExchangeAuthError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client due to TLS exchanges.", cause=e
)
except pipeline_exceptions.ProtocolProxyError as e:
raise exceptions.ClientError(
message="Error in the IoTHub client raised due to proxy connections.", cause=e
)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
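The error-translation pattern above can be sketched in a self-contained form: low-level pipeline errors are caught and re-raised as user-facing errors, chaining the original as the cause. The class names here are illustrative stand-ins, not the actual SDK types.

```python
# Stand-in for a low-level pipeline error (not the real SDK type).
class PipelineConnectionDroppedError(Exception):
    """Raised by the transport layer when the connection drops."""

# Stand-in for the user-facing error surfaced to the caller.
class ClientConnectionDroppedError(Exception):
    """User-facing error that carries the low-level cause."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause

def translate_errors(wait_for_completion):
    """Run the pipeline wait, translating low-level errors to client errors."""
    try:
        return wait_for_completion()
    except PipelineConnectionDroppedError as e:
        raise ClientConnectionDroppedError("Lost connection to IoTHub", cause=e)
```

The real `handle_result` does the same thing for each pipeline exception type, which keeps pipeline internals out of the public API surface.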
class GenericIoTHubClient(AbstractIoTHubClient):
"""A superclass representing a generic synchronous client.
This class needs to be extended for specific clients.
@ -33,8 +61,10 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
TODO: How to document kwargs?
Possible values: iothub_pipeline, edge_pipeline
:param iothub_pipeline: The IoTHubPipeline used for the client
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
:param http_pipeline: The HTTPPipeline used for the client
:type http_pipeline: :class:`azure.iot.device.iothub.pipeline.HTTPPipeline`
"""
# Depending on the subclass calling this __init__, there could be different arguments,
# and the super() call could call a different class, due to the different MROs
@ -42,10 +72,14 @@ class GenericIoTHubClient(AbstractIoTHubClient):
# **kwargs.
super(GenericIoTHubClient, self).__init__(**kwargs)
self._inbox_manager = InboxManager(inbox_type=SyncClientInbox)
self._iothub_pipeline.on_connected = self._on_connected
self._iothub_pipeline.on_disconnected = self._on_disconnected
self._iothub_pipeline.on_method_request_received = self._inbox_manager.route_method_request
self._iothub_pipeline.on_twin_patch_received = self._inbox_manager.route_twin_patch
self._iothub_pipeline.on_connected = CallableWeakMethod(self, "_on_connected")
self._iothub_pipeline.on_disconnected = CallableWeakMethod(self, "_on_disconnected")
self._iothub_pipeline.on_method_request_received = CallableWeakMethod(
self._inbox_manager, "route_method_request"
)
self._iothub_pipeline.on_twin_patch_received = CallableWeakMethod(
self._inbox_manager, "route_twin_patch"
)
def _on_connected(self):
"""Helper handler that is called upon an iothub pipeline connect"""
@ -65,12 +99,21 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This is a synchronous call, meaning that this function will not return until the connection
to the service has been completely established.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Connecting to Hub...")
callback = EventedCallback()
self._iothub_pipeline.connect(callback=callback)
callback.wait_for_completion()
handle_result(callback)
logger.info("Successfully connected to Hub")
@ -79,12 +122,15 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This is a synchronous call, meaning that this function will not return until the connection
to the service has been completely closed.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Disconnecting from Hub...")
callback = EventedCallback()
self._iothub_pipeline.disconnect(callback=callback)
callback.wait_for_completion()
handle_result(callback)
logger.info("Successfully disconnected from Hub")
@ -99,15 +145,29 @@ class GenericIoTHubClient(AbstractIoTHubClient):
:param message: The actual message to send. Anything passed that is not an instance of the
Message class will be converted to Message object.
:type message: :class:`azure.iot.device.Message` or str
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
"""
if not isinstance(message, Message):
message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of telemetry message can not exceed 256 KB.")
logger.info("Sending message to Hub...")
callback = EventedCallback()
self._iothub_pipeline.send_message(message, callback=callback)
callback.wait_for_completion()
handle_result(callback)
logger.info("Successfully sent message to Hub")
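The size check performed by `send_message` can be sketched as follows. The limit value and the helper name are assumptions for illustration; the real constant lives in `azure.iot.device.constant`.

```python
# Assumed 256 KB telemetry size limit (the real constant is defined
# in azure.iot.device.constant as TELEMETRY_MESSAGE_SIZE_LIMIT).
TELEMETRY_MESSAGE_SIZE_LIMIT = 256 * 1024

def validate_telemetry_size(payload):
    """Raise ValueError if the UTF-8 encoded payload exceeds the limit."""
    size = len(payload.encode("utf-8"))
    if size > TELEMETRY_MESSAGE_SIZE_LIMIT:
        raise ValueError("Size of telemetry message can not exceed 256 KB.")
    return size
```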
@ -118,21 +178,21 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If this parameter is not given, all methods not already being specifically targeted by
a different request to receive_method will be received.
:param bool block: Indicates if the operation should block until a request is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out.
:raises: InboxEmpty if timeout occurs on a blocking operation.
:raises: InboxEmpty if no request is available on a non-blocking operation.
:returns: MethodRequest object representing the received method request.
:returns: MethodRequest object representing the received method request, or None if
no method request has been received by the end of the blocking period.
"""
if not self._iothub_pipeline.feature_enabled[constant.METHODS]:
self._enable_feature(constant.METHODS)
if not self._iothub_pipeline.feature_enabled[pipeline_constant.METHODS]:
self._enable_feature(pipeline_constant.METHODS)
method_inbox = self._inbox_manager.get_method_request_inbox(method_name)
logger.info("Waiting for method request...")
try:
method_request = method_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
method_request = None
logger.info("Received method request")
return method_request
@ -146,13 +206,22 @@ class GenericIoTHubClient(AbstractIoTHubClient):
function will open the connection before sending the event.
:param method_response: The MethodResponse to send.
:type method_response: MethodResponse
:type method_response: :class:`azure.iot.device.MethodResponse`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Sending method response to Hub...")
callback = EventedCallback()
self._iothub_pipeline.send_method_response(method_response, callback=callback)
callback.wait_for_completion()
handle_result(callback)
logger.info("Successfully sent method response to Hub")
@ -180,14 +249,24 @@ class GenericIoTHubClient(AbstractIoTHubClient):
This is a synchronous call, meaning that this function will not return until the twin
has been retrieved from the service.
:returns: Twin object which was retrieved from the hub
:returns: Complete Twin as a JSON dict
:rtype: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
if not self._iothub_pipeline.feature_enabled[constant.TWIN]:
self._enable_feature(constant.TWIN)
if not self._iothub_pipeline.feature_enabled[pipeline_constant.TWIN]:
self._enable_feature(pipeline_constant.TWIN)
callback = EventedCallback(return_arg_name="twin")
self._iothub_pipeline.get_twin(callback=callback)
twin = callback.wait_for_completion()
twin = handle_result(callback)
logger.info("Successfully retrieved twin")
return twin
@ -202,17 +281,26 @@ class GenericIoTHubClient(AbstractIoTHubClient):
If the service returns an error on the patch operation, this function will raise the
appropriate error.
:param reported_properties_patch:
:type reported_properties_patch: dict, str, int, float, bool, or None (JSON compatible values)
:param reported_properties_patch: Twin Reported Properties patch as a JSON dict
:type reported_properties_patch: dict
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
if not self._iothub_pipeline.feature_enabled[constant.TWIN]:
self._enable_feature(constant.TWIN)
if not self._iothub_pipeline.feature_enabled[pipeline_constant.TWIN]:
self._enable_feature(pipeline_constant.TWIN)
callback = EventedCallback()
self._iothub_pipeline.patch_twin_reported_properties(
patch=reported_properties_patch, callback=callback
)
callback.wait_for_completion()
handle_result(callback)
logger.info("Successfully patched twin")
@ -231,20 +319,21 @@ class GenericIoTHubClient(AbstractIoTHubClient):
an InboxEmpty exception
:param bool block: Indicates if the operation should block until a request is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out.
:raises: InboxEmpty if timeout occurs on a blocking operation.
:raises: InboxEmpty if no request is available on a non-blocking operation.
:returns: desired property patch. This can be dict, str, int, float, bool, or None (JSON compatible values)
:returns: Twin Desired Properties patch as a JSON dict, or None if no patch has been
received by the end of the blocking period
:rtype: dict or None
"""
if not self._iothub_pipeline.feature_enabled[constant.TWIN_PATCHES]:
self._enable_feature(constant.TWIN_PATCHES)
if not self._iothub_pipeline.feature_enabled[pipeline_constant.TWIN_PATCHES]:
self._enable_feature(pipeline_constant.TWIN_PATCHES)
twin_patch_inbox = self._inbox_manager.get_twin_patch_inbox()
logger.info("Waiting for twin patches...")
try:
patch = twin_patch_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
return None
logger.info("twin patch received")
return patch
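The `receive_*` methods above all follow the same inbox "get or None" pattern: a (possibly blocking) get that returns `None` instead of raising when nothing arrives before the timeout. A minimal sketch, with `queue.Queue` standing in for the SDK's inbox type:

```python
import queue

def get_or_none(inbox, block=True, timeout=None):
    """Blocking get that swallows the empty-inbox exception into None."""
    try:
        return inbox.get(block=block, timeout=timeout)
    except queue.Empty:
        return None
```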
@ -255,39 +344,78 @@ class IoTHubDeviceClient(GenericIoTHubClient, AbstractIoTHubDeviceClient):
Intended for usage with Python 2.7 or compatibility scenarios for Python 3.5.3+.
"""
def __init__(self, iothub_pipeline):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a IoTHubDeviceClient.
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
"""
super(IoTHubDeviceClient, self).__init__(iothub_pipeline=iothub_pipeline)
self._iothub_pipeline.on_c2d_message_received = self._inbox_manager.route_c2d_message
super(IoTHubDeviceClient, self).__init__(
iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline
)
self._iothub_pipeline.on_c2d_message_received = CallableWeakMethod(
self._inbox_manager, "route_c2d_message"
)
def receive_message(self, block=True, timeout=None):
"""Receive a message that has been sent from the Azure IoT Hub.
:param bool block: Indicates if the operation should block until a message is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out.
:raises: InboxEmpty if timeout occurs on a blocking operation.
:raises: InboxEmpty if no message is available on a non-blocking operation.
:returns: Message that was sent from the Azure IoT Hub.
:returns: Message that was sent from the Azure IoT Hub, or None if
no message has been received by the end of the blocking period.
:rtype: :class:`azure.iot.device.Message` or None
"""
if not self._iothub_pipeline.feature_enabled[constant.C2D_MSG]:
self._enable_feature(constant.C2D_MSG)
if not self._iothub_pipeline.feature_enabled[pipeline_constant.C2D_MSG]:
self._enable_feature(pipeline_constant.C2D_MSG)
c2d_inbox = self._inbox_manager.get_c2d_message_inbox()
logger.info("Waiting for message from Hub...")
try:
message = c2d_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
message = None
logger.info("Message received")
return message
def get_storage_info_for_blob(self, blob_name):
"""Sends a POST request over HTTP to an IoTHub endpoint that will return information for uploading via the Azure Storage Account linked to the IoTHub your device is connected to.
:param str blob_name: The name of the blob that will be uploaded using the storage API. This name will be used to generate the proper credentials for Storage, and needs to match what will be used with the Azure Storage SDK to perform the blob upload.
:returns: A JSON-like (dictionary) object from IoT Hub that will contain relevant information including: correlationId, hostName, containerName, blobName, sasToken.
"""
callback = EventedCallback(return_arg_name="storage_info")
self._http_pipeline.get_storage_info_for_blob(blob_name, callback=callback)
storage_info = handle_result(callback)
logger.info("Successfully retrieved storage_info")
return storage_info
def notify_blob_upload_status(
self, correlation_id, is_success, status_code, status_description
):
"""When the upload is complete, the device sends a POST request to the IoT Hub endpoint with information on the status of the blob upload attempt. This is used by IoT Hub to notify listening clients.
:param str correlation_id: Provided by IoT Hub on get_storage_info_for_blob request.
:param bool is_success: A boolean that indicates whether the file was uploaded successfully.
:param int status_code: A numeric status code that is the status for the upload of the file to storage.
:param str status_description: A description that corresponds to the status_code.
"""
callback = EventedCallback()
self._http_pipeline.notify_blob_upload_status(
correlation_id=correlation_id,
is_success=is_success,
status_code=status_code,
status_description=status_description,
callback=callback,
)
handle_result(callback)
logger.info("Successfully notified blob upload status")
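Together, these two methods support a three-step upload-to-blob flow: fetch storage info, upload with the Azure Storage SDK, then notify IoT Hub of the outcome. A hedged sketch with all three operations injected as plain callables standing in for the real client and Storage SDK calls:

```python
def upload_via_storage(get_storage_info, do_upload, notify_status, blob_name):
    """Orchestrate the upload flow; returns True on success, False on failure."""
    info = get_storage_info(blob_name)  # stand-in for get_storage_info_for_blob
    try:
        do_upload(info)  # stand-in for the Azure Storage SDK upload call
        notify_status(info["correlationId"], True, 200, "upload succeeded")
        return True
    except Exception as e:
        notify_status(info["correlationId"], False, 500, str(e))
        return False
```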
class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
"""A synchronous module client that connects to an Azure IoT Hub or Azure IoT Edge instance.
@ -295,21 +423,23 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
Intended for usage with Python 2.7 or compatibility scenarios for Python 3.5.3+.
"""
def __init__(self, iothub_pipeline, edge_pipeline=None):
def __init__(self, iothub_pipeline, http_pipeline):
"""Initializer for a IoTHubModuleClient.
This initializer should not be called directly.
Instead, use one of the 'create_from_' classmethods to instantiate
:param iothub_pipeline: The pipeline used to connect to the IoTHub endpoint.
:type iothub_pipeline: IoTHubPipeline
:param edge_pipeline: (OPTIONAL) The pipeline used to connect to the Edge endpoint.
:type edge_pipeline: EdgePipeline
:type iothub_pipeline: :class:`azure.iot.device.iothub.pipeline.IoTHubPipeline`
:param http_pipeline: The pipeline used to connect to the IoTHub endpoint via HTTP.
:type http_pipeline: :class:`azure.iot.device.iothub.pipeline.HTTPPipeline`
"""
super(IoTHubModuleClient, self).__init__(
iothub_pipeline=iothub_pipeline, edge_pipeline=edge_pipeline
iothub_pipeline=iothub_pipeline, http_pipeline=http_pipeline
)
self._iothub_pipeline.on_input_message_received = CallableWeakMethod(
self._inbox_manager, "route_input_message"
)
self._iothub_pipeline.on_input_message_received = self._inbox_manager.route_input_message
def send_message_to_output(self, message, output_name):
"""Sends an event/message to the given module output.
@ -322,19 +452,34 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
If the connection to the service has not previously been opened by a call to connect, this
function will open the connection before sending the event.
:param message: message to send to the given output. Anything passed that is not an instance of the
:param message: Message to send to the given output. Anything passed that is not an instance of the
Message class will be converted to Message object.
:param output_name: Name of the output to send the event to.
:type message: :class:`azure.iot.device.Message` or str
:param str output_name: Name of the output to send the event to.
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
:raises: ValueError if the message fails size validation.
"""
if not isinstance(message, Message):
message = Message(message)
if message.get_size() > device_constant.TELEMETRY_MESSAGE_SIZE_LIMIT:
raise ValueError("Size of message can not exceed 256 KB.")
message.output_name = output_name
logger.info("Sending message to output: " + output_name + "...")
callback = EventedCallback()
self._iothub_pipeline.send_output_event(message, callback=callback)
callback.wait_for_completion()
handle_result(callback)
logger.info("Successfully sent message to output: " + output_name)
@ -343,19 +488,37 @@ class IoTHubModuleClient(GenericIoTHubClient, AbstractIoTHubModuleClient):
:param str input_name: The input name to receive a message on.
:param bool block: Indicates if the operation should block until a message is received.
Default True.
:param int timeout: Optionally provide a number of seconds until blocking times out.
:raises: InboxEmpty if timeout occurs on a blocking operation.
:raises: InboxEmpty if no message is available on a non-blocking operation.
:returns: Message that was sent to the specified input.
:returns: Message that was sent to the specified input, or None if
no message has been received by the end of the blocking period.
"""
if not self._iothub_pipeline.feature_enabled[constant.INPUT_MSG]:
self._enable_feature(constant.INPUT_MSG)
if not self._iothub_pipeline.feature_enabled[pipeline_constant.INPUT_MSG]:
self._enable_feature(pipeline_constant.INPUT_MSG)
input_inbox = self._inbox_manager.get_input_message_inbox(input_name)
logger.info("Waiting for input message on: " + input_name + "...")
try:
message = input_inbox.get(block=block, timeout=timeout)
except InboxEmpty:
message = None
logger.info("Input message received on: " + input_name)
return message
def invoke_method(self, method_params, device_id, module_id=None):
"""Invoke a method from your client onto a device or module client, and receive the response to the method call.
:param dict method_params: Should contain a method_name, payload, connect_timeout_in_seconds, response_timeout_in_seconds.
:param str device_id: Device ID of the target device where the method will be invoked.
:param str module_id: Module ID of the target module where the method will be invoked. (Optional)
:returns: method_result containing a status and a payload
:rtype: dict
"""
callback = EventedCallback(return_arg_name="invoke_method_response")
self._http_pipeline.invoke_method(
device_id, method_params, callback=callback, module_id=module_id
)
invoke_method_response = handle_result(callback)
logger.info("Successfully invoked method")
return invoke_method_response
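An illustrative helper that builds the `method_params` dict `invoke_method` expects, using the field names listed in the docstring above; the exact key names are an assumption for this sketch, not a confirmed wire format.

```python
def make_method_params(method_name, payload,
                       connect_timeout_in_seconds=30,
                       response_timeout_in_seconds=30):
    """Assemble the method_params dict described in the invoke_method docstring."""
    return {
        "method_name": method_name,
        "payload": payload,
        "connect_timeout_in_seconds": connect_timeout_in_seconds,
        "response_timeout_in_seconds": response_timeout_in_seconds,
    }
```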


@ -0,0 +1,50 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import platform
from azure.iot.device.constant import VERSION, IOTHUB_IDENTIFIER, PROVISIONING_IDENTIFIER
python_runtime = platform.python_version()
os_type = platform.system()
os_release = platform.version()
architecture = platform.machine()
class ProductInfo(object):
"""
A class for creating product identifiers or agent strings for IotHub as well as Provisioning.
"""
@staticmethod
def _get_common_user_agent():
return "({python_runtime};{os_type} {os_release};{architecture})".format(
python_runtime=python_runtime,
os_type=os_type,
os_release=os_release,
architecture=architecture,
)
@staticmethod
def get_iothub_user_agent():
"""
Create the user agent for IotHub
"""
return "{iothub_iden}/{version}{common}".format(
iothub_iden=IOTHUB_IDENTIFIER,
version=VERSION,
common=ProductInfo._get_common_user_agent(),
)
@staticmethod
def get_provisioning_user_agent():
"""
Create the user agent for Provisioning
"""
return "{provisioning_iden}/{version}{common}".format(
provisioning_iden=PROVISIONING_IDENTIFIER,
version=VERSION,
common=ProductInfo._get_common_user_agent(),
)
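The user-agent format produced by `ProductInfo` can be reproduced end to end with stand-in identifier and version values (the real ones come from `azure.iot.device.constant`):

```python
import platform

def build_user_agent(identifier, version):
    """Build '<identifier>/<version>(<runtime>;<os> <release>;<arch>)'."""
    common = "({python_runtime};{os_type} {os_release};{architecture})".format(
        python_runtime=platform.python_version(),
        os_type=platform.system(),
        os_release=platform.version(),
        architecture=platform.machine(),
    )
    return "{identifier}/{version}{common}".format(
        identifier=identifier, version=version, common=common
    )
```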


@ -11,13 +11,22 @@ Device Provisioning Service.
import abc
import six
import logging
from .security.sk_security_client import SymmetricKeySecurityClient
from .security.x509_security_client import X509SecurityClient
from azure.iot.device.provisioning.pipeline.provisioning_pipeline import ProvisioningPipeline
from azure.iot.device.provisioning import pipeline, security
logger = logging.getLogger(__name__)
def _validate_kwargs(**kwargs):
"""Helper function to validate user provided kwargs.
Raises TypeError if an invalid option has been provided"""
# TODO: add support for server_verification_cert
valid_kwargs = ["websockets", "cipher"]
for kwarg in kwargs:
if kwarg not in valid_kwargs:
raise TypeError("Got an unexpected keyword argument '{}'".format(kwarg))
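A self-contained copy of the kwarg-validation helper above, exercised with the same two supported options (`websockets` and `cipher`):

```python
def validate_kwargs(**kwargs):
    """Reject any keyword argument outside the supported configuration options."""
    valid_kwargs = ["websockets", "cipher"]
    for kwarg in kwargs:
        if kwarg not in valid_kwargs:
            raise TypeError("Got an unexpected keyword argument '{}'".format(kwarg))
```

Raising `TypeError` here mirrors what Python itself raises for an unexpected keyword argument, so callers see a familiar error shape.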
@six.add_metaclass(abc.ABCMeta)
class AbstractProvisioningDeviceClient(object):
"""
@ -27,80 +36,110 @@ class AbstractProvisioningDeviceClient(object):
def __init__(self, provisioning_pipeline):
"""
Initializes the provisioning client.
NOTE: This initializer should not be called directly.
Instead, the class methods that start with `create_from_` should be used to create a
client object.
:param provisioning_pipeline: Instance of the provisioning pipeline object.
:type provisioning_pipeline: :class:`azure.iot.device.provisioning.pipeline.ProvisioningPipeline`
"""
self._provisioning_pipeline = provisioning_pipeline
self._provisioning_payload = None
@classmethod
def create_from_symmetric_key(
cls, provisioning_host, registration_id, id_scope, symmetric_key, protocol_choice=None
cls, provisioning_host, registration_id, id_scope, symmetric_key, **kwargs
):
"""
Create a client which can be used to run the registration of a device with provisioning service
using Symmetric Key authentication.
:param provisioning_host: Host running the Device Provisioning Service. Can be found in the Azure portal in the
Overview tab as the string Global device endpoint
:param registration_id: The registration ID is used to uniquely identify a device in the Device Provisioning Service.
The registration ID is alphanumeric, lowercase string and may contain hyphens.
:param id_scope: The ID scope is used to uniquely identify the specific provisioning service the device will
register through. The ID scope is assigned to a Device Provisioning Service when it is created by the user and
is generated by the service and is immutable, guaranteeing uniqueness.
:param symmetric_key: The key which will be used to create the shared access signature token to authenticate
the device with the Device Provisioning Service. By default, the Device Provisioning Service creates
new symmetric keys with a default length of 32 bytes when new enrollments are saved with the Auto-generate keys
option enabled. Users can provide their own symmetric keys for enrollments by disabling this option within
16 bytes and 64 bytes and in valid Base64 format.
:param protocol_choice: The choice for the protocol to be used. This is optional and will default to protocol MQTT currently.
:return: A ProvisioningDeviceClient which can register via Symmetric Key.
:param str provisioning_host: Host running the Device Provisioning Service.
Can be found in the Azure portal in the Overview tab as the string Global device endpoint.
:param str registration_id: The registration ID used to uniquely identify a device in the
Device Provisioning Service. The registration ID is an alphanumeric, lowercase string
and may contain hyphens.
:param str id_scope: The ID scope used to uniquely identify the specific provisioning
service the device will register through. The ID scope is assigned to a
Device Provisioning Service when it is created by the user and is generated by the
service and is immutable, guaranteeing uniqueness.
:param str symmetric_key: The key which will be used to create the shared access signature
token to authenticate the device with the Device Provisioning Service. By default,
the Device Provisioning Service creates new symmetric keys with a default length of
32 bytes when new enrollments are saved with the Auto-generate keys option enabled.
Users can provide their own symmetric keys for enrollments by disabling this option,
provided the key is between 16 bytes and 64 bytes long and in valid Base64 format.
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:raises: TypeError if given an unrecognized parameter.
:returns: A ProvisioningDeviceClient instance which can register via Symmetric Key.
"""
if protocol_choice is not None:
protocol_name = protocol_choice.lower()
else:
protocol_name = "mqtt"
if protocol_name == "mqtt":
security_client = SymmetricKeySecurityClient(
provisioning_host, registration_id, id_scope, symmetric_key
_validate_kwargs(**kwargs)
security_client = security.SymmetricKeySecurityClient(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
pipeline_configuration = pipeline.ProvisioningPipelineConfig(**kwargs)
mqtt_provisioning_pipeline = pipeline.ProvisioningPipeline(
security_client, pipeline_configuration
)
mqtt_provisioning_pipeline = ProvisioningPipeline(security_client)
return cls(mqtt_provisioning_pipeline)
else:
raise NotImplementedError(
"A symmetric key can only create symmetric key security client which is compatible "
"only with MQTT protocol.Any other protocol has not been implemented."
)
@classmethod
def create_from_x509_certificate(
cls, provisioning_host, registration_id, id_scope, x509, protocol_choice=None
cls, provisioning_host, registration_id, id_scope, x509, **kwargs
):
"""
Create a client which can be used to run the registration of a device with provisioning service
using X509 certificate authentication.
:param provisioning_host: Host running the Device Provisioning Service. Can be found in the Azure portal in the
Overview tab as the string Global device endpoint
:param registration_id: The registration ID is used to uniquely identify a device in the Device Provisioning Service.
The registration ID is alphanumeric, lowercase string and may contain hyphens.
:param id_scope: The ID scope is used to uniquely identify the specific provisioning service the device will
register through. The ID scope is assigned to a Device Provisioning Service when it is created by the user and
is generated by the service and is immutable, guaranteeing uniqueness.
:param x509: The x509 certificate, To use the certificate the enrollment object needs to contain cert (either the root certificate or one of the intermediate CA certificates).
Create a client which can be used to run the registration of a device with
provisioning service using X509 certificate authentication.
:param str provisioning_host: Host running the Device Provisioning Service. Can be found in
the Azure portal in the Overview tab as the string Global device endpoint.
:param str registration_id: The registration ID used to uniquely identify a device in the
Device Provisioning Service. The registration ID is an alphanumeric, lowercase string
and may contain hyphens.
:param str id_scope: The ID scope is used to uniquely identify the specific
provisioning service the device will register through. The ID scope is assigned to a
Device Provisioning Service when it is created by the user and is generated by the
service and is immutable, guaranteeing uniqueness.
:param x509: The x509 certificate. To use the certificate the enrollment object needs to
contain cert (either the root certificate or one of the intermediate CA certificates).
If the cert comes from a CER file, it needs to be base64 encoded.
:param protocol_choice: The choice for the protocol to be used. This is optional and will default to protocol MQTT currently.
:return: A ProvisioningDeviceClient which can register via Symmetric Key.
:type x509: :class:`azure.iot.device.X509`
:param bool websockets: Configuration Option. Default is False. Set to true if using MQTT
over websockets.
:param cipher: Configuration Option. Cipher suite(s) for TLS/SSL, as a string in
"OpenSSL cipher list format" or as a list of cipher suite strings.
:type cipher: str or list(str)
:raises: TypeError if given an unrecognized parameter.
:returns: A ProvisioningDeviceClient which can register via X509 certificate.
"""
if protocol_choice is None:
protocol_name = "mqtt"
else:
protocol_name = protocol_choice.lower()
if protocol_name == "mqtt":
security_client = X509SecurityClient(provisioning_host, registration_id, id_scope, x509)
mqtt_provisioning_pipeline = ProvisioningPipeline(security_client)
return cls(mqtt_provisioning_pipeline)
else:
raise NotImplementedError(
"A x509 certificate can only create x509 security client which is compatible only "
"with MQTT protocol.Any other protocol has not been implemented."
_validate_kwargs(**kwargs)
security_client = security.X509SecurityClient(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
x509=x509,
)
pipeline_configuration = pipeline.ProvisioningPipelineConfig(**kwargs)
mqtt_provisioning_pipeline = pipeline.ProvisioningPipeline(
security_client, pipeline_configuration
)
return cls(mqtt_provisioning_pipeline)
@abc.abstractmethod
def register(self):
@ -109,12 +148,19 @@ class AbstractProvisioningDeviceClient(object):
"""
pass
@abc.abstractmethod
def cancel(self):
@property
def provisioning_payload(self):
return self._provisioning_payload
@provisioning_payload.setter
def provisioning_payload(self, provisioning_payload):
"""
Cancel an in progress registration of the device with the Device Provisioning Service.
Set the payload that will form the request payload in a registration request.
:param provisioning_payload: The payload that can be supplied by the user.
:type provisioning_payload: This can be an object, dictionary, string or integer.
"""
pass
self._provisioning_payload = provisioning_payload
def log_on_register_complete(result=None):


@ -3,8 +3,10 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module contains user-facing asynchronous clients for the
Azure Provisioning Device SDK for Python.
"""
This module contains user-facing asynchronous Provisioning Device Client for Azure Provisioning
Device SDK. This client uses Symmetric Key and X509 authentication to register devices with an
IoT Hub via the Device Provisioning Service.
"""
import logging
@ -15,55 +17,77 @@ from azure.iot.device.provisioning.abstract_provisioning_device_client import (
from azure.iot.device.provisioning.abstract_provisioning_device_client import (
log_on_register_complete,
)
from azure.iot.device.provisioning.internal.polling_machine import PollingMachine
from azure.iot.device.provisioning.pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
from azure.iot.device.provisioning.pipeline import constant as dps_constant
logger = logging.getLogger(__name__)
async def handle_result(callback):
try:
return await callback.completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
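`handle_result` above follows a common layering pattern: low-level pipeline errors are caught and re-raised as user-facing exceptions, with the original attached as a cause for diagnostics. A minimal synchronous sketch of the same idea; the exception names here are illustrative, not the SDK's:

```python
class PipelineError(Exception):
    """Stand-in for a low-level transport/pipeline error."""
    pass

class ClientError(Exception):
    """Stand-in for a user-facing SDK error that keeps its cause."""
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause  # keep the low-level error for diagnostics

def run_translated(fn):
    # Run an operation, translating pipeline errors into client errors
    try:
        return fn()
    except PipelineError as e:
        raise ClientError("Error in the client", cause=e)

def failing_operation():
    raise PipelineError("socket closed")

try:
    run_translated(failing_operation)
except ClientError as e:
    print(e, "<-", repr(e.cause))
```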
class ProvisioningDeviceClient(AbstractProvisioningDeviceClient):
"""
Client which can be used to run the registration of a device with provisioning service
using Symmetric Key authentication.
using Symmetric Key or X509 authentication.
"""
def __init__(self, provisioning_pipeline):
"""
Initializer for the Provisioning Client.
NOTE: This initializer should not be called directly.
Instead, the class method `create_from_security_client` should be used to create a client object.
:param provisioning_pipeline: The protocol pipeline for provisioning. As of now this only supports MQTT.
"""
super(ProvisioningDeviceClient, self).__init__(provisioning_pipeline)
self._polling_machine = PollingMachine(provisioning_pipeline)
async def register(self):
"""
Register the device with the provisioning service.
Before returning the client will also disconnect from the provisioning service.
If a registration attempt is made while a previous registration is in progress it may throw an error.
If a registration attempt is made while a previous registration is in progress it may
throw an error.
:returns: RegistrationResult indicating the result of the registration.
:rtype: :class:`azure.iot.device.RegistrationResult`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Registering with Provisioning Service...")
register_async = async_adapter.emulate_async(self._polling_machine.register)
callback = async_adapter.AwaitableCallback(return_arg_name="result")
await register_async(callback=callback)
result = await callback.completion()
if not self._provisioning_pipeline.responses_enabled[dps_constant.REGISTER]:
await self._enable_responses()
register_async = async_adapter.emulate_async(self._provisioning_pipeline.register)
register_complete = async_adapter.AwaitableCallback(return_arg_name="result")
await register_async(payload=self._provisioning_payload, callback=register_complete)
result = await handle_result(register_complete)
log_on_register_complete(result)
return result
async def cancel(self):
async def _enable_responses(self):
"""Enable to receive responses from Device Provisioning Service.
"""
Before returning the client will also disconnect from the provisioning service.
logger.info("Enabling reception of response from Device Provisioning Service...")
subscribe_async = async_adapter.emulate_async(self._provisioning_pipeline.enable_responses)
In case there is no registration in process it will throw an error as there is
no registration process to cancel.
"""
logger.info("Disconnecting from Provisioning Service...")
cancel_async = async_adapter.emulate_async(self._polling_machine.cancel)
subscription_complete = async_adapter.AwaitableCallback()
await subscribe_async(callback=subscription_complete)
await handle_result(subscription_complete)
callback = async_adapter.AwaitableCallback()
await cancel_async(callback=callback)
await callback.completion()
logger.info("Successfully cancelled the current registration process")
logger.info("Successfully subscribed to Device Provisioning Service to receive responses")

@ -1,4 +0,0 @@
"""Azure Provisioning Device Internal
This package provides internal classes for use within the Azure Provisioning Device SDK.
"""

@ -1,450 +0,0 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
import uuid
import json
import traceback
from threading import Timer
from transitions import Machine
from azure.iot.device.provisioning.pipeline import constant
import six.moves.urllib as urllib
from .request_response_provider import RequestResponseProvider
from azure.iot.device.provisioning.models.registration_result import (
RegistrationResult,
RegistrationState,
)
from .registration_query_status_result import RegistrationQueryStatusResult
logger = logging.getLogger(__name__)
POS_STATUS_CODE_IN_TOPIC = 3
POS_QUERY_PARAM_PORTION = 2
class PollingMachine(object):
"""
Class that is responsible for sending the initial registration request and polling the
registration operation for status updates.
"""
def __init__(self, provisioning_pipeline):
"""
:param provisioning_pipeline: The pipeline for provisioning.
"""
self._polling_timer = None
self._query_timer = None
self._register_callback = None
self._cancel_callback = None
self._registration_error = None
self._registration_result = None
self._operations = {}
self._request_response_provider = RequestResponseProvider(provisioning_pipeline)
states = [
"disconnected",
"initializing",
"registering",
"waiting_to_poll",
"polling",
"completed",
"error",
"cancelling",
]
transitions = [
{
"trigger": "_trig_register",
"source": "disconnected",
"before": "_initialize_register",
"dest": "initializing",
},
{
"trigger": "_trig_register",
"source": "error",
"before": "_initialize_register",
"dest": "initializing",
},
{"trigger": "_trig_register", "source": "registering", "dest": None},
{
"trigger": "_trig_send_register_request",
"source": "initializing",
"before": "_send_register_request",
"dest": "registering",
},
{
"trigger": "_trig_send_register_request",
"source": "waiting_to_poll",
"before": "_send_register_request",
"dest": "registering",
},
{
"trigger": "_trig_wait",
"source": "registering",
"dest": "waiting_to_poll",
"after": "_wait_for_interval",
},
{"trigger": "_trig_wait", "source": "cancelling", "dest": None},
{
"trigger": "_trig_wait",
"source": "polling",
"dest": "waiting_to_poll",
"after": "_wait_for_interval",
},
{
"trigger": "_trig_poll",
"source": "waiting_to_poll",
"dest": "polling",
"after": "_query_operation_status",
},
{"trigger": "_trig_poll", "source": "cancelling", "dest": None},
{
"trigger": "_trig_complete",
"source": ["registering", "waiting_to_poll", "polling"],
"dest": "completed",
"after": "_call_complete",
},
{
"trigger": "_trig_error",
"source": ["registering", "waiting_to_poll", "polling"],
"dest": "error",
"after": "_call_error",
},
{"trigger": "_trig_error", "source": "cancelling", "dest": None},
{
"trigger": "_trig_cancel",
"source": ["disconnected", "completed"],
"dest": None,
"after": "_inform_no_process",
},
{
"trigger": "_trig_cancel",
"source": ["initializing", "registering", "waiting_to_poll", "polling"],
"dest": "cancelling",
"after": "_call_cancel",
},
]
def _on_transition_complete(event_data):
if not event_data.transition:
dest = "[no transition]"
else:
dest = event_data.transition.dest
logger.debug(
"Transition complete. Trigger={}, Src={}, Dest={}, result={}, error{}".format(
event_data.event.name,
event_data.transition.source,
dest,
str(event_data.result),
str(event_data.error),
)
)
self._state_machine = Machine(
model=self,
states=states,
transitions=transitions,
initial="disconnected",
send_event=True, # Use event_data structures to pass transition arguments
finalize_event=_on_transition_complete,
queued=True,
)
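The `Machine` above comes from the `transitions` package; the table maps (trigger, source) pairs to a destination state, where a `dest` of `None` means "stay put". A dependency-free sketch of that dispatch idea (`TinyMachine` is illustrative, not the library's API):

```python
class TinyMachine:
    """Minimal (trigger, source) -> dest dispatch, akin to the table above."""
    def __init__(self, initial, transitions):
        self.state = initial
        self._table = {(t["trigger"], t["source"]): t["dest"] for t in transitions}

    def trigger(self, name):
        key = (name, self.state)
        if key not in self._table:
            raise RuntimeError("{} not allowed from {}".format(name, self.state))
        dest = self._table[key]
        if dest is not None:  # dest None means an internal (no-move) transition
            self.state = dest

machine = TinyMachine("disconnected", [
    {"trigger": "_trig_register", "source": "disconnected", "dest": "initializing"},
    {"trigger": "_trig_register", "source": "registering", "dest": None},
    {"trigger": "_trig_send_register_request", "source": "initializing", "dest": "registering"},
])
machine.trigger("_trig_register")
machine.trigger("_trig_send_register_request")
print(machine.state)  # registering
```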
def register(self, callback=None):
"""
Register the device with the provisioning service.
:param callback: Callback to be called upon finishing the registration process.
"""
logger.info("register called from polling machine")
self._register_callback = callback
self._trig_register()
def cancel(self, callback=None):
"""
Cancels the current registration process of the device.
:param callback: Callback to be called upon finishing the cancellation process.
"""
logger.info("cancel called from polling machine")
self._cancel_callback = callback
self._trig_cancel()
def _initialize_register(self, event_data):
logger.info("Initializing the registration process.")
self._request_response_provider.enable_responses(callback=self._on_subscribe_completed)
def _send_register_request(self, event_data):
"""
Send the registration request.
"""
logger.info("Sending registration request")
self._set_query_timer()
request_id = str(uuid.uuid4())
self._operations[request_id] = constant.PUBLISH_TOPIC_REGISTRATION.format(request_id)
self._request_response_provider.send_request(
request_id=request_id,
request_payload=" ",
operation_id=None,
callback_on_response=self._on_register_response_received,
)
def _query_operation_status(self, event_data):
"""
Poll the service for operation status.
"""
logger.info("Querying operation status from polling machine")
self._set_query_timer()
request_id = str(uuid.uuid4())
result = event_data.args[0].args[0]
operation_id = result.operation_id
self._operations[request_id] = constant.PUBLISH_TOPIC_QUERYING.format(
request_id, operation_id
)
self._request_response_provider.send_request(
request_id=request_id,
request_payload=" ",
operation_id=operation_id,
callback_on_response=self._on_query_response_received,
)
def _on_register_response_received(self, request_id, status_code, key_values_dict, response):
"""
The function to call in case of a response from a registration request.
:param request_id: The id of the original register request.
:param status_code: The status code in the response.
:param key_values_dict: The dictionary containing the query parameters of the returned topic.
:param response: The complete response from the service.
"""
self._query_timer.cancel()
retry_after = (
None if "retry-after" not in key_values_dict else str(key_values_dict["retry-after"][0])
)
intermediate_registration_result = RegistrationQueryStatusResult(request_id, retry_after)
if int(status_code, 10) >= 429:
del self._operations[request_id]
self._trig_wait(intermediate_registration_result)
elif int(status_code, 10) >= 300: # pure failure
self._registration_error = ValueError("Incoming message failure")
self._trig_error()
else: # successful case, transition into complete or poll status
self._process_successful_response(request_id, retry_after, response)
def _on_query_response_received(self, request_id, status_code, key_values_dict, response):
"""
The function to call in case of a response from a polling/query request.
:param request_id: The id of the original query request.
:param status_code: The status code in the response.
:param key_values_dict: The dictionary containing the query parameters of the returned topic.
:param response: The complete response from the service.
"""
self._query_timer.cancel()
self._polling_timer.cancel()
retry_after = (
None if "retry-after" not in key_values_dict else str(key_values_dict["retry-after"][0])
)
intermediate_registration_result = RegistrationQueryStatusResult(request_id, retry_after)
if int(status_code, 10) >= 429:
if request_id in self._operations:
publish_query_topic = self._operations[request_id]
del self._operations[request_id]
topic_parts = publish_query_topic.split("$")
key_values_publish_topic = urllib.parse.parse_qs(
topic_parts[POS_QUERY_PARAM_PORTION]
)
operation_id = key_values_publish_topic["operationId"][0]
intermediate_registration_result.operation_id = operation_id
self._trig_wait(intermediate_registration_result)
else:
self._registration_error = ValueError("This request was never sent")
self._trig_error()
elif int(status_code, 10) >= 300: # pure failure
self._registration_error = ValueError("Incoming message failure")
self._trig_error()
else: # successful status code case, transition into complete or another poll status
self._process_successful_response(request_id, retry_after, response)
def _process_successful_response(self, request_id, retry_after, response):
"""
Function to call in case of a 200 response from the service.
:param request_id: The request id
:param retry_after: The time after which to try again.
:param response: The complete response
"""
del self._operations[request_id]
successful_result = self._decode_json_response(request_id, retry_after, response)
if successful_result.status == "assigning":
self._trig_wait(successful_result)
elif successful_result.status == "assigned" or successful_result.status == "failed":
complete_registration_result = self._decode_complete_json_response(
successful_result, response
)
self._registration_result = complete_registration_result
self._trig_complete()
else:
self._registration_error = ValueError("Other types of failure have occurred.", response)
self._trig_error()
def _inform_no_process(self, event_data):
raise RuntimeError("There is no registration process to cancel.")
def _call_cancel(self, event_data):
"""
Completes the cancellation process
"""
logger.info("Cancel called from polling machine")
self._clear_timers()
self._request_response_provider.disconnect(callback=self._on_disconnect_completed_cancel)
def _call_error(self, event_data):
logger.info("Failed register from polling machine")
self._clear_timers()
self._request_response_provider.disconnect(callback=self._on_disconnect_completed_error)
def _call_complete(self, event_data):
logger.info("Complete register from polling machine")
self._clear_timers()
self._request_response_provider.disconnect(callback=self._on_disconnect_completed_register)
def _clear_timers(self):
"""
Clears all the timers.
"""
if self._query_timer is not None:
self._query_timer.cancel()
if self._polling_timer is not None:
self._polling_timer.cancel()
def _set_query_timer(self):
def time_up_query():
logger.error("Time is up for query timer")
self._query_timer.cancel()
# TimeoutError not defined in python 2
self._registration_error = ValueError("Time is up for query timer")
self._trig_error()
self._query_timer = Timer(constant.DEFAULT_TIMEOUT_INTERVAL, time_up_query)
self._query_timer.start()
def _wait_for_interval(self, event_data):
def time_up_polling():
self._polling_timer.cancel()
logger.debug("Done waiting for polling interval of {} secs".format(polling_interval))
if result.operation_id is None:
self._trig_send_register_request(event_data)
else:
self._trig_poll(event_data)
result = event_data.args[0]
polling_interval = (
constant.DEFAULT_POLLING_INTERVAL
if result.retry_after is None
else int(result.retry_after, 10)
)
self._polling_timer = Timer(polling_interval, time_up_polling)
logger.debug("Waiting for " + str(constant.DEFAULT_POLLING_INTERVAL) + " secs")
self._polling_timer.start() # This is waiting for that polling interval
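The polling wait above relies on `threading.Timer`, which is a `Thread` subclass that runs a callback once after a delay and can be cancelled before firing. A minimal sketch:

```python
from threading import Timer

fired = []
timer = Timer(0.05, lambda: fired.append("poll"))  # fire once, after 50 ms
timer.start()
timer.join()  # Timer is a Thread, so join() waits for the callback to run
print(fired)  # ['poll']
```

Calling `timer.cancel()` before the interval elapses would prevent the callback entirely, which is how `_clear_timers` stops a pending poll.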
def _decode_complete_json_response(self, query_result, response):
"""
Decodes the complete json response for details regarding the registration process.
:param query_result: The partially formed result.
:param response: The complete response from the service
"""
decoded_result = json.loads(response)
decoded_state = (
None
if "registrationState" not in decoded_result
else decoded_result["registrationState"]
)
registration_state = None
if decoded_state is not None:
# Everything needs to be converted to string explicitly for python 2
# as everything is by default a unicode character
registration_state = RegistrationState(
None if "deviceId" not in decoded_state else str(decoded_state["deviceId"]),
None if "assignedHub" not in decoded_state else str(decoded_state["assignedHub"]),
None if "substatus" not in decoded_state else str(decoded_state["substatus"]),
None
if "createdDateTimeUtc" not in decoded_state
else str(decoded_state["createdDateTimeUtc"]),
None
if "lastUpdatedDateTimeUtc" not in decoded_state
else str(decoded_state["lastUpdatedDateTimeUtc"]),
None if "etag" not in decoded_state else str(decoded_state["etag"]),
)
registration_result = RegistrationResult(
request_id=query_result.request_id,
operation_id=query_result.operation_id,
status=query_result.status,
registration_state=registration_state,
)
return registration_result
def _decode_json_response(self, request_id, retry_after, response):
"""
Decodes the json response for operation id and status
:param request_id: The request id.
:param retry_after: The time in secs after which to retry.
:param response: The complete response from the service.
"""
decoded_result = json.loads(response)
operation_id = (
None if "operationId" not in decoded_result else str(decoded_result["operationId"])
)
status = None if "status" not in decoded_result else str(decoded_result["status"])
return RegistrationQueryStatusResult(request_id, retry_after, operation_id, status)
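`_decode_json_response` above uses the `None if key not in d else d[key]` pattern; `dict.get` covers the same need more idiomatically. A sketch over the sample payload quoted elsewhere in this diff:

```python
import json

# Sample DPS response payload (operation id shortened for the example)
response = '{"operationId":"4.550cb20c3349a409.390d2957","status":"assigning"}'
decoded = json.loads(response)
operation_id = decoded.get("operationId")  # None when the key is absent
status = decoded.get("status")
print(operation_id, status)
```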
def _on_disconnect_completed_error(self):
logger.info("on_disconnect_completed for Device Provisioning Service")
callback = self._register_callback
if callback:
self._register_callback = None
try:
callback(error=self._registration_error)
except Exception:
logger.error("Unexpected error calling callback supplied to register")
logger.error(traceback.format_exc())
def _on_disconnect_completed_cancel(self):
logger.info("on_disconnect_completed after cancelling current Device Provisioning Service")
callback = self._cancel_callback
if callback:
self._cancel_callback = None
callback()
def _on_disconnect_completed_register(self):
logger.info("on_disconnect_completed after registration to Device Provisioning Service")
callback = self._register_callback
if callback:
self._register_callback = None
try:
callback(result=self._registration_result)
except Exception:
logger.error("Unexpected error calling callback supplied to register")
logger.error(traceback.format_exc())
def _on_subscribe_completed(self):
logger.debug("on_subscribe_completed for Device Provisioning Service")
self._trig_send_register_request()

@ -1,58 +0,0 @@
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
class RegistrationQueryStatusResult(object):
"""
The result of any registration attempt
:ivar request_id: The request id to which the response is being obtained
:ivar operation_id: The id of the operation as returned by the registration request.
:ivar status: The status of the registration process as returned by provisioning service.
Values can be "unassigned", "assigning", "assigned", "failed", "disabled"
from the provisioning service.
"""
def __init__(self, request_id=None, retry_after=None, operation_id=None, status=None):
"""
:param request_id: The request id to which the response is being obtained
:param retry_after : Number of secs after which to retry again.
:param operation_id: The id of the operation as returned by the initial registration request.
:param status: The status of the registration process.
Values can be "unassigned", "assigning", "assigned", "failed", "disabled"
from the provisioning service.
"""
self._request_id = request_id
self._operation_id = operation_id
self._status = status
self._retry_after = retry_after
@property
def request_id(self):
return self._request_id
@property
def retry_after(self):
return self._retry_after
@retry_after.setter
def retry_after(self, val):
self._retry_after = val
@property
def operation_id(self):
return self._operation_id
@operation_id.setter
def operation_id(self, val):
self._operation_id = val
@property
def status(self):
return self._status
@status.setter
def status(self, val):
self._status = val

@ -1,101 +0,0 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
logger = logging.getLogger(__name__)
POS_STATUS_CODE_IN_TOPIC = 3
POS_URL_PORTION = 1
POS_QUERY_PARAM_PORTION = 2
class RequestResponseProvider(object):
"""
Class that processes requests sent from device and responses received at device.
"""
def __init__(self, provisioning_pipeline):
self._provisioning_pipeline = provisioning_pipeline
self._provisioning_pipeline.on_message_received = self._receive_response
self._pending_requests = {}
def send_request(
self, request_id, request_payload, operation_id=None, callback_on_response=None
):
"""
Sends a request
:param request_id: Id of the request
:param request_payload: The payload of the request.
:param operation_id: An id of the operation in case it is an ongoing process.
:param callback_on_response: callback which is called when response comes back for this request.
"""
self._pending_requests[request_id] = callback_on_response
self._provisioning_pipeline.send_request(
request_id=request_id,
request_payload=request_payload,
operation_id=operation_id,
callback=self._on_publish_completed,
)
def connect(self, callback=None):
if callback is None:
callback = self._on_connection_state_change
self._provisioning_pipeline.connect(callback=callback)
def disconnect(self, callback=None):
if callback is None:
callback = self._on_connection_state_change
self._provisioning_pipeline.disconnect(callback=callback)
def enable_responses(self, callback=None):
if callback is None:
callback = self._on_subscribe_completed
self._provisioning_pipeline.enable_responses(callback=callback)
def disable_responses(self, callback=None):
if callback is None:
callback = self._on_unsubscribe_completed
self._provisioning_pipeline.disable_responses(callback=callback)
def _receive_response(self, request_id, status_code, key_value_dict, response_payload):
"""
Handler that processes the response from the service.
:param request_id: The id of the request which is being responded to.
:param status_code: The status code inside the response
:param key_value_dict: A dictionary of keys mapped to a list of values extracted from the topic of the response.
:param response_payload: String payload of the message received.
:return:
"""
# """ Sample topic and payload
# $dps/registrations/res/200/?$rid=28c32371-608c-4390-8da7-c712353c1c3b
# {"operationId":"4.550cb20c3349a409.390d2957-7b58-4701-b4f9-7fe848348f4a","status":"assigning"}
# """
logger.debug("Received response {}:".format(response_payload))
if request_id in self._pending_requests:
callback = self._pending_requests[request_id]
# Only send the status code and the extracted topic
callback(request_id, status_code, key_value_dict, response_payload)
del self._pending_requests[request_id]
# TODO: What happens when request_id is not there? Trigger an error?
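The sample topic in the comment above can be split the same way the polling machine does it: the status code sits at a fixed position in the path portion, and everything after the second "$" is a query string. A self-contained sketch:

```python
import urllib.parse

topic = "$dps/registrations/res/200/?$rid=28c32371-608c-4390-8da7-c712353c1c3b"
parts = topic.split("$")
status_code = parts[1].split("/")[3]           # POS_STATUS_CODE_IN_TOPIC == 3
key_values = urllib.parse.parse_qs(parts[2])   # POS_QUERY_PARAM_PORTION == 2
print(status_code, key_values["rid"][0])
```

Note that `parse_qs` maps each key to a list of values, which is why the surrounding code indexes `key_values_dict["retry-after"][0]`.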
def _on_connection_state_change(self, new_state):
"""Handler to be called by the pipeline upon a connection state change."""
logger.info("Connection State - {}".format(new_state))
def _on_publish_completed(self):
logger.debug("publish completed for request response provider")
def _on_subscribe_completed(self):
logger.debug("subscribe completed for request response provider")
def _on_unsubscribe_completed(self):
logger.debug("on_unsubscribe_completed for request response provider")

@ -3,6 +3,7 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import json
class RegistrationResult(object):
@ -16,24 +17,18 @@ class RegistrationResult(object):
from the provisioning service.
"""
def __init__(self, request_id, operation_id, status, registration_state=None):
def __init__(self, operation_id, status, registration_state=None):
"""
:param request_id: The request id to which the response is being obtained
:param operation_id: The id of the operation as returned by the initial registration request.
:param status: The status of the registration process.
Values can be "unassigned", "assigning", "assigned", "failed", "disabled"
:param registration_state : Details like device id, assigned hub , date times etc returned
from the provisioning service.
"""
self._request_id = request_id
self._operation_id = operation_id
self._status = status
self._registration_state = registration_state
@property
def request_id(self):
return self._request_id
@property
def operation_id(self):
return self._operation_id
@ -70,6 +65,7 @@ class RegistrationState(object):
created_date_time=None,
last_update_date_time=None,
etag=None,
payload=None,
):
"""
:param device_id: Desired device id for the provisioned device
@ -79,6 +75,7 @@ class RegistrationState(object):
:param created_date_time: Registration create date time (in UTC).
:param last_update_date_time: Last updated date time (in UTC).
:param etag: The entity tag associated with the resource.
:param payload: The payload with which hub is responding
"""
self._device_id = device_id
self._assigned_hub = assigned_hub
@ -86,6 +83,7 @@ class RegistrationState(object):
self._created_date_time = created_date_time
self._last_update_date_time = last_update_date_time
self._etag = etag
self._response_payload = payload
@property
def device_id(self):
@ -111,5 +109,11 @@ class RegistrationState(object):
def etag(self):
return self._etag
@property
def response_payload(self):
return json.dumps(self._response_payload, default=lambda o: o.__dict__, sort_keys=True)
def __str__(self):
return "\n".join([self.device_id, self.assigned_hub, self.sub_status])
return "\n".join(
[self.device_id, self.assigned_hub, self.sub_status, self.response_payload]
)
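The `response_payload` property above serializes with `default=lambda o: o.__dict__` and `sort_keys=True`, which gives a stable JSON string even when the payload nests arbitrary user objects. A self-contained sketch (`_Inner` is a hypothetical nested payload object):

```python
import json

class _Inner:
    """Hypothetical nested payload object."""
    def __init__(self):
        self.room = 3
        self.floor = 1

payload = {"device": "sensor-1", "location": _Inner()}
print(json.dumps(payload, default=lambda o: o.__dict__, sort_keys=True))
# {"device": "sensor-1", "location": {"floor": 1, "room": 3}}
```

`sort_keys` keeps the output deterministic across runs, which matters when the string is logged or compared.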

@ -5,3 +5,4 @@ This package provides pipeline for use with the Azure Provisioning Device SDK.
INTERNAL USAGE ONLY
"""
from .provisioning_pipeline import ProvisioningPipeline
from .config import ProvisioningPipelineConfig

@ -0,0 +1,17 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import logging
from azure.iot.device.common.pipeline.config import BasePipelineConfig
logger = logging.getLogger(__name__)
class ProvisioningPipelineConfig(BasePipelineConfig):
"""A class for storing all configurations/options for Provisioning clients in the Azure IoT Python Device Client Library.
"""
pass

@ -0,0 +1,21 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""This module defines an exception surface, exposed as part of the pipeline API"""
# For now, present relevant transport errors as part of the Pipeline API surface
# so that they do not have to be duplicated at this layer.
# TODO: This mimics the IoTHub case. Both IoTHub and Provisioning need to change.
from azure.iot.device.common.pipeline.pipeline_exceptions import *
from azure.iot.device.common.transport_exceptions import (
ConnectionFailedError,
ConnectionDroppedError,
# CT TODO: UnauthorizedError (the one from transport) should probably not surface out of
# the pipeline due to confusion with the higher level service UnauthorizedError. It
# should probably get turned into some other error instead (e.g. ConnectionFailedError).
# But for now, this is a stopgap.
UnauthorizedError,
ProtocolClientError,
)

@ -24,24 +24,24 @@ def get_topic_for_subscribe():
return _get_topic_base() + "res/#"
def get_topic_for_register(request_id):
def get_topic_for_register(method, request_id):
"""
Return the topic string used to publish a registration request.
"""
return (_get_topic_base() + "PUT/iotdps-register/?$rid={request_id}").format(
request_id=request_id
return (_get_topic_base() + "{method}/iotdps-register/?$rid={request_id}").format(
method=method, request_id=request_id
)
def get_topic_for_query(request_id, operation_id):
def get_topic_for_query(method, request_id, operation_id):
"""
:return: The topic string used to query the registration operation status.
"""
return (
_get_topic_base()
+ "GET/iotdps-get-operationstatus/?$rid={request_id}&operationId={operation_id}"
).format(request_id=request_id, operation_id=operation_id)
+ "{method}/iotdps-get-operationstatus/?$rid={request_id}&operationId={operation_id}"
).format(method=method, request_id=request_id, operation_id=operation_id)
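With the `method` parameter added, both helpers above share one format-string shape. A sketch of the resulting topics, assuming the topic base is `$dps/registrations/` (an assumption here; the actual value comes from `_get_topic_base()`):

```python
base = "$dps/registrations/"  # assumed value of _get_topic_base()

register_topic = (base + "{method}/iotdps-register/?$rid={request_id}").format(
    method="PUT", request_id="1"
)
query_topic = (
    base + "{method}/iotdps-get-operationstatus/?$rid={request_id}&operationId={operation_id}"
).format(method="GET", request_id="2", operation_id="op-42")

print(register_topic)  # $dps/registrations/PUT/iotdps-register/?$rid=1
print(query_topic)
```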
def get_topic_for_response():
@ -93,3 +93,22 @@ def extract_status_code_from_topic(topic):
url_parts = topic_parts[1].split("/")
status_code = url_parts[POS_STATUS_CODE_IN_TOPIC]
return status_code
def get_optional_element(content, element_name, index=0):
"""
Gets an optional element from a json string or dictionary.
:param content: The content from which the element needs to be retrieved.
:param element_name: The name of the element
:param index: Optional index in case the return is a collection of elements.
"""
element = None if element_name not in content else content[element_name]
if element is None:
return None
else:
if isinstance(element, list):
return element[index]
elif isinstance(element, object):
return element
else:
return str(element)
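A quick check of `get_optional_element`'s behavior, with the helper re-declared so the sketch is self-contained. Since every Python value is an instance of `object`, the final `str(element)` branch in the original is unreachable, so the simplified version below behaves identically:

```python
def get_optional_element(content, element_name, index=0):
    """Return an optional element from a dict: list -> item at index, scalar -> as-is."""
    element = None if element_name not in content else content[element_name]
    if element is None:
        return None
    if isinstance(element, list):
        return element[index]
    return element  # isinstance(element, object) is True for every value

content = {"payload": ["a", "b"], "status": "assigned"}
print(get_optional_element(content, "payload"))   # a
print(get_optional_element(content, "status"))    # assigned
print(get_optional_element(content, "missing"))   # None
```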

@ -1,26 +0,0 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from azure.iot.device.common.pipeline.pipeline_events_base import PipelineEvent
class RegistrationResponseEvent(PipelineEvent):
"""
A PipelineEvent object which represents an incoming RegistrationResponse event. This object is probably
created by some converter stage based on a pipeline-specific event
"""
def __init__(self, request_id, status_code, key_values, response_payload):
"""
Initializer for RegistrationResponse objects.
:param request_id : The id of the request to which the response arrived.
:param status_code: The status code received in the topic.
:param key_values: A dictionary containing keys mapped to a list of values that were extracted from the topic.
:param response_payload: The response received from a registration process
"""
super(RegistrationResponseEvent, self).__init__()
self.request_id = request_id
self.status_code = status_code
self.key_values = key_values
self.response_payload = response_payload

@ -16,7 +16,7 @@ class SetSymmetricKeySecurityClientOperation(PipelineOperation):
very provisioning-specific
"""
def __init__(self, security_client, callback=None):
def __init__(self, security_client, callback):
"""
Initializer for SetSecurityClient.
@@ -41,7 +41,7 @@ class SetX509SecurityClientOperation(PipelineOperation):
(such as a Provisioning client).
"""
def __init__(self, security_client, callback=None):
def __init__(self, security_client, callback):
"""
Initializer for SetSecurityClient.
@@ -71,9 +71,9 @@ class SetProvisioningClientConnectionArgsOperation(PipelineOperation):
provisioning_host,
registration_id,
id_scope,
callback,
client_cert=None,
sas_token=None,
callback=None,
):
"""
Initializer for SetProvisioningClientConnectionArgsOperation.
@@ -91,7 +91,7 @@ class SetProvisioningClientConnectionArgsOperation(PipelineOperation):
self.sas_token = sas_token
class SendRegistrationRequestOperation(PipelineOperation):
class RegisterOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to send a registration request
to a Device Provisioning Service.
@@ -99,22 +99,26 @@ class SendRegistrationRequestOperation(PipelineOperation):
This operation is in the group of DPS operations because it is very specific to the DPS client.
"""
def __init__(self, request_id, request_payload, callback=None):
def __init__(self, request_payload, registration_id, callback, registration_result=None):
"""
Initializer for SendRegistrationRequestOperation objects.
Initializer for RegisterOperation objects.
:param request_id : The id of the request being sent
:param request_payload: The request that we are sending to the service
:param registration_id: The registration ID is used to uniquely identify a device in the Device Provisioning Service.
:param Function callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
"""
super(SendRegistrationRequestOperation, self).__init__(callback=callback)
self.request_id = request_id
super(RegisterOperation, self).__init__(callback=callback)
self.request_payload = request_payload
self.registration_id = registration_id
self.registration_result = registration_result
self.retry_after_timer = None
self.polling_timer = None
self.provisioning_timeout_timer = None
class SendQueryRequestOperation(PipelineOperation):
class PollStatusOperation(PipelineOperation):
"""
A PipelineOperation object which contains arguments used to poll the status of an in-progress registration
with a Device Provisioning Service.
@@ -122,17 +126,20 @@ class SendQueryRequestOperation(PipelineOperation):
This operation is in the group of DPS operations because it is very specific to the DPS client.
"""
def __init__(self, request_id, operation_id, request_payload, callback=None):
def __init__(self, operation_id, request_payload, callback, registration_result=None):
"""
Initializer for SendRegistrationRequestOperation objects.
Initializer for PollStatusOperation objects.
:param request_id
:param operation_id: The id of the existing operation for which the polling was started.
:param request_payload: The request that we are sending to the service
:param Function callback: The function that gets called when this operation is complete or has failed.
The callback function must accept a PipelineOperation object which indicates the specific operation which
has completed or failed.
"""
super(SendQueryRequestOperation, self).__init__(callback=callback)
self.request_id = request_id
super(PollStatusOperation, self).__init__(callback=callback)
self.operation_id = operation_id
self.request_payload = request_payload
self.registration_result = registration_result
self.retry_after_timer = None
self.polling_timer = None
self.provisioning_timeout_timer = None


@@ -4,9 +4,23 @@
# license information.
# --------------------------------------------------------------------------
from azure.iot.device.common.pipeline import pipeline_ops_base, operation_flow, pipeline_thread
from azure.iot.device.common.pipeline import pipeline_ops_base, pipeline_thread
from azure.iot.device.common.pipeline.pipeline_stages_base import PipelineStage
from . import pipeline_ops_provisioning
from azure.iot.device import exceptions
from azure.iot.device.provisioning.pipeline import constant
from azure.iot.device.provisioning.models.registration_result import (
RegistrationResult,
RegistrationState,
)
import logging
import weakref
import json
from threading import Timer
import time
from .mqtt_topic import get_optional_element
logger = logging.getLogger(__name__)
class UseSecurityClientStage(PipelineStage):
@@ -18,33 +32,477 @@ class UseSecurityClientStage(PipelineStage):
"""
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.SetSymmetricKeySecurityClientOperation):
security_client = op.security_client
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation(
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation,
provisioning_host=security_client.provisioning_host,
registration_id=security_client.registration_id,
id_scope=security_client.id_scope,
sas_token=security_client.get_current_sas_token(),
),
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_provisioning.SetX509SecurityClientOperation):
security_client = op.security_client
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation(
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation,
provisioning_host=security_client.provisioning_host,
registration_id=security_client.registration_id,
id_scope=security_client.id_scope,
client_cert=security_client.get_x509_certificate(),
),
)
self.send_op_down(worker_op)
else:
super(UseSecurityClientStage, self)._run_op(op)
class CommonProvisioningStage(PipelineStage):
"""
This is a shared base stage used by both the RegistrationStage and the PollingStatusStage
of provisioning. It contains common helpers for decoding the response,
retrieving errors, the registration status and the operation id,
and forming a complete result.
@pipeline_thread.runs_on_pipeline_thread
def _clear_timeout_timer(self, op, error):
"""
Clears the timeout timer for provisioning operations (Register and PollStatus)
when a response arrives from the service.
"""
if op.provisioning_timeout_timer:
logger.debug("{}({}): Cancelling provisioning timeout timer".format(self.name, op.name))
op.provisioning_timeout_timer.cancel()
op.provisioning_timeout_timer = None
@staticmethod
def _decode_response(provisioning_op):
return json.loads(provisioning_op.response_body.decode("utf-8"))
@staticmethod
def _get_registration_status(decoded_response):
return get_optional_element(decoded_response, "status")
@staticmethod
def _get_operation_id(decoded_response):
return get_optional_element(decoded_response, "operationId")
@staticmethod
def _form_complete_result(operation_id, decoded_response, status):
"""
Create the registration result from the fully decoded JSON response, capturing the details of the registration process.
"""
decoded_state = get_optional_element(decoded_response, "registrationState")
registration_state = None
if decoded_state is not None:
registration_state = RegistrationState(
device_id=get_optional_element(decoded_state, "deviceId"),
assigned_hub=get_optional_element(decoded_state, "assignedHub"),
sub_status=get_optional_element(decoded_state, "substatus"),
created_date_time=get_optional_element(decoded_state, "createdDateTimeUtc"),
last_update_date_time=get_optional_element(decoded_state, "lastUpdatedDateTimeUtc"),
etag=get_optional_element(decoded_state, "etag"),
payload=get_optional_element(decoded_state, "payload"),
)
registration_result = RegistrationResult(
operation_id=operation_id, status=status, registration_state=registration_state
)
return registration_result
def _process_service_error_status_code(self, original_provisioning_op, request_response_op):
logger.error(
"{stage_name}({op_name}): Received error with status code {status_code} for {prov_op_name} request operation".format(
stage_name=self.name,
op_name=request_response_op.name,
prov_op_name=request_response_op.request_type,
status_code=request_response_op.status_code,
)
)
logger.error(
"{stage_name}({op_name}): Response body: {body}".format(
stage_name=self.name,
op_name=request_response_op.name,
body=request_response_op.response_body,
)
)
original_provisioning_op.complete(
error=exceptions.ServiceError(
"{prov_op_name} request returned a service error status code {status_code}".format(
prov_op_name=request_response_op.request_type,
status_code=request_response_op.status_code,
)
)
)
def _process_retry_status_code(self, error, original_provisioning_op, request_response_op):
retry_interval = (
int(request_response_op.retry_after, 10)
if request_response_op.retry_after is not None
else constant.DEFAULT_POLLING_INTERVAL
)
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_retry_after():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): retrying".format(
stage_name=this.name, op_name=request_response_op.name
)
)
original_provisioning_op.retry_after_timer.cancel()
original_provisioning_op.retry_after_timer = None
original_provisioning_op.completed = False
this.run_op(original_provisioning_op)
logger.warning(
"{stage_name}({op_name}): Op needs retry with interval {interval} because of {error}. Setting timer.".format(
stage_name=self.name,
op_name=request_response_op.name,
interval=retry_interval,
error=error,
)
)
logger.debug("{}({}): Creating retry timer".format(self.name, request_response_op.name))
original_provisioning_op.retry_after_timer = Timer(retry_interval, do_retry_after)
original_provisioning_op.retry_after_timer.start()
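The retry scheduling above reduces to parsing the service-supplied `retry-after` value (a base-10 string) with a default fallback, then arming a one-shot `threading.Timer`. A minimal sketch of that pattern, with `DEFAULT_POLLING_INTERVAL` standing in for the pipeline constant:

```python
import threading

DEFAULT_POLLING_INTERVAL = 2  # assumed stand-in for the pipeline constant

def schedule_retry(retry_after, do_retry):
    """Arm a one-shot timer, honoring the service-supplied retry-after if any."""
    interval = (
        int(retry_after, 10) if retry_after is not None
        else DEFAULT_POLLING_INTERVAL
    )
    timer = threading.Timer(interval, do_retry)
    timer.start()
    return timer

fired = threading.Event()
schedule_retry("0", fired.set)  # "0" parses to an immediate retry
fired.wait(timeout=5)
```

In the real stage the timer callback also clears the timer reference and re-runs the original op on the pipeline thread, which is why it is wrapped in `invoke_on_pipeline_thread_nowait`.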
@staticmethod
def _process_failed_and_assigned_registration_status(
error,
operation_id,
decoded_response,
registration_status,
original_provisioning_op,
request_response_op,
):
complete_registration_result = CommonProvisioningStage._form_complete_result(
operation_id=operation_id, decoded_response=decoded_response, status=registration_status
)
original_provisioning_op.registration_result = complete_registration_result
if registration_status == "failed":
error = exceptions.ServiceError(
"Query Status operation returned a failed registration status with a status code of {status_code}".format(
status_code=request_response_op.status_code
)
)
original_provisioning_op.complete(error=error)
@staticmethod
def _process_unknown_registration_status(
registration_status, original_provisioning_op, request_response_op
):
error = exceptions.ServiceError(
"Query Status Operation encountered an invalid registration status {status} with a status code of {status_code}".format(
status=registration_status, status_code=request_response_op.status_code
)
)
original_provisioning_op.complete(error=error)
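The response handlers in the stages below share one status-code dispatch: 300-428 is treated as a service error, 429 and above arms a retry timer, and anything else is decoded as a successful response whose registration status is then inspected. A simplified model of that branching (not the SDK's API, just the control flow):

```python
def classify_status(status_code):
    """Simplified model of the branching used by the provisioning stages."""
    if 300 <= status_code < 429:
        return "service-error"    # op completed with a ServiceError
    if status_code >= 429:
        return "retry"            # retry_after timer is armed
    return "decode-response"      # 2xx: body is decoded, status inspected

print(classify_status(200))  # decode-response
print(classify_status(401))  # service-error
print(classify_status(429))  # retry
```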
class PollingStatusStage(CommonProvisioningStage):
"""
This stage is responsible for sending the query request once the initial response
to the registration request has been received.
Upon receipt of each response this stage decides whether
to send another query request or complete the procedure.
"""
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.PollStatusOperation):
query_status_op = op
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def query_timeout():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): returning timeout error".format(
stage_name=this.name, op_name=op.name
)
)
query_status_op.complete(
error=(
exceptions.ServiceError(
"Operation timed out before provisioning service could respond for {op_type} operation".format(
op_type=constant.QUERY
)
)
)
)
logger.debug("{}({}): Creating provisioning timeout timer".format(self.name, op.name))
query_status_op.provisioning_timeout_timer = Timer(
constant.DEFAULT_TIMEOUT_INTERVAL, query_timeout
)
query_status_op.provisioning_timeout_timer.start()
def on_query_response(op, error):
self._clear_timeout_timer(query_status_op, error)
logger.debug(
"{stage_name}({op_name}): Received response with status code {status_code} for PollStatusOperation with operation id {oper_id}".format(
stage_name=self.name,
op_name=op.name,
status_code=op.status_code,
oper_id=op.query_params["operation_id"],
)
)
if error:
logger.error(
"{stage_name}({op_name}): Received error for {prov_op_name} operation".format(
stage_name=self.name, op_name=op.name, prov_op_name=op.request_type
)
)
query_status_op.complete(error=error)
else:
if 300 <= op.status_code < 429:
self._process_service_error_status_code(query_status_op, op)
elif op.status_code >= 429:
self._process_retry_status_code(error, query_status_op, op)
else:
decoded_response = self._decode_response(op)
operation_id = self._get_operation_id(decoded_response)
registration_status = self._get_registration_status(decoded_response)
if registration_status == "assigning":
polling_interval = (
int(op.retry_after, 10)
if op.retry_after is not None
else constant.DEFAULT_POLLING_INTERVAL
)
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_polling():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): retrying".format(
stage_name=this.name, op_name=op.name
)
)
query_status_op.polling_timer.cancel()
query_status_op.polling_timer = None
query_status_op.completed = False
this.run_op(query_status_op)
logger.info(
"{stage_name}({op_name}): Op needs retry with interval {interval} because of {error}. Setting timer.".format(
stage_name=self.name,
op_name=op.name,
interval=polling_interval,
error=error,
)
)
logger.debug(
"{}({}): Creating polling timer".format(self.name, op.name)
)
query_status_op.polling_timer = Timer(polling_interval, do_polling)
query_status_op.polling_timer.start()
elif registration_status == "assigned" or registration_status == "failed":
self._process_failed_and_assigned_registration_status(
error=error,
operation_id=operation_id,
decoded_response=decoded_response,
registration_status=registration_status,
original_provisioning_op=query_status_op,
request_response_op=op,
)
else:
operation_flow.pass_op_to_next_stage(self, op)
self._process_unknown_registration_status(
registration_status=registration_status,
original_provisioning_op=query_status_op,
request_response_op=op,
)
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.QUERY,
method="GET",
resource_location="/",
query_params={"operation_id": query_status_op.operation_id},
request_body=query_status_op.request_payload,
callback=on_query_response,
)
)
else:
super(PollingStatusStage, self)._run_op(op)
class RegistrationStage(CommonProvisioningStage):
"""
This is the first stage, which converts a registration request
into a normal request-and-response operation.
Upon the receipt of the response this stage decides whether
to send another registration request or send a query request.
Depending on the status and result of the response
this stage may also complete the registration process.
"""
@pipeline_thread.runs_on_pipeline_thread
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.RegisterOperation):
initial_register_op = op
self_weakref = weakref.ref(self)
@pipeline_thread.invoke_on_pipeline_thread_nowait
def register_timeout():
this = self_weakref()
logger.info(
"{stage_name}({op_name}): returning timeout error".format(
stage_name=this.name, op_name=op.name
)
)
initial_register_op.complete(
error=(
exceptions.ServiceError(
"Operation timed out before provisioning service could respond for {op_type} operation".format(
op_type=constant.REGISTER
)
)
)
)
logger.debug("{}({}): Creating provisioning timeout timer".format(self.name, op.name))
initial_register_op.provisioning_timeout_timer = Timer(
constant.DEFAULT_TIMEOUT_INTERVAL, register_timeout
)
initial_register_op.provisioning_timeout_timer.start()
def on_registration_response(op, error):
self._clear_timeout_timer(initial_register_op, error)
logger.debug(
"{stage_name}({op_name}): Received response with status code {status_code} for RegisterOperation".format(
stage_name=self.name, op_name=op.name, status_code=op.status_code
)
)
if error:
logger.error(
"{stage_name}({op_name}): Received error for {prov_op_name} operation".format(
stage_name=self.name, op_name=op.name, prov_op_name=op.request_type
)
)
initial_register_op.complete(error=error)
else:
if 300 <= op.status_code < 429:
self._process_service_error_status_code(initial_register_op, op)
elif op.status_code >= 429:
self._process_retry_status_code(error, initial_register_op, op)
else:
decoded_response = self._decode_response(op)
operation_id = self._get_operation_id(decoded_response)
registration_status = self._get_registration_status(decoded_response)
if registration_status == "assigning":
self_weakref = weakref.ref(self)
def copy_result_to_original_op(op, error):
logger.debug(
"Copying registration result from Query Status Op to Registration Op"
)
initial_register_op.registration_result = op.registration_result
initial_register_op.error = error
@pipeline_thread.invoke_on_pipeline_thread_nowait
def do_query_after_interval():
this = self_weakref()
initial_register_op.polling_timer.cancel()
initial_register_op.polling_timer = None
logger.info(
"{stage_name}({op_name}): polling".format(
stage_name=this.name, op_name=op.name
)
)
query_worker_op = initial_register_op.spawn_worker_op(
worker_op_type=pipeline_ops_provisioning.PollStatusOperation,
request_payload=" ",
operation_id=operation_id,
callback=copy_result_to_original_op,
)
self.send_op_down(query_worker_op)
logger.warning(
"{stage_name}({op_name}): Op will transition into polling after interval {interval}. Setting timer.".format(
stage_name=self.name,
op_name=op.name,
interval=constant.DEFAULT_POLLING_INTERVAL,
)
)
logger.debug(
"{}({}): Creating polling timer".format(self.name, op.name)
)
initial_register_op.polling_timer = Timer(
constant.DEFAULT_POLLING_INTERVAL, do_query_after_interval
)
initial_register_op.polling_timer.start()
elif registration_status == "failed" or registration_status == "assigned":
self._process_failed_and_assigned_registration_status(
error=error,
operation_id=operation_id,
decoded_response=decoded_response,
registration_status=registration_status,
original_provisioning_op=initial_register_op,
request_response_op=op,
)
else:
self._process_unknown_registration_status(
registration_status=registration_status,
original_provisioning_op=initial_register_op,
request_response_op=op,
)
registration_payload = DeviceRegistrationPayload(
registration_id=initial_register_op.registration_id,
custom_payload=initial_register_op.request_payload,
)
self.send_op_down(
pipeline_ops_base.RequestAndResponseOperation(
request_type=constant.REGISTER,
method="PUT",
resource_location="/",
request_body=registration_payload.get_json_string(),
callback=on_registration_response,
)
)
else:
super(RegistrationStage, self)._run_op(op)
class DeviceRegistrationPayload(object):
"""
The class representing the payload that needs to be sent to the service.
"""
def __init__(self, registration_id, custom_payload=None):
# This is not a convention to name variables in python but the
# DPS service spec needs the name to be exact for it to work
self.registrationId = registration_id
self.payload = custom_payload
def get_json_string(self):
return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True)
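Given the serializer above, a payload instance round-trips to a stable JSON string: `sort_keys=True` gives deterministic key order and a `None` custom payload maps to `null`. A self-contained sketch with an illustrative registration id:

```python
import json

class DeviceRegistrationPayload(object):
    """Same shape as the class above; registrationId must match the DPS wire name."""
    def __init__(self, registration_id, custom_payload=None):
        self.registrationId = registration_id
        self.payload = custom_payload

    def get_json_string(self):
        # default=... lets json.dumps serialize the instance via its __dict__
        return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True)

payload = DeviceRegistrationPayload(registration_id="dev-01")
print(payload.get_json_string())
# {"payload": null, "registrationId": "dev-01"}
```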


@@ -10,32 +10,31 @@ from azure.iot.device.common.pipeline import (
pipeline_ops_base,
pipeline_ops_mqtt,
pipeline_events_mqtt,
operation_flow,
pipeline_thread,
pipeline_events_base,
)
from azure.iot.device.common.pipeline.pipeline_stages_base import PipelineStage
from azure.iot.device.provisioning.pipeline import mqtt_topic
from azure.iot.device.provisioning.pipeline import (
pipeline_events_provisioning,
pipeline_ops_provisioning,
)
from azure.iot.device.provisioning.pipeline import pipeline_ops_provisioning
from azure.iot.device import constant as pkg_constant
from . import constant as pipeline_constant
from azure.iot.device.product_info import ProductInfo
logger = logging.getLogger(__name__)
class ProvisioningMQTTConverterStage(PipelineStage):
class ProvisioningMQTTTranslationStage(PipelineStage):
"""
PipelineStage which converts other Provisioning pipeline operations into MQTT operations. This stage also
converts MQTT pipeline events into Provisioning pipeline events.
"""
def __init__(self):
super(ProvisioningMQTTConverterStage, self).__init__()
super(ProvisioningMQTTTranslationStage, self).__init__()
self.action_to_topic = {}
@pipeline_thread.runs_on_pipeline_thread
def _execute_op(self, op):
def _run_op(self, op):
if isinstance(op, pipeline_ops_provisioning.SetProvisioningClientConnectionArgsOperation):
# get security client args from above, save some, use some to build topic names,
@@ -44,7 +43,7 @@ class ProvisioningMQTTConverterStage(PipelineStage):
client_id = op.registration_id
query_param_seq = [
("api-version", pkg_constant.PROVISIONING_API_VERSION),
("ClientVersion", pkg_constant.USER_AGENT),
("ClientVersion", ProductInfo.get_provisioning_user_agent()),
]
username = "{id_scope}/registrations/{registration_id}/{query_params}".format(
id_scope=op.id_scope,
@@ -54,61 +53,59 @@ class ProvisioningMQTTConverterStage(PipelineStage):
hostname = op.provisioning_host
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation(
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.SetMQTTConnectionArgsOperation,
client_id=client_id,
hostname=hostname,
username=username,
client_cert=op.client_cert,
sas_token=op.sas_token,
),
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_provisioning.SendRegistrationRequestOperation):
# Convert Sending the request into MQTT Publish operations
topic = mqtt_topic.get_topic_for_register(op.request_id)
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(
topic=topic, payload=op.request_payload
),
elif isinstance(op, pipeline_ops_base.RequestOperation):
if op.request_type == pipeline_constant.REGISTER:
topic = mqtt_topic.get_topic_for_register(
method=op.method, request_id=op.request_id
)
elif isinstance(op, pipeline_ops_provisioning.SendQueryRequestOperation):
# Convert Sending the request into MQTT Publish operations
topic = mqtt_topic.get_topic_for_query(op.request_id, op.operation_id)
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTPublishOperation(
topic=topic, payload=op.request_payload
),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
topic=topic,
payload=op.request_body,
)
self.send_op_down(worker_op)
else:
topic = mqtt_topic.get_topic_for_query(
method=op.method,
request_id=op.request_id,
operation_id=op.query_params["operation_id"],
)
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTPublishOperation,
topic=topic,
payload=op.request_body,
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.EnableFeatureOperation):
# Enabling for register gets translated into an MQTT subscribe operation
topic = mqtt_topic.get_topic_for_subscribe()
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTSubscribeOperation(topic=topic),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTSubscribeOperation, topic=topic
)
self.send_op_down(worker_op)
elif isinstance(op, pipeline_ops_base.DisableFeatureOperation):
# Disabling a register response gets turned into an MQTT unsubscribe operation
topic = mqtt_topic.get_topic_for_subscribe()
operation_flow.delegate_to_different_op(
stage=self,
original_op=op,
new_op=pipeline_ops_mqtt.MQTTUnsubscribeOperation(topic=topic),
worker_op = op.spawn_worker_op(
worker_op_type=pipeline_ops_mqtt.MQTTUnsubscribeOperation, topic=topic
)
self.send_op_down(worker_op)
else:
# All other operations get passed down
operation_flow.pass_op_to_next_stage(self, op)
super(ProvisioningMQTTTranslationStage, self)._run_op(op)
@pipeline_thread.runs_on_pipeline_thread
def _handle_pipeline_event(self, event):
@@ -126,22 +123,22 @@ class ProvisioningMQTTConverterStage(PipelineStage):
)
)
key_values = mqtt_topic.extract_properties_from_topic(topic)
retry_after = mqtt_topic.get_optional_element(key_values, "retry-after", 0)
status_code = mqtt_topic.extract_status_code_from_topic(topic)
request_id = key_values["rid"][0]
if event.payload is not None:
response = event.payload.decode("utf-8")
# Extract pertinent information from mqtt topic
# like status code request_id and send it upwards.
operation_flow.pass_event_to_previous_stage(
self,
pipeline_events_provisioning.RegistrationResponseEvent(
request_id, status_code, key_values, response
),
self.send_event_up(
pipeline_events_base.ResponseEvent(
request_id=request_id,
status_code=int(status_code, 10),
response_body=event.payload,
retry_after=retry_after,
)
)
else:
logger.warning("Unknown topic: {} passing up to next handler".format(topic))
operation_flow.pass_event_to_previous_stage(self, event)
self.send_event_up(event)
else:
# all other messages get passed up
operation_flow.pass_event_to_previous_stage(self, event)
super(ProvisioningMQTTTranslationStage, self)._handle_pipeline_event(event)


@@ -13,46 +13,94 @@ from azure.iot.device.provisioning.pipeline import (
pipeline_stages_provisioning,
pipeline_stages_provisioning_mqtt,
)
from azure.iot.device.provisioning.pipeline import pipeline_events_provisioning
from azure.iot.device.provisioning.pipeline import pipeline_ops_provisioning
from azure.iot.device.provisioning.security import SymmetricKeySecurityClient, X509SecurityClient
from azure.iot.device.provisioning.pipeline import constant as provisioning_constants
logger = logging.getLogger(__name__)
class ProvisioningPipeline(object):
def __init__(self, security_client):
def __init__(self, security_client, pipeline_configuration):
"""
Constructor for instantiating a pipeline
:param security_client: The security client which stores credentials
"""
self.responses_enabled = {provisioning_constants.REGISTER: False}
# Event Handlers - Will be set by Client after instantiation of pipeline
self.on_connected = None
self.on_disconnected = None
self.on_message_received = None
self._registration_id = security_client.registration_id
self._pipeline = (
pipeline_stages_base.PipelineRootStage()
#
# The root is always the root. By definition, it's the first stage in the pipeline.
#
pipeline_stages_base.PipelineRootStage(pipeline_configuration=pipeline_configuration)
#
# UseSecurityClientStage comes near the root by default because it doesn't need to be after
# anything, but it does need to be before ProvisioningMQTTTranslationStage.
#
.append_stage(pipeline_stages_provisioning.UseSecurityClientStage())
.append_stage(pipeline_stages_provisioning_mqtt.ProvisioningMQTTConverterStage())
.append_stage(pipeline_stages_base.EnsureConnectionStage())
.append_stage(pipeline_stages_base.SerializeConnectOpsStage())
#
# RegistrationStage needs to come early because this is the stage that converts registration
# or query requests into request and response objects which are used by later stages
#
.append_stage(pipeline_stages_provisioning.RegistrationStage())
#
# PollingStatusStage needs to come after RegistrationStage because RegistrationStage counts
# on PollingStatusStage to poll until the registration is complete.
#
.append_stage(pipeline_stages_provisioning.PollingStatusStage())
#
# CoordinateRequestAndResponseStage needs to be after RegistrationStage and PollingStatusStage
# because these 2 stages create the request ops that CoordinateRequestAndResponseStage
# is coordinating. It needs to be before ProvisioningMQTTTranslationStage because that stage
# operates on ops that CoordinateRequestAndResponseStage produces
#
.append_stage(pipeline_stages_base.CoordinateRequestAndResponseStage())
#
# ProvisioningMQTTTranslationStage comes here because this is the point where we can translate
# all operations directly into MQTT. After this stage, only pipeline_stages_base stages
# are allowed because ProvisioningMQTTTranslationStage removes all the provisioning-ness from the ops
#
.append_stage(pipeline_stages_provisioning_mqtt.ProvisioningMQTTTranslationStage())
#
# AutoConnectStage comes here because only MQTT ops have the need_connection flag set
# and this is the first place in the pipeline where we can guarantee that all network
# ops are MQTT ops.
#
.append_stage(pipeline_stages_base.AutoConnectStage())
#
# ReconnectStage needs to be after AutoConnectStage because ReconnectStage sets/clears
# the virtually_connected flag and we want an automatic connection op to set this flag so
# we can reconnect autoconnect operations.
#
.append_stage(pipeline_stages_base.ReconnectStage())
#
# ConnectionLockStage needs to be after ReconnectStage because we want any ops that
# ReconnectStage creates to go through the ConnectionLockStage gate
#
.append_stage(pipeline_stages_base.ConnectionLockStage())
#
# RetryStage needs to be near the end because it's retrying low-level MQTT operations.
#
.append_stage(pipeline_stages_base.RetryStage())
#
# OpTimeoutStage needs to be after RetryStage because OpTimeoutStage returns the timeout
# errors that RetryStage is watching for.
#
.append_stage(pipeline_stages_base.OpTimeoutStage())
#
# MQTTTransportStage needs to be at the very end of the pipeline because this is where
# operations turn into network traffic
#
.append_stage(pipeline_stages_mqtt.MQTTTransportStage())
)
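The ordering comments above rely on `append_stage` attaching each new stage at the tail of the chain while returning the root, so the calls chain from `PipelineRootStage`. A toy model of that linked-stage pattern (an illustrative stand-in, not the real `PipelineStage` API):

```python
class Stage(object):
    """Toy stand-in for PipelineStage: a singly linked chain of handlers."""
    def __init__(self, name):
        self.name = name
        self.next_stage = None

    def append_stage(self, new_stage):
        # Attach at the tail, return the root so .append_stage() calls chain.
        tail = self
        while tail.next_stage is not None:
            tail = tail.next_stage
        tail.next_stage = new_stage
        return self

    def run_op(self, op):
        # Each stage may handle or transform the op; here we just pass it down.
        if self.next_stage is not None:
            return self.next_stage.run_op(op)
        return (self.name, op)  # the last stage "handles" the op

root = (
    Stage("root")
    .append_stage(Stage("translate"))
    .append_stage(Stage("transport"))
)
print(root.run_op("connect"))  # ('transport', 'connect')
```

This is why stage order in the constructor matters: an op flows down the chain in exactly the order the stages were appended.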
def _on_pipeline_event(event):
if isinstance(event, pipeline_events_provisioning.RegistrationResponseEvent):
if self.on_message_received:
self.on_message_received(
event.request_id,
event.status_code,
event.key_values,
event.response_payload,
)
else:
logger.warning("Provisioning event received with no handler. dropping.")
else:
logger.warning("Dropping unknown pipeline event {}".format(event.name))
def _on_connected():
@@ -82,24 +130,25 @@ class ProvisioningPipeline(object):
self._pipeline.run_op(op)
callback.wait_for_completion()
if op.error:
logger.error("{} failed: {}".format(op.name, op.error))
raise op.error
def connect(self, callback=None):
"""
Connect to the service.
:param callback: callback which is called when the connection to the service is complete.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ProtocolClientError`
"""
logger.info("connect called")
def pipeline_callback(call):
if call.error:
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
def pipeline_callback(op, error):
callback(error=error)
self._pipeline.run_op(pipeline_ops_base.ConnectOperation(callback=pipeline_callback))
@@ -108,83 +157,60 @@ class ProvisioningPipeline(object):
Disconnect from the service.
:param callback: callback which is called when the connection to the service has been disconnected
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.iothub.pipeline.exceptions.ProtocolClientError`
"""
logger.info("disconnect called")
def pipeline_callback(call):
if call.error:
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
def pipeline_callback(op, error):
callback(error=error)
self._pipeline.run_op(pipeline_ops_base.DisconnectOperation(callback=pipeline_callback))
def send_request(self, request_id, request_payload, operation_id=None, callback=None):
"""
Send a request to the Device Provisioning Service.
:param request_id: The id of the request
:param request_payload: The request which is to be sent.
:param operation_id: The id of the operation.
:param callback: callback which is called when the message publish has been acknowledged by the service.
"""
def pipeline_callback(call):
if call.error:
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
op = None
if operation_id is not None:
op = pipeline_ops_provisioning.SendQueryRequestOperation(
request_id=request_id,
operation_id=operation_id,
request_payload=request_payload,
callback=pipeline_callback,
)
else:
op = pipeline_ops_provisioning.SendRegistrationRequestOperation(
request_id=request_id, request_payload=request_payload, callback=pipeline_callback
)
self._pipeline.run_op(op)
def enable_responses(self, callback=None):
"""
Disable response from the DPS service by subscribing to the appropriate topics.
Enable response from the DPS service by subscribing to the appropriate topics.
:param callback: callback which is called when the feature is enabled
:param callback: callback which is called when responses are enabled
"""
logger.debug("enable_responses called")
def pipeline_callback(call):
if call.error:
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
self.responses_enabled[provisioning_constants.REGISTER] = True
def pipeline_callback(op, error):
callback(error=error)
self._pipeline.run_op(
pipeline_ops_base.EnableFeatureOperation(feature_name=None, callback=pipeline_callback)
)
def disable_responses(self, callback=None):
def register(self, payload=None, callback=None):
"""
Disable response from the DPS service by unsubscribing from the appropriate topics.
:param callback: callback which is called when the feature is disabled
Register to the device provisioning service.
:param payload: Payload that can be sent with the registration request.
:param callback: callback which is called when the registration is done.
The following exceptions are not "raised", but rather returned via the "error" parameter
when invoking "callback":
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionFailedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ConnectionDroppedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.UnauthorizedError`
:raises: :class:`azure.iot.device.provisioning.pipeline.exceptions.ProtocolClientError`
"""
logger.debug("disable_responses called")
def pipeline_callback(call):
if call.error:
# TODO we need error semantics on the client
exit(1)
if callback:
callback()
def on_complete(op, error):
# TODO: Apparently when it fails we can get a result as well as an error.
if error:
callback(error=error, result=None)
else:
callback(result=op.registration_result)
self._pipeline.run_op(
pipeline_ops_base.DisableFeatureOperation(feature_name=None, callback=pipeline_callback)
pipeline_ops_provisioning.RegisterOperation(
request_payload=payload, registration_id=self._registration_id, callback=on_complete
)
)
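All of the pipeline operations above complete through a `callback(op, error)` handed to `run_op`. A rough, stdlib-only sketch of that convention (all class and function names here are illustrative stand-ins, not the SDK's):

```python
import threading

class Operation(object):
    """Toy pipeline operation: carries only its completion callback."""
    def __init__(self, callback):
        self.callback = callback

class Pipeline(object):
    """Toy pipeline that completes each operation on a worker thread."""
    def run_op(self, op, error=None):
        # The real pipeline pushes the op through its stages; here we just
        # invoke the completion callback asynchronously with an optional error.
        threading.Thread(target=op.callback, args=(op, error)).start()

def connect(pipeline, callback):
    """Mirrors the callback(error=...) convention used by the client above."""
    def pipeline_callback(op, error):
        callback(error=error)
    pipeline.run_op(Operation(callback=pipeline_callback))
```

A caller can then block on completion with a `threading.Event`, which is essentially what the `EventedCallback.wait_for_completion` used later in this commit does.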

View file

@@ -4,63 +4,93 @@
# license information.
# --------------------------------------------------------------------------
"""
This module contains one of the implementations of the Provisioning Device Client which uses Symmetric Key authentication.
This module contains user-facing synchronous Provisioning Device Client for Azure Provisioning
Device SDK. This client uses Symmetric Key and X509 authentication to register devices with an
IoT Hub via the Device Provisioning Service.
"""
import logging
from azure.iot.device.common.evented_callback import EventedCallback
from .abstract_provisioning_device_client import AbstractProvisioningDeviceClient
from .abstract_provisioning_device_client import log_on_register_complete
from .internal.polling_machine import PollingMachine
from azure.iot.device.provisioning.pipeline import constant as dps_constant
from .pipeline import exceptions as pipeline_exceptions
from azure.iot.device import exceptions
logger = logging.getLogger(__name__)
def handle_result(callback):
try:
return callback.wait_for_completion()
except pipeline_exceptions.ConnectionDroppedError as e:
raise exceptions.ConnectionDroppedError(message="Lost connection to IoTHub", cause=e)
except pipeline_exceptions.ConnectionFailedError as e:
raise exceptions.ConnectionFailedError(message="Could not connect to IoTHub", cause=e)
except pipeline_exceptions.UnauthorizedError as e:
raise exceptions.CredentialError(message="Credentials invalid, could not connect", cause=e)
except pipeline_exceptions.ProtocolClientError as e:
raise exceptions.ClientError(message="Error in the IoTHub client", cause=e)
except Exception as e:
raise exceptions.ClientError(message="Unexpected failure", cause=e)
class ProvisioningDeviceClient(AbstractProvisioningDeviceClient):
"""
Client which can be used to run the registration of a device with provisioning service
using Symmetric Key authentication.
using Symmetric Key or X509 authentication.
"""
def __init__(self, provisioning_pipeline):
"""
Initializer for the Provisioning Client.
NOTE : This initializer should not be called directly.
Instead, the class methods that start with `create_from_` should be used to create a client object.
:param provisioning_pipeline: The protocol pipeline for provisioning. As of now this only supports MQTT.
"""
super(ProvisioningDeviceClient, self).__init__(provisioning_pipeline)
self._polling_machine = PollingMachine(provisioning_pipeline)
def register(self):
"""
Register the device with the provisioning service
This is a synchronous call, meaning that this function will not return until the registration
process has completed successfully or the attempt has resulted in a failure. Before returning
the client will also disconnect from the provisioning service.
If a registration attempt is made while a previous registration is in progress it may throw an error.
Register the device with the provisioning service.
This is a synchronous call, meaning that this function will not return until the
registration process has completed successfully or the attempt has resulted in a failure.
Before returning, the client will also disconnect from the provisioning service.
If a registration attempt is made while a previous registration is in progress it may
throw an error.
:returns: RegistrationResult indicating the result of the registration.
:rtype: :class:`azure.iot.device.RegistrationResult`
:raises: :class:`azure.iot.device.exceptions.CredentialError` if credentials are invalid
and a connection cannot be established.
:raises: :class:`azure.iot.device.exceptions.ConnectionFailedError` if a establishing a
connection results in failure.
:raises: :class:`azure.iot.device.exceptions.ConnectionDroppedError` if connection is lost
during execution.
:raises: :class:`azure.iot.device.exceptions.ClientError` if there is an unexpected failure
during execution.
"""
logger.info("Registering with Provisioning Service...")
if not self._provisioning_pipeline.responses_enabled[dps_constant.REGISTER]:
self._enable_responses()
register_complete = EventedCallback(return_arg_name="result")
self._polling_machine.register(callback=register_complete)
result = register_complete.wait_for_completion()
self._provisioning_pipeline.register(
payload=self._provisioning_payload, callback=register_complete
)
result = handle_result(register_complete)
log_on_register_complete(result)
return result
def cancel(self):
def _enable_responses(self):
"""Enable reception of responses from the Device Provisioning Service.
This is a synchronous call, meaning that this function will not return until the feature
has been enabled.
"""
This is a synchronous call, meaning that this function will not return until the cancellation
process has completed successfully or the attempt has resulted in a failure. Before returning
the client will also disconnect from the provisioning service.
logger.info("Enabling reception of response from Device Provisioning Service...")
In case there is no registration in process it will throw an error as there is
no registration process to cancel.
"""
logger.info("Cancelling the current registration process")
subscription_complete = EventedCallback()
self._provisioning_pipeline.enable_responses(callback=subscription_complete)
cancel_complete = EventedCallback()
self._polling_machine.cancel(callback=cancel_complete)
cancel_complete.wait_for_completion()
handle_result(subscription_complete)
logger.info("Successfully cancelled the current registration process")
logger.info("Successfully subscribed to Device Provisioning Service to receive responses")
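The `handle_result` helper above follows a common two-layer exception pattern: low-level pipeline errors are caught and re-raised as user-facing exceptions with the original attached as the cause. A minimal self-contained sketch of the same pattern (the exception names below are stand-ins, not the SDK's own classes):

```python
class PipelineUnauthorizedError(Exception):
    """Stand-in for a low-level transport/pipeline error."""
    pass

class ClientError(Exception):
    """Stand-in user-facing error that records its low-level cause."""
    def __init__(self, message, cause=None):
        super(ClientError, self).__init__(message)
        self.__cause__ = cause

class CredentialError(ClientError):
    """Stand-in for an auth-specific user-facing error."""
    pass

def handle_result(wait_for_completion):
    """Translate pipeline-layer errors into user-facing exceptions."""
    try:
        return wait_for_completion()
    except PipelineUnauthorizedError as e:
        raise CredentialError("Credentials invalid, could not connect", cause=e)
    except Exception as e:
        raise ClientError("Unexpected failure", cause=e)
```

The catch-all `except Exception` keeps unexpected failures inside the same user-facing hierarchy, so callers only ever need to handle `ClientError` and its subclasses.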

Binary data: azure-iot-device/doc/images/azure_iot_sdk_python_banner.png (new file, 15 KiB; binary file not shown)

View file

@@ -11,6 +11,7 @@ This directory contains samples showing how to use the various features of the M
```bash
az iot hub create --resource-group <your resource group> --name <your IoT Hub name>
```
* Note that this operation may take a few minutes.
2. Add the IoT Extension to the Azure CLI, and then [register a device identity](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-create)
@@ -20,14 +21,15 @@ This directory contains samples showing how to use the various features of the M
az iot hub device-identity create --hub-name <your IoT Hub name> --device-id <your device id>
```
2. [Retrieve your Device Connection String](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-show-connection-string) using the Azure CLI
3. [Retrieve your Device Connection String](https://docs.microsoft.com/en-us/cli/azure/ext/azure-cli-iot-ext/iot/hub/device-identity?view=azure-cli-latest#ext-azure-cli-iot-ext-az-iot-hub-device-identity-show-connection-string) using the Azure CLI
```bash
az iot hub device-identity show-connection-string --device-id <your device id> --hub-name <your IoT Hub name>
```
It should be in the format:
```
```Text
HostName=<your IoT Hub name>.azure-devices.net;DeviceId=<your device id>;SharedAccessKey=<some value>
```
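The connection string is a `;`-separated list of `Key=Value` pairs. As an illustration only (the SDK does its own parsing internally), it could be split like this:

```python
def parse_connection_string(conn_str):
    # Split on ';' into parts, then on the FIRST '=' only, since base64
    # values such as SharedAccessKey may themselves end in '='.
    return dict(part.split("=", 1) for part in conn_str.split(";") if part)

# Made-up example values, not real credentials.
example = (
    "HostName=myhub.azure-devices.net;"
    "DeviceId=mydevice;"
    "SharedAccessKey=c2VjcmV0a2V5="
)
fields = parse_connection_string(example)
```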
@@ -39,13 +41,16 @@ This directory contains samples showing how to use the various features of the M
5. On your device, set the Device Connection String as an environment variable called `IOTHUB_DEVICE_CONNECTION_STRING`.
### Windows (cmd)
**Windows (cmd)**
```cmd
set IOTHUB_DEVICE_CONNECTION_STRING=<your connection string here>
```
* Note that there are **NO** quotation marks around the connection string.
### Linux (bash)
**Linux (bash)**
```bash
export IOTHUB_DEVICE_CONNECTION_STRING="<your connection string here>"
```
@@ -56,7 +61,6 @@ This directory contains samples showing how to use the various features of the M
import os
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import auth
async def main():
@@ -94,6 +98,7 @@ This directory contains samples showing how to use the various features of the M
8. Your device is now able to connect to Azure IoT Hub!
## Additional Samples
Further samples with more complex IoT Hub scenarios are contained in the [advanced-hub-scenarios](advanced-hub-scenarios) directory, including:
* Send multiple telemetry messages from a Device
@@ -101,10 +106,10 @@ Further samples with more complex IoT Hub scenarios are contained in the [advanc
* Send and receive updates to device twin
* Receive direct method invocations
Further samples with more complex IoT Edge scnearios are contained in the [advanced-edge-scenarios](advanced-edge-scenarios) directory, including:
Further samples with more complex IoT Edge scenarios are contained in the [advanced-edge-scenarios](advanced-edge-scenarios) directory, including:
* Send multiple telemetry messages from a Module
* Receive input messages on a Module
* Send messages to a Module Output
Samples for the legacy clients, that use a synchronous API, intended for use with Python 2.7, Python 3.4, or compatibility scenarios for Python 3.5+ are contained in the [legacy-samples](legacy-samples) directory.
Samples for the synchronous clients, intended for use with Python 2.7 or compatibility scenarios for Python 3.5+, are contained in the [sync-samples](sync-samples) directory.

View file

@@ -1,59 +0,0 @@
# Advanced IoT Hub Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub.
**These samples are written to run in Python 3.7+**, but can be made to work with Python 3.5 and 3.6 with a slight modification as noted in each sample:
```python
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()
```
## Included Samples
### IoTHub Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
* In order to send an update patch to a device twin's desired properties, use the following Azure CLI command:
```
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```
az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
### DPS Samples
In order to use these samples, you **must** have the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are two ways your device can register with the provisioning service, differing in authentication mechanism; each requires an additional environment variable for the samples:
* [register_symmetric_key.py](register_symmetric_key.py) - Register to provisioning service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [register_x509.py](register_x509.py) - Register to provisioning service using an X509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.

View file

@@ -0,0 +1,43 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# -------------------------------------------------------------------------
import asyncio
import time
import uuid
from azure.iot.device.aio import IoTHubModuleClient
from azure.iot.device import Message
messages_to_send = 10
async def main():
# Inputs/Outputs are only supported in the context of Azure IoT Edge and module client
# The module client object acts as an Azure IoT Edge module and interacts with an Azure IoT Edge hub
module_client = IoTHubModuleClient.create_from_edge_environment()
# Connect the client.
await module_client.connect()
fake_method_params = {
"methodName": "doSomethingInteresting",
"payload": "foo",
"responseTimeoutInSeconds": 5,
"connectTimeoutInSeconds": 2,
}
response = await module_client.invoke_method(
device_id="fakeDeviceId", module_id="fakeModuleId", method_params=fake_method_params
)
print("Method Response: {}".format(response))
# finally, disconnect
await module_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

View file

@@ -0,0 +1,83 @@
# Advanced IoT Hub Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub.
**These samples are written to run in Python 3.7+**, but can be made to work with Python 3.5 and 3.6 with a slight modification as noted in each sample:
```python
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()
```
## Included Samples
### IoTHub Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```bash
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```bash
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
* In order to send an update patch to a device twin's desired properties, use the following Azure CLI command:
```bash
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```bash
az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
### DPS Samples
#### Individual
In order to use these samples, you **must** have the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are two ways your device can register with the provisioning service, differing in authentication mechanism; each requires an additional environment variable for the samples:
* [provision_symmetric_key.py](provision_symmetric_key.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_symmetric_key_and_send_telemetry.py](provision_symmetric_key_and_send_telemetry.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key, then send a telemetry message to IoTHub. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_symmetric_key_with_payload.py](provision_symmetric_key_with_payload.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using a symmetric key while supplying a custom payload. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_x509.py](provision_x509.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using an X509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.
* [provision_x509_and_send_telemetry.py](provision_x509_and_send_telemetry.py) - Provision a device to IoTHub by registering to the Device Provisioning Service using an X509 certificate, then send a telemetry message to IoTHub. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, PASS_PHRASE.
#### Group
In order to use these samples, you **must** have the following environment variables:
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* [provision_symmetric_key_group.py](provision_symmetric_key_group.py) - Provision multiple devices to IoTHub by registering them to the Device Provisioning Service using derived symmetric keys. For this you must have the environment variables PROVISIONING_MASTER_SYMMETRIC_KEY, PROVISIONING_DEVICE_ID_1, PROVISIONING_DEVICE_ID_2, PROVISIONING_DEVICE_ID_3.

View file

@@ -15,7 +15,6 @@ symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY")
async def main():
async def register_device():
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
@@ -23,10 +22,8 @@ async def main():
symmetric_key=symmetric_key,
)
return await provisioning_device_client.register()
registration_result = await provisioning_device_client.register()
results = await asyncio.gather(register_device())
registration_result = results[0]
print("The complete registration result is")
print(registration_result.registration_state)

View file

@@ -0,0 +1,70 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import asyncio
from azure.iot.device.aio import ProvisioningDeviceClient
import os
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import Message
import uuid
messages_to_send = 10
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("PROVISIONING_REGISTRATION_ID")
symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY")
async def main():
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)
if registration_result.status == "assigned":
print("Will send telemetry from the provisioned device")
device_client = IoTHubDeviceClient.create_from_symmetric_key(
symmetric_key=symmetric_key,
hostname=registration_result.registration_state.assigned_hub,
device_id=registration_result.registration_state.device_id,
)
# Connect the client.
await device_client.connect()
async def send_test_message(i):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.correlation_id = "correlation-1234"
msg.custom_properties["count"] = i
msg.custom_properties["tornado-warning"] = "yes"
await device_client.send_message(msg)
print("done sending message #" + str(i))
# send `messages_to_send` messages in parallel
await asyncio.gather(*[send_test_message(i) for i in range(1, messages_to_send + 1)])
# finally, disconnect
await device_client.disconnect()
else:
print("Cannot send telemetry from the provisioned device")
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

View file

@@ -0,0 +1,87 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
import base64
import hmac
import hashlib
from azure.iot.device.aio import ProvisioningDeviceClient
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
# These are the names of the devices that will eventually show up on the IoTHub
device_id_1 = os.getenv("PROVISIONING_DEVICE_ID_1")
device_id_2 = os.getenv("PROVISIONING_DEVICE_ID_2")
device_id_3 = os.getenv("PROVISIONING_DEVICE_ID_3")
# For computation of device keys
device_ids_to_keys = {}
# NOTE : Only for illustration purposes.
# This is how a device key can be derived from the group symmetric key.
# This is just a helper function to show how it is done.
# Please don't directly store the group key on the device.
# Use a method like the one below to compute the device key somewhere other than the device.
def derive_device_key(device_id, group_symmetric_key):
"""
The unique device ID and the group master key should be encoded into "utf-8"
After this the encoded group master key must be used to compute an HMAC-SHA256 of the encoded registration ID.
Finally the result must be converted into Base64 format.
The device key is the "utf-8" decoding of the above result.
"""
message = device_id.encode("utf-8")
signing_key = base64.b64decode(group_symmetric_key.encode("utf-8"))
signed_hmac = hmac.HMAC(signing_key, message, hashlib.sha256)
device_key_encoded = base64.b64encode(signed_hmac.digest())
return device_key_encoded.decode("utf-8")
# derived_device_key has been computed already using the helper function somewhere else
# AND NOT on this sample. Do not use the direct master key on this sample to compute device key.
derived_device_key_1 = "some_value_already_computed"
derived_device_key_2 = "some_value_already_computed"
derived_device_key_3 = "some_value_already_computed"
device_ids_to_keys[device_id_1] = derived_device_key_1
device_ids_to_keys[device_id_2] = derived_device_key_2
device_ids_to_keys[device_id_3] = derived_device_key_3
async def main():
async def register_device(registration_id):
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=device_ids_to_keys[registration_id],
)
return await provisioning_device_client.register()
results = await asyncio.gather(
register_device(device_id_1),
register_device(device_id_2),
register_device(device_id_3),
)
for index in range(0, len(device_ids_to_keys)):
registration_result = results[index]
print("The complete state of registration result is")
print(registration_result.registration_state)
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()
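The derivation in `derive_device_key` uses only the standard library, so it can be exercised on its own. A quick check with a made-up master key (illustration only — never embed a real group key like this):

```python
import base64
import hashlib
import hmac

def derive_device_key(device_id, group_symmetric_key):
    # Same derivation as the sample above: HMAC-SHA256 of the UTF-8 device
    # id, keyed with the base64-decoded group key, re-encoded as base64 text.
    message = device_id.encode("utf-8")
    signing_key = base64.b64decode(group_symmetric_key.encode("utf-8"))
    signed_hmac = hmac.new(signing_key, message, hashlib.sha256)
    return base64.b64encode(signed_hmac.digest()).decode("utf-8")

# Made-up, non-secret master key purely for demonstration.
master_key = base64.b64encode(b"0" * 32).decode("utf-8")
device_keys = {d: derive_device_key(d, master_key) for d in ("dev-1", "dev-2", "dev-3")}
```

Each device id yields a distinct, deterministic 32-byte key, which is what lets the group enrollment sample fan out one master key across many registration ids.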

View file

@@ -0,0 +1,47 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device.aio import ProvisioningDeviceClient
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("PROVISIONING_REGISTRATION_ID_PAYLOAD")
symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY_PAYLOAD")
class Wizard(object):
def __init__(self, first_name, last_name, dict_of_stuff):
self.first_name = first_name
self.last_name = last_name
self.props = dict_of_stuff
async def main():
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
properties = {"House": "Gryffindor", "Muggle-Born": "False"}
wizard_a = Wizard("Harry", "Potter", properties)
provisioning_device_client.provisioning_payload = wizard_a
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()

View file

@@ -3,10 +3,6 @@
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
# This is for illustration purposes only. The sample will not work currently.
import os
import asyncio
from azure.iot.device import X509
@@ -18,7 +14,6 @@ registration_id = os.getenv("DPS_X509_REGISTRATION_ID")
async def main():
async def register_device():
x509 = X509(
cert_file=os.getenv("X509_CERT_FILE"),
key_file=os.getenv("X509_KEY_FILE"),
@@ -31,10 +26,8 @@ async def main():
x509=x509,
)
return await provisioning_device_client.register()
registration_result = await provisioning_device_client.register()
results = await asyncio.gather(register_device())
registration_result = results[0]
print("The complete registration result is")
print(registration_result.registration_state)

View file

@@ -0,0 +1,76 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device import X509
from azure.iot.device.aio import ProvisioningDeviceClient
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import Message
import uuid
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("DPS_X509_REGISTRATION_ID")
messages_to_send = 10
async def main():
x509 = X509(
cert_file=os.getenv("X509_CERT_FILE"),
key_file=os.getenv("X509_KEY_FILE"),
pass_phrase=os.getenv("PASS_PHRASE"),
)
provisioning_device_client = ProvisioningDeviceClient.create_from_x509_certificate(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
x509=x509,
)
registration_result = await provisioning_device_client.register()
print("The complete registration result is")
print(registration_result.registration_state)
if registration_result.status == "assigned":
print("Will send telemetry from the provisioned device")
device_client = IoTHubDeviceClient.create_from_x509_certificate(
x509=x509,
hostname=registration_result.registration_state.assigned_hub,
device_id=registration_result.registration_state.device_id,
)
# Connect the client.
await device_client.connect()
async def send_test_message(i):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.correlation_id = "correlation-1234"
msg.custom_properties["count"] = i
msg.custom_properties["tornado-warning"] = "yes"
await device_client.send_message(msg)
print("done sending message #" + str(i))
# send `messages_to_send` messages in parallel
await asyncio.gather(*[send_test_message(i) for i in range(1, messages_to_send + 1)])
# finally, disconnect
await device_client.disconnect()
else:
print("Can not send telemetry from the provisioned device")
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()


@@ -0,0 +1,35 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient
async def main():
    # Fetch the connection string from an environment variable
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
# Create instance of the device client using the connection string
device_client = IoTHubDeviceClient.create_from_connection_string(conn_str, websockets=True)
# We do not need to call device_client.connect(), since it will be connected when we send a message.
# Send a single message
print("Sending message...")
await device_client.send_message("This is a message that is being sent")
print("Message successfully sent!")
# Finally, we do not need a disconnect. When the program completes, the client will be disconnected and destroyed.
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()


@@ -0,0 +1,55 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import asyncio
import uuid
from azure.iot.device.aio import IoTHubDeviceClient
from azure.iot.device import Message, ProxyOptions
import socks
messages_to_send = 10
async def main():
# The connection string for a device should never be stored in code. For the sake of simplicity we're using an environment variable here.
conn_str = os.getenv("IOTHUB_DEVICE_CONNECTION_STRING")
proxy_opts = ProxyOptions(
proxy_type=socks.HTTP, proxy_addr="127.0.0.1", proxy_port=8888 # localhost
)
# The client object is used to interact with your Azure IoT hub.
device_client = IoTHubDeviceClient.create_from_connection_string(
conn_str, websockets=True, proxy_options=proxy_opts
)
# Connect the client.
await device_client.connect()
async def send_test_message(i):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.correlation_id = "correlation-1234"
msg.custom_properties["tornado-warning"] = "yes"
await device_client.send_message(msg)
print("done sending message #" + str(i))
# send `messages_to_send` messages in parallel
await asyncio.gather(*[send_test_message(i) for i in range(1, messages_to_send + 1)])
# finally, disconnect
await device_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())
# If using Python 3.6 or below, use the following code instead of asyncio.run(main()):
# loop = asyncio.get_event_loop()
# loop.run_until_complete(main())
# loop.close()


@@ -0,0 +1,121 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import uuid
import asyncio
from azure.iot.device.aio import IoTHubDeviceClient, IoTHubModuleClient
from azure.iot.device import X509
import http.client
import pprint
import json
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
"""
Welcome to the Upload to Blob sample. To use this sample you must have azure-storage-blob installed in your Python environment.
To do this, you can run:
$ pip install azure-storage-blob
This sample covers using the following Device Client APIs:
get_storage_info_for_blob
- used to get relevant information from IoT Hub about a linked Storage Account, including
a hostname, a container name, a blob name, and a sas token. Additionally it returns a correlation_id
which is used in the notify_blob_upload_status, since the correlation_id is IoT Hub's way of marking
which blob you are working on.
notify_blob_upload_status
- used to notify IoT Hub of the status of your blob storage operation. This uses the correlation_id obtained
by the get_storage_info_for_blob task, and will tell IoT Hub to notify any service that might be listening for a notification on the
status of the file upload task.
You can learn more about File Upload with IoT Hub here:
https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-file-upload
"""
# Host is in format "<iothub name>.azure-devices.net"
async def storage_blob(blob_info):
try:
print("Azure Blob storage v12 - Python quickstart sample")
sas_url = "https://{}/{}/{}{}".format(
blob_info["hostName"],
blob_info["containerName"],
blob_info["blobName"],
blob_info["sasToken"],
)
blob_client = BlobClient.from_blob_url(sas_url)
# Create a file in local Documents directory to upload and download
local_file_name = "data/quickstart" + str(uuid.uuid4()) + ".txt"
filename = os.path.join(os.path.dirname(os.path.realpath(__file__)), local_file_name)
# Write text to the file
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
        with open(filename, "w") as file:
            file.write("Hello, World!")
print("\nUploading to Azure Storage as blob:\n\t" + local_file_name)
        # Upload the created file
with open(filename, "rb") as f:
result = blob_client.upload_blob(f)
return (None, result)
    except Exception as ex:
        print("Exception:")
        print(ex)
        # Return the same (error, result) shape as the success path above
        return (ex, None)
async def main():
hostname = os.getenv("IOTHUB_HOSTNAME")
device_id = os.getenv("IOTHUB_DEVICE_ID")
x509 = X509(
cert_file=os.getenv("X509_CERT_FILE"),
key_file=os.getenv("X509_KEY_FILE"),
pass_phrase=os.getenv("PASS_PHRASE"),
)
device_client = IoTHubDeviceClient.create_from_x509_certificate(
hostname=hostname, device_id=device_id, x509=x509
)
# device_client = IoTHubModuleClient.create_from_connection_string(conn_str)
# Connect the client.
await device_client.connect()
# get the storage sas
blob_name = "fakeBlobName12"
storage_info = await device_client.get_storage_info_for_blob(blob_name)
    # upload to blob
    storage_blob_result = await storage_blob(storage_info)
    pp = pprint.PrettyPrinter(indent=4)
    pp.pprint(storage_blob_result)
    # notify IoT Hub of the blob upload result
    await device_client.notify_blob_upload_status(
        storage_info["correlationId"], True, 200, "fake status description"
    )
# Finally, disconnect
await device_client.disconnect()
if __name__ == "__main__":
asyncio.run(main())


@@ -1,54 +0,0 @@
# Legacy Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub and Azure IoT Edge.
**These samples are legacy samples**: they use the synchronous API intended for use with Python 2.7 and 3.4, or in compatibility scenarios with later versions. We recommend you use the [asynchronous API](../advanced-hub-scenarios) instead.
## IoTHub Device Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
```bash
az iot hub monitor-events --hub-name <your IoT Hub name> --output table
```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
  * In order to send an update patch of desired properties to a device twin, use the following Azure CLI command:
```
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```
    az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
## IoT Edge Module Samples
In order to use these samples, they **must** be run from inside an Edge container.
* [receive_message_on_input.py](receive_message_on_input.py) - Receive messages sent to an Edge module on a specific module input.
* [send_message_to_output.py](send_message_to_output.py) - Send multiple messages in parallel from an Edge module to a specific output
## DPS Samples
In order to use these samples, you **must** have the following environment variables :-
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are two ways your device can be registered with the provisioning service, differing in authentication mechanism; each sample needs an additional environment variable:-
* [register_symmetric_key.py](register_symmetric_key.py) - Register with the provisioning service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [register_x509.py](register_x509.py) - Register with the provisioning service using an X.509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, and PASS_PHRASE.
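As a quick orientation, the environment for the symmetric-key sample might be set up like this in a bash-like shell (all values below are placeholders to substitute from your own DPS instance; they are not real defaults):

```bash
# Placeholder values -- substitute the ones from your DPS instance
export PROVISIONING_HOST="<your-dps-name>.azure-devices-provisioning.net"
export PROVISIONING_IDSCOPE="<your-id-scope>"
export PROVISIONING_REGISTRATION_ID="<your-registration-id>"
export PROVISIONING_SYMMETRIC_KEY="<your-enrollment-symmetric-key>"

python register_symmetric_key.py
```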


@@ -0,0 +1,75 @@
# Legacy Scenario Samples for the Azure IoT Hub Device SDK
This directory contains samples showing how to use the various features of Azure IoT Hub Device SDK with the Azure IoT Hub and Azure IoT Edge.
**These samples are legacy samples**: they use the synchronous API intended for use with Python 2.7, or in compatibility scenarios with later versions. We recommend you use the [asynchronous API](../advanced-hub-scenarios) instead.
## IoTHub Device Samples
In order to use these samples, you **must** set your Device Connection String in the environment variable `IOTHUB_DEVICE_CONNECTION_STRING`.
* [send_message.py](send_message.py) - Send multiple telemetry messages in parallel from a device to the Azure IoT Hub.
* You can monitor the Azure IoT Hub for messages received by using the following Azure CLI command:
    ```Shell
    az iot hub monitor-events --hub-name <your IoT Hub name> --output table
    ```
* [receive_message.py](receive_message.py) - Receive Cloud-to-Device (C2D) messages sent from the Azure IoT Hub to a device.
* In order to send a C2D message, use the following Azure CLI command:
```Shell
az iot device c2d-message send --device-id <your device id> --hub-name <your IoT Hub name> --data <your message here>
```
* [receive_direct_method.py](receive_direct_method.py) - Receive direct method requests on a device from the Azure IoT Hub and send responses back
* In order to invoke a direct method, use the following Azure CLI command:
```Shell
az iot hub invoke-device-method --device-id <your device id> --hub-name <your IoT Hub name> --method-name <desired method>
```
* [receive_twin_desired_properties_patch](receive_twin_desired_properties_patch.py) - Receive an update patch of changes made to the device twin's desired properties
  * In order to send an update patch of desired properties to a device twin, use the following Azure CLI command:
```Shell
az iot hub device-twin update --device-id <your device id> --hub-name <your IoT Hub name> --set properties.desired.<property name>=<value>
```
* [update_twin_reported_properties](update_twin_reported_properties.py) - Send an update patch of changes to the device twin's reported properties
* You can see the changes reflected in your device twin by using the following Azure CLI command:
```Shell
    az iot hub device-twin show --device-id <your device id> --hub-name <your IoT Hub name>
```
## IoT Edge Module Samples
In order to use these samples, they **must** be run from inside an Edge container.
* [receive_message_on_input.py](receive_message_on_input.py) - Receive messages sent to an Edge module on a specific module input.
* [send_message_to_output.py](send_message_to_output.py) - Send multiple messages in parallel from an Edge module to a specific output
## DPS Samples
### Individual
In order to use these samples, you **must** have the following environment variables :-
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* PROVISIONING_REGISTRATION_ID
There are two ways your device can be registered with the provisioning service, differing in authentication mechanism; each sample needs an additional environment variable:-
* [provision_symmetric_key.py](provision_symmetric_key.py) - Provision a device to IoTHub by registering with the Device Provisioning Service using a symmetric key. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_symmetric_key_with_payload.py](provision_symmetric_key_with_payload.py) - Provision a device to IoTHub by registering with the Device Provisioning Service using a symmetric key while supplying a custom payload. For this you must have the environment variable PROVISIONING_SYMMETRIC_KEY.
* [provision_x509.py](provision_x509.py) - Provision a device to IoTHub by registering with the Device Provisioning Service using an X.509 certificate. For this you must have the environment variables X509_CERT_FILE, X509_KEY_FILE, and PASS_PHRASE.
### Group
In order to use these samples, you **must** have the following environment variables :-
* PROVISIONING_HOST
* PROVISIONING_IDSCOPE
* [provision_symmetric_key_group.py](provision_symmetric_key_group.py) - Provision multiple devices to IoTHub by registering them to the Device Provisioning Service using derived symmetric keys. For this you must have the environment variables PROVISIONING_MASTER_SYMMETRIC_KEY, PROVISIONING_DEVICE_ID_1, PROVISIONING_DEVICE_ID_2, PROVISIONING_DEVICE_ID_3.
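The group sample relies on key derivation: each device in an enrollment group gets its own key, computed by signing the device's registration id with the group master key using HMAC-SHA256 and base64-encoding the digest. A minimal sketch of that computation (the function name is illustrative, not part of the SDK):

```python
import base64
import hashlib
import hmac


def derive_device_key(registration_id, group_symmetric_key):
    # Decode the base64 group master key, sign the registration id with
    # HMAC-SHA256, and re-encode the digest as the per-device key.
    signing_key = base64.b64decode(group_symmetric_key)
    signed = hmac.new(signing_key, registration_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(signed.digest()).decode("utf-8")


master_key = base64.b64encode(b"example master key").decode("utf-8")
print(derive_device_key("mydevice1", master_key))
```

The derived key is what you would pass as the symmetric key when registering that individual device.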


@@ -23,7 +23,7 @@ registration_result = provisioning_device_client.register()
print(registration_result)
# Individual attributes can be seen as well
print("The request_id was :-")
print(registration_result.request_id)
print("The status was :-")
print(registration_result.status)
print("The etag is :-")
print(registration_result.registration_state.etag)


@@ -0,0 +1,62 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
from azure.iot.device import ProvisioningDeviceClient
import os
import time
from azure.iot.device import IoTHubDeviceClient, Message
import uuid
provisioning_host = os.getenv("PROVISIONING_HOST")
id_scope = os.getenv("PROVISIONING_IDSCOPE")
registration_id = os.getenv("PROVISIONING_REGISTRATION_ID")
symmetric_key = os.getenv("PROVISIONING_SYMMETRIC_KEY")
provisioning_device_client = ProvisioningDeviceClient.create_from_symmetric_key(
provisioning_host=provisioning_host,
registration_id=registration_id,
id_scope=id_scope,
symmetric_key=symmetric_key,
)
registration_result = provisioning_device_client.register()
# The result can be directly printed to view the important details.
print(registration_result)
# Individual attributes can be seen as well
print("The request_id was :-")
print(registration_result.request_id)
print("The etag is :-")
print(registration_result.registration_state.etag)
if registration_result.status == "assigned":
print("Will send telemetry from the provisioned device")
# Create device client from the above result
device_client = IoTHubDeviceClient.create_from_symmetric_key(
symmetric_key=symmetric_key,
hostname=registration_result.registration_state.assigned_hub,
device_id=registration_result.registration_state.device_id,
)
# Connect the client.
device_client.connect()
for i in range(1, 6):
print("sending message #" + str(i))
device_client.send_message("test payload message " + str(i))
time.sleep(1)
for i in range(6, 11):
print("sending message #" + str(i))
msg = Message("test wind speed " + str(i))
msg.message_id = uuid.uuid4()
msg.custom_properties["tornado-warning"] = "yes"
device_client.send_message(msg)
time.sleep(1)
# finally, disconnect
device_client.disconnect()
else:
print("Can not send telemetry from the provisioned device")
