* Stages (except the MQTTTransportStage) now always invoke `.send_event_up()` at the _end_ of the `.handle_pipeline_event()` call
* Now that connection state has been improved by previous PRs, we no longer have to work around timing issues by invoking `.send_event_up()` early
* By doing things in the correct order, we will no longer accidentally republish the twin GET that triggers upon connection
* Fixes #994
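The ordering change can be sketched as follows (class names here are illustrative stand-ins, not the SDK's actual stage implementations): each stage does its own processing first and invokes `.send_event_up()` only as the final step, so upper stages always see fully updated state.

```python
class PipelineStage:
    def __init__(self):
        self.previous = None  # the stage above this one in the pipeline

    def send_event_up(self, event):
        if self.previous is not None:
            self.previous.handle_pipeline_event(event)

    def handle_pipeline_event(self, event):
        self.send_event_up(event)


class ExampleStage(PipelineStage):
    def __init__(self):
        super().__init__()
        self.seen = []

    def handle_pipeline_event(self, event):
        # Do this stage's own work first (e.g. update local state)...
        self.seen.append(event)
        # ...and only then, as the very last step, pass the event up.
        self.send_event_up(event)
```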
* Removed the boolean `.connected` attribute being set on the `PipelineNucleus` by the `PipelineRootStage`
* `.connected` is now a property that is derived from the `connection_state` attribute, which is set by the `ConnectionStateStage`
* This is now the single source of truth for the connection state
* Added additional testing fixtures to allow for easy mocking of the connection state
* Note that changes made to the `ConnectionLockStage` are merely a stopgap until the entire stage can be removed
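A minimal sketch of the derived property, assuming illustrative state names (the actual `ConnectionState` values used by the `ConnectionStateStage` may differ):

```python
import enum


class ConnectionState(enum.Enum):
    DISCONNECTED = 0
    CONNECTING = 1
    CONNECTED = 2
    DISCONNECTING = 3


class PipelineNucleus:
    def __init__(self):
        # Set by the ConnectionStateStage; the single source of truth.
        self.connection_state = ConnectionState.DISCONNECTED

    @property
    def connected(self):
        # Derived, never stored separately, so it cannot drift out of sync.
        return self.connection_state is ConnectionState.CONNECTED
```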
* Renamed ReconnectStage to ConnectionStateStage
* Reduced footprint of the connection retry conditional
* Updated tests to account for new approach to connection retry conditional
* Updated logging
- Addresses the KeyError message that appeared when enabling on_twin_desired_properties_patch_received
- Inconsistencies with how connection states are handled will be further addressed
* fix (azure-iot-device) Add MQTT_ERR_KEEPALIVE mapping to fix connection issues caused by new Paho error code
* Add failure documentation to mqtt_transport.py and handle disconnect failures in the connection watchdog
* add tests for disconnect exception in watchdog handler
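A hedged sketch of the mapping (the exception names are illustrative and the numeric constants are placeholders; real code would use the constants exported by `paho.mqtt.client`):

```python
MQTT_ERR_CONN_REFUSED = 5   # placeholder values; real code imports these
MQTT_ERR_CONN_LOST = 7      # constants from paho.mqtt.client
MQTT_ERR_KEEPALIVE = 16


class ConnectionFailedError(Exception):
    pass


class ConnectionDroppedError(Exception):
    pass


class ProtocolClientError(Exception):
    pass


_RC_TO_ERROR = {
    MQTT_ERR_CONN_REFUSED: ConnectionFailedError,
    MQTT_ERR_CONN_LOST: ConnectionDroppedError,
    # Newly mapped: without this entry, the newer Paho keepalive failure
    # code fell through the mapping and broke connection handling.
    MQTT_ERR_KEEPALIVE: ConnectionDroppedError,
}


def error_from_rc(rc):
    # Unknown codes fall back to a generic error instead of crashing.
    return _RC_TO_ERROR.get(rc, ProtocolClientError)("rc: {}".format(rc))
```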
* Added `gateway_hostname` user option to `.create_from_sastoken()`, `.create_from_x509_certificate()` and `.create_from_symmetric_key()` on the Device/Module IoTHub Clients
* Added `gateway_hostname` user option to all factory methods on the Provisioning Clients
* Added a ValueError that is raised if trying to use an X509 Connection String
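The X509 check might look roughly like this (the parsing and the factory function here are simplified stand-ins for the SDK's real logic):

```python
def _parse_connection_string(cs):
    # Very simplified parse; the SDK's real parser does more validation.
    return dict(part.split("=", 1) for part in cs.split(";") if part)


def create_from_connection_string(cs):
    props = _parse_connection_string(cs)
    if props.get("x509", "").lower() == "true":
        raise ValueError(
            "X509 connection strings are not supported; use the "
            "certificate-based factory method instead"
        )
    return props  # stand-in for real client construction
```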
* Fixed an issue where if no client event handlers were set (`on_connection_state_change`, `on_new_sastoken_required` or `on_background_exception`) ClientEvent objects could pile up in inboxes without being resolved
* Any ClientEvents that occur before a handler is set will now be lost; this was always the intended design
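The drop-when-unset behavior can be sketched like so (class and attribute names are illustrative, not the SDK's internals):

```python
class ClientEventInbox:
    def __init__(self):
        self.handler = None   # e.g. on_connection_state_change
        self._queue = []

    def put(self, event):
        if self.handler is None:
            return  # drop: nobody will ever consume it, so don't queue it
        self._queue.append(event)
        self._drain()

    def _drain(self):
        while self._queue:
            self.handler(self._queue.pop(0))
```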
* Disallow install on Python 2.7 and 3.5
* Removed dependencies that are no longer necessary
* Removed `ChainableExceptions` in favor of using built-in `raise from` functionality
* Removed `CallableWeakMethod` as Python 3 garbage collection will be sufficient
* Adjusted error handling where different versions of Python raised different errors
* Simplified `pipeline_thread` logic
* Removed references to 2.7 and 3.5 in documentation and samples
* Moved to Python 3-style `super()` invocations
* Removed custom implementation of `urlencode` in favor of using built-in `urlencode`
* Removed support for universal wheels
* Adjusted unittests that had Python 2.7 or 3.5 specific logic
* Removed Python 2.7 jobs from gate and canary
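For example, the built-in `raise ... from` chaining that replaced `ChainableException` (the `read_config` function is a made-up example, not SDK code):

```python
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # Python 3 sets __cause__ automatically; the traceback renders
        # "The above exception was the direct cause of ..." with no
        # custom chaining helper required.
        raise RuntimeError("Could not load configuration") from e
```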
* Replaced the use of the HTTPSConnection client with the use of the requests library for HTTP operations in the Transport
* Enabled HTTP, SOCKS4 and SOCKS5 proxy support
* Fixed an issue where Content-Length header was not formatted as a string
* Added model-level validation for proxy type on ProxyOptions
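A small sketch of the two fixes together, using a hypothetical helper name: header values are formatted as strings, and proxies are passed to requests as a scheme-to-URL mapping (SOCKS schemes require the `requests[socks]` extra):

```python
def build_request_kwargs(body, proxy_url=None):
    # Content-Length must be a string; before the fix it was passed
    # as an int, which some header handling rejects.
    headers = {"Content-Length": str(len(body))}
    kwargs = {"headers": headers, "data": body}
    if proxy_url:
        # requests expects {"<scheme>": "<proxy URL>"}; e.g.
        # "http://...", "socks4://..." or "socks5://..."
        kwargs["proxies"] = {"http": proxy_url, "https": proxy_url}
    return kwargs
```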
Alarms will now correctly trigger at the specified time, even if the system goes to sleep.
This was already supposed to have been implemented but apparently was not.
Fixed an issue where a sufficiently long SAS token expiry or retry interval could cause an OverflowError due to exceeding the max value supported by a waiting thread
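The usual fix pattern for such overflows is to clamp waits at `threading.TIMEOUT_MAX` (a sketch, not the SDK's exact code):

```python
import threading


def clamped_interval(seconds):
    # Condition.wait() and similar primitives raise OverflowError for
    # timeouts beyond threading.TIMEOUT_MAX. Clamping keeps the wait
    # legal; if the clamped wait expires before the real deadline, the
    # timer simply re-arms and waits again.
    return min(seconds, threading.TIMEOUT_MAX)
```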
* Removed functionality whereby `MQTTTransportStage` would raise a `DisconnectedEvent` on Pub/Sub/Unsub attempts if the `MQTTTransport` raised an exception
* This was unnecessary because the `on_mqtt_disconnected` handler being attached to the `MQTTTransport` would already do this
* Pipeline will now send background exceptions to the user via the on_background_exception handler
* Various fixes and refinements of exception flow in the pipeline
* Background exceptions from threading modules and the handler manager do not go through this handler
* Background exceptions from the pipeline are still also given to the original logging function
* Note that a number of unrelated tests had to be fixed: the old error handling process was allowing them to spuriously pass when they should have failed. With that process removed, they started to fail as they should have all along, and required fixing
* Renamed the internal PipelineTimeoutError to OperationTimeout
* Added a user-facing exception also named OperationTimeout
* Fixed an issue where connection timeouts were incorrectly raising OperationCancelled; they now raise OperationTimeout
* Updated various docs and tests to reflect this
* Fixed a bug where if automatic reauthorization upon SAS refresh failed, it would not attempt it again
* Also fixed a few random typos and removed unnecessary comments
* Auto-reconnect and SAS renewals should no longer cause problems if one or the other is turned off
* Operations that require a connection now fail if one cannot be automatically established
* All user-initiated actions should now return errors in case of failure rather than any kind of indefinite retry
* Created completely separate flows for automatic reconnection vs other connections
* Added more explicitness in conditional logic and improved documentation for clarity
* Adjusted Connect timeout to be 60 seconds to line up with Hub functionality
* Removed dead codepath for retrying Publish operations (not related to the above, but something I came across)
- Enabled the `.on_new_sastoken_required` handler
- Renamed the `SasTokenRenewalStage` to simply `SasTokenStage` (perhaps `SasTokenUpdateStage` would be preferable?)
- Added functionality to the `SasTokenStage` to fire an event indicating a new token is required
- Updated samples to show this new handler
- Also ended up black formatting a bunch of files that weren't black formatted (and have nothing to do with this PR)
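The stage's decision logic might be sketched as follows (the threshold and the return values are assumptions for illustration only):

```python
def action_for_token(seconds_until_expiry, renewable):
    # Illustrative margin: fire the alert this long before expiry.
    ALERT_MARGIN = 120

    if renewable:
        # Renewable tokens: the stage renews them itself, silently.
        return "renew"
    if seconds_until_expiry <= ALERT_MARGIN:
        # Non-renewable tokens near expiry: fire the event that
        # surfaces to the user as on_new_sastoken_required.
        return "notify_user"
    return "wait"
```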
* Reduced duplication of code by moving handler property definitions to the abstract clients
* Reorganized order of definitions for clarity
* Added some missing internal docstrings
* When an unexpected disconnect occurs and connection retry is not enabled, in-flight operations will now be cancelled instead of hanging
* This is most notably in order to fix issues where quota is exceeded, but applies to other situations as well
* Refactored HandlerManagers to differentiate between the existing handlers for received data (now called Receiver Handlers) and the new Client Event Handlers
* Added infrastructure to support Client Event Handlers
* Implemented the .on_connection_state_change handler
* Within the HandlerManagers, support has been added for the upcoming .on_new_sastoken_required and .on_background_exception handlers, but the client and pipeline implementations of these handlers have not yet been added
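A rough sketch of the split (the handler names come from the PR; the internals are illustrative, not the SDK's actual HandlerManager):

```python
class HandlerManager:
    # Handlers for received data (messages, method requests, etc.)
    RECEIVER_HANDLERS = {
        "on_message_received",
        "on_method_request_received",
    }
    # Handlers for client-level events
    CLIENT_EVENT_HANDLERS = {
        "on_connection_state_change",
        "on_new_sastoken_required",   # infrastructure only, per the PR
        "on_background_exception",    # infrastructure only, per the PR
    }

    def __init__(self):
        self.handlers = {}

    def set_handler(self, name, fn):
        if name not in self.RECEIVER_HANDLERS | self.CLIENT_EVENT_HANDLERS:
            raise ValueError("Unknown handler: {}".format(name))
        self.handlers[name] = fn
```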