initial commit for splitting tables from storage to cosmosdb namespace
Srinath Narayanan 2017-10-02 12:12:01 -07:00
Parent 859b0f329a
Commit 7ed82668a7
1055 changed files: 358088 additions and 309 deletions

319
.gitignore vendored

@@ -1,288 +1,59 @@
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.
##
## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore
# Python cache
__pycache__/
*.pyc
# Virtual environment
env/
# PTVS analysis
.ptvs/
# Build results
bin/
obj/
dist/
MANIFEST
# Result of running python setup.py install/pip install -e
RECORD.txt
build/
*.egg-info/
# Test results
TestResults/
# Credentials
testsettings_local.json
settings_real.py
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
x64/
x86/
bld/
[Bb]in/
[Oo]bj/
[Ll]og/
# Visual Studio 2015 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/
# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*
# Windows image file caches
Thumbs.db
ehthumbs.db
# NUNIT
*.VisualState.xml
TestResult.xml
# Folder config file
Desktop.ini
# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c
# Recycle Bin used on file shares
$RECYCLE.BIN/
# .NET Core
project.lock.json
project.fragment.lock.json
artifacts/
**/Properties/launchSettings.json
# Mac desktop service store files
.DS_Store
*_i.c
*_p.c
*_i.h
*.ilk
*.meta
*.obj
*.pch
*.pdb
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc
.idea
src/build
*.iml
/doc/_build
/.vs/config/applicationhost.config
# Chutzpah Test files
_Chutzpah*
# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
*.VC.VC.opendb
# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap
# TFS 2012 Local Workspace
$tf/
# Guidance Automation Toolkit
*.gpState
# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user
# JustCode is a .NET coding add-in
.JustCode
# TeamCity is a build add-in
_TeamCity*
# DotCover is a Code Coverage Tool
*.dotCover
# Visual Studio code coverage results
*.coverage
*.coveragexml
# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*
# MightyMoose
*.mm.*
AutoTest.Net/
# Web workbench (sass)
.sass-cache/
# Installshield output folder
[Ee]xpress/
# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html
# Click-Once directory
publish/
# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# TODO: Comment the next line if you want to checkin your web deploy settings
# but database connection strings (with potential passwords) will be unencrypted
# Azure deployment credentials
*.pubxml
*.publishproj
# Microsoft Azure Web App publish settings. Comment the next line if you want to
# checkin your Azure Web App publish settings, but sensitive information contained
# in these scripts will be unencrypted
PublishScripts/
# NuGet Packages
*.nupkg
# The packages folder can be ignored because of Package Restore
**/packages/*
# except build/, which is used as an MSBuild target.
!**/packages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/packages/repositories.config
# NuGet v3's project.json files produces more ignorable files
*.nuget.props
*.nuget.targets
# Microsoft Azure Build Output
csx/
*.build.csdef
# Microsoft Azure Emulator
ecf/
rcf/
# Windows Store app package directories and files
AppPackages/
BundleArtifacts/
Package.StoreAssociation.xml
_pkginfo.txt
# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/
# Others
ClientBin/
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.jfm
*.pfx
*.publishsettings
orleans.codegen.cs
# Since there are multiple workflows, uncomment next line to ignore bower_components
# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
#bower_components/
# RIA/Silverlight projects
Generated_Code/
# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm
# SQL Server files
*.mdf
*.ldf
*.ndf
# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings
# Microsoft Fakes
FakesAssemblies/
# GhostDoc plugin setting file
*.GhostDoc.xml
# Node.js Tools for Visual Studio
.ntvs_analysis.dat
node_modules/
# Typescript v1 declaration files
typings/
# Visual Studio 6 build log
*.plg
# Visual Studio 6 workspace options file
*.opt
# Visual Studio 6 auto-generated workspace file (contains which files were open etc.)
*.vbw
# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions
# Paket dependency manager
.paket/paket.exe
paket-files/
# FAKE - F# Make
.fake/
# JetBrains Rider
.idea/
*.sln.iml
# CodeRush
.cr/
# Python Tools for Visual Studio (PTVS)
__pycache__/
*.pyc
# Cake - Uncomment if you are using it
# tools/**
# !tools/packages.config
# Telerik's JustMock configuration file
*.jmconfig
# BizTalk build output
*.btp.cs
*.btm.cs
*.odx.cs
*.xsd.cs
# code coverage
.coverage
.cache/

109
BreakingChanges.md Normal file

@@ -0,0 +1,109 @@
# Breaking Changes
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.
**Note: This changelog is deprecated starting with version XX.XX.XX; please refer to the ChangeLog.md in each package for future change logs.**
## Version XX.XX.XX:
### All:
- The library has been split into 4 different packages:
  - azure-storage-blob
  - azure-storage-file
  - azure-storage-queue
  - azure-storage-table
- The package `azure-storage` is now deprecated.
- The classes that were directly under azure.storage, not under azure.storage.* (blob, file, queue, table), are now under azure.storage.common.
  - Example: azure.storage.retry becomes azure.storage.common.retry
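A minimal sketch of the import change (ExponentialRetry is just an illustrative member of the retry module):

```python
# Before the split (azure-storage package):
from azure.storage.retry import ExponentialRetry

# After the split (the common module shared by the service packages):
from azure.storage.common.retry import ExponentialRetry
```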
## Version 0.34.0:
### All:
- Several error messages have been clarified or made more specific.
### Blob:
- If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
### Queue:
- For put_message a QueueMessage will be returned. This message will have pop receipt, insertion/expiration time, and message ID populated.
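A minimal sketch of the new return value (account credentials and the queue name are placeholders):

```python
from azure.storage.queue import QueueService

queue_service = QueueService(account_name='myaccount', account_key='<key>')

# put_message now returns the created QueueMessage instead of nothing.
msg = queue_service.put_message('myqueue', u'hello world')
print(msg.id, msg.pop_receipt)                  # populated by the service
print(msg.insertion_time, msg.expiration_time)  # parsed as dates
```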
## Version 0.33.0:
- Remove with_filter from service client in favor of the newer callback functions.
- Remove max_retries and retry_wait from the blob and file create and get functions in favor of the new client-level retry policies.
## Version 0.32.0:
### All:
- Added a dependency on the cryptography library.
### Blob:
- get_blob_to_* will do an initial get request of size 32 MB. If it then finds the blob is larger than this size, it will parallelize by default.
- Block blob and page blob create_blob_from_* methods will parallelize by default.
### File:
- get_file_to_* will do an initial get request of size 32 MB. If it then finds the file is larger than this size, it will parallelize by default.
- create_file_from_* methods will parallelize by default.
## Version 0.30.0
### All:
- UserAgent string has changed to conform to the Azure Storage standard.
### Shared Access Signatures (SAS) and ACL
- set and get acl methods take and return dictionaries mapping an id to an AccessPolicy object rather than a SignedIdentifiers object.
- generate_shared_access_signature methods were renamed to indicate the object for which they were creating the sas. For example, generate_shared_access_signature for containers became generate_container_shared_access_signature.
- generate_*_shared_access_signature methods take permission, expiry, start and id directly rather than as part of a SharedAccessPolicy object.
- *SharedAccessPermissions objects were renamed to *Permissions. For example, TableSharedAccessPermissions became TablePermissions.
- SharedAccessSignature was completely redone and contains methods for individual services rather than one method for all. For example, generate_file to create a file shared access token.
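A sketch of the renamed, per-object SAS generation (account credentials and names are placeholders):

```python
from datetime import datetime, timedelta
from azure.storage.blob import BlockBlobService, ContainerPermissions

service = BlockBlobService(account_name='myaccount', account_key='<key>')

# permission, expiry, start and id are now direct parameters rather than
# fields of a SharedAccessPolicy object.
sas_token = service.generate_container_shared_access_signature(
    'mycontainer',
    permission=ContainerPermissions.READ,
    expiry=datetime.utcnow() + timedelta(hours=1))
```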
### Table:
- Entity insert, update, merge, delete, insert_or_replace and insert_or_merge operations do not take a content_type parameter.
- Entity update, merge, insert_or_replace and insert_or_merge operations do not take partition_key or row_key parameters. These are still required to be part of the entity parameter as before.
- insert_entity returns the entity's etag rather than returning the entire entity.
- Entity update, merge, insert_or_replace and insert_or_merge operations return the etag directly as a string rather than returning a dictionary containing the etag.
- Operations which return entities (get_entity, query_entities) will return Edm.Int64 properties as plain Python ints and Edm.Int32 properties as EntityProperty objects.
- All table entity integer values are stored on the service with type Edm.Int64 unless the type is explicitly overridden as Edm.Int32.
- Table batches are constructed using the Batch class rather than turning batching on and off via the TableService. The TableService can then execute these batches using commit_batch(table_name, batch). TableService no longer contains begin_batch or cancel_batch methods, and commit_batch works differently and takes different parameters.
- Table sas generation requires start/end pk/rk to be specified as direct parameters to the method rather than as part of an AccessPolicy.
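A sketch of the new batch flow (the batch class in the released library is named TableBatch; credentials and names are placeholders):

```python
from azure.storage.table import TableService, TableBatch

table_service = TableService(account_name='myaccount', account_key='<key>')

batch = TableBatch()
batch.insert_entity({'PartitionKey': 'pk', 'RowKey': '1', 'value': 1})
batch.insert_entity({'PartitionKey': 'pk', 'RowKey': '2', 'value': 2})

# Batches are executed through the service instead of begin/cancel calls.
table_service.commit_batch('mytable', batch)
```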
### Blob:
- Separated lease_container and lease_blob into unique methods for each lease action.
- Refactored the blob service into a block blob and page blob service.
- Renamed APIs and params:
  - All x_ms(_blob) prefixes and duplicate headers removed.
  - x_ms_range => byte_range for applicable APIs.
  - maxresults => max_results for applicable APIs.
  - For append blobs and page blobs: put_blob => create_blob. For block blobs: put_blob => _put_blob.
  - x_ms_blob_condition_maxsize => maxsize_condition for append blob APIs.
  - x_ms_blob_condition_appendpos => appendpos_condition for append blob APIs.
  - text_encoding => encoding for applicable APIs.
  - put_blob_from* => create_blob_from* for page and block blobs.
  - x_ms_blob_content_md5 => transactional_content_md5 for put_block_list.
  - blocklisttype => block_list_type for get_block_list.
  - blockid => block_id for put_block.
  - get_blob_to_file => get_blob_to_stream. get_blob => _get_blob (use get_blob_to_bytes instead).
  - All get_blob* APIs use a boolean value for the range_get_content_md5 parameter.
  - All get_blob* APIs take start_range and end_range params instead of byte_range.
- Client-side validation added for ranges used in APIs.
- Changed models for better usability:
  - Blob & BlobResult classes have been joined.
  - ContainerEnumResults => list of Container objects. Properties => ContainerProperties.
  - BlobEnumResults => list of Blob objects.
  - BlobBlock objects are used for specifying information for blocks passed to put_block_list.
  - PageList => list of PageRange objects.
  - get_blob_properties returns a Blob object with the metadata and properties variables filled in. All get_blob_to_* APIs now return a Blob object.
- ContentSettings objects have replaced all content_* and cache_control params for applicable APIs. Create a ContentSettings object with those params and pass it to APIs instead (see the sketch after this list).
- list_blobs no longer exposes prefix, marker, max_results, or delimiter.
- Single-threaded blob download APIs will now download the blob without chunking to improve perf.
- get_blob_to_* progress_callback may receive None for its total parameter when parallelism is off to allow a perf optimization.
- Metadata returned for blobs and containers will be returned without the 'x-ms-meta' prefix on the keys. Namely, metadata will be returned as it is received.
- get_container_properties and get_blob_properties return parsed Container and Blob objects, respectively, instead of string header dictionaries.
- copy_blob returns a parsed CopyProperties object instead of a string header dictionary.
- acquire and renew lease calls return the lease id, break lease returns the remaining lease time, and change and release lease return nothing instead of string header dictionaries.
- snapshot_blob returns a Blob object with the name, snapshot, etag and LMT properties populated instead of a string header dictionary.
- PageBlob put_page API is split into update_page and clear_page instead of being passed a flag to indicate the behavior.
- An error is thrown immediately if parallel operations are attempted with a non-seekable stream rather than being thrown later.
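A sketch of the ContentSettings change mentioned above (credentials and names are placeholders):

```python
from azure.storage.blob import BlockBlobService, ContentSettings

service = BlockBlobService(account_name='myaccount', account_key='<key>')

# content_type and cache_control are no longer individual parameters; they
# travel together in a single ContentSettings object.
service.create_blob_from_bytes(
    'mycontainer', 'myblob', b'hello',
    content_settings=ContentSettings(content_type='text/plain',
                                     cache_control='max-age=3600'))
```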
### Queue:
- The list_queues operation returns a sequence of Queue objects. The sequence returned has a single attribute, next_marker. Queue objects contain a name and metadata element. The metadata is returned as a dictionary rather than an object.
- The peek_messages and get_messages operations return a list of QueueMessage objects. QueueMessage objects contain the same fields as previously, but insertion_time, expiration_time, and time_next_visible are returned as UTC dates rather than strings.
- Renamed params: maxresults => max_results for list_queues.
- update_message takes message_text as an optional parameter. This changes the parameter ordering.
- Encoding and decoding functions default to xml encoding and decoding. Previously messages were only xml encoded but not decoded.
- Metadata returned for queues will be returned without the 'x-ms-meta' prefix on the keys. Namely, metadata will be returned as it is received.
- get_queue_metadata returns a metadata dict with an approximate_message_count property as an int.
- update_message returns a QueueMessage object with pop receipt and time next visible (parsed as a date) populated rather than a header dictionary.
### File:
- Renamed APIs and params:
  - All x_ms prefixes have been removed.
  - x_ms_range => byte_range for applicable APIs.
  - maxresults => max_results for applicable APIs.
  - x_ms_meta_name_values => metadata for applicable APIs.
  - text_encoding => encoding for applicable APIs.
  - get_file => _get_file (use get_file_to_bytes instead); get_file is now internal since the get_file_to_* APIs make it somewhat redundant.
  - All get_file* APIs use a boolean value for the range_get_content_md5 parameter.
  - All APIs that took byte_range now take start_range and end_range params.
- Client-side validation added for ranges used in APIs.
- Added sas_token parameter to FileService constructor before the connection_string param. Added quota parameter to create_share before the fail_on_exist param.
- Changed list_ranges to return the list of file ranges directly rather than nested within a RangeList object.
- ContentSettings objects have replaced all content_* and cache_control params for applicable APIs. Create a ContentSettings object with those params and pass it to APIs instead.
- Single-threaded file download APIs will now download the file without chunking to improve perf.
- Combined models for File & FileResult for better usability. get_file_properties returns a File object with the metadata and properties variables filled in. All get_file_to_* APIs now return a File object.
- list_directories_and_files no longer exposes marker or max_results.
- get_file_to_* progress_callback may receive None for its total parameter when parallelism is off to allow a perf optimization.
- Metadata returned for shares, directories, and files will be returned without the 'x-ms-meta' prefix on the keys. Namely, metadata will be returned as it is received.
- get_share_properties, get_directory_properties, and get_file_properties return parsed Share, Directory, and File objects, respectively, instead of string header dictionaries.
- copy_file returns a parsed CopyProperties object instead of a string header dictionary.

49
CONTRIBUTING.md Normal file

@@ -0,0 +1,49 @@
If you intend to contribute to the project, please make sure you've followed the instructions provided in the [Azure Projects Contribution Guidelines](http://azure.github.io/guidelines/).
## Project Setup
The Azure Storage development team uses Visual Studio, so these instructions are tailored to that preference. However, any preferred IDE or other toolset should be usable.
### Install
* Python 2.7, 3.3, 3.4, or 3.5
* Visual Studio 2013 or 2015.
* [Python Tools for Visual Studio](https://www.visualstudio.com/en-us/features/python-vs.aspx)
* Clone the source code from GitHub and then run `python setup.py install` from the azure-storage-python folder.
### Open Solution
Open the project from VS using File->Open->Project/Solution and navigating to the azure-storage-python.sln solution file in the repo base folder.
## Tests
### Configuration
The only step to configure testing is to add a settings_real.py file to the Test folder. You should insert your storage account information into the file using [this](Test/settings_fake.py) as a template.
### Running
To actually run tests, right click the individual test or test class in the Test Explorer panel.
### Testing Features
As you develop a feature, you'll need to write tests to ensure quality. You should also run existing tests related to your change to address any unexpected breaks.
## Pull Requests
### Guidelines
The following are the minimum requirements for any pull request that must be met before contributions can be accepted.
* Make sure you've signed the CLA before you start working on any change.
* Discuss any proposed contribution with the team via a GitHub issue **before** starting development.
* Code must be professional quality
  * No style issues
  * You should strive to mimic the style with which we have written the library
  * Clean, well-commented, well-designed code
* Try to limit the number of commits for a feature to 1-2. If you end up having too many we may ask you to squash your changes into fewer commits.
* [ChangeLog.md](ChangeLog.md) needs to be updated describing the new change
* Thoroughly test your feature
### Branching Policy
Changes should be based on the **dev** branch, not master as master is considered publicly released code. Each breaking change should be recorded in [BreakingChanges.md](BreakingChanges.md).
### Adding Features for All Platforms
We strive to release each new feature for each of our environments at the same time. Therefore, we ask that all contributions be written for both Python 2.7 and 3.3+ (you can validate 3.3, 3.4, and 3.5 with just one of the versions since they are so similar). We recommend writing in Python 3.x first, and then back porting to 2.7. This is much easier than the other direction.
### Review Process
We expect all guidelines to be met before accepting a pull request. As such, we will work with you to address issues we find by leaving comments in your code. Please understand that it may take a few iterations before the code is accepted as we maintain high standards on code quality. Once we feel comfortable with a contribution, we will validate the change and accept the pull request.
Thank you for any contributions! Please let the team know if you have any questions or concerns about our contribution policy.

228
ChangeLog.md Normal file

@@ -0,0 +1,228 @@
# Change Log
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.
**Note: This changelog is deprecated starting with version XX.XX.XX; please refer to the ChangeLog.md in each package for future change logs.**
## Version XX.XX.XX:
### All:
- Added logging to the library; the logger name is 'azure.storage'. Users must add handlers to the logger to output logs (see the sketch after this list).
- Secondary endpoints may now be specified in connection strings, provided the corresponding primary endpoint is specified. See the connection string documentation for details.
- The library has been split into 4 different packages:
  - azure-storage-blob
  - azure-storage-file
  - azure-storage-queue
  - azure-storage-table
- The package `azure-storage` is now deprecated.
- The classes that were directly under azure.storage, not under azure.storage.* (blob, file, queue, table), are now under azure.storage.common.
  - Example: azure.storage.retry becomes azure.storage.common.retry
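A minimal sketch of wiring up the new 'azure.storage' logger mentioned above:

```python
import logging
import sys

# The library logs under the 'azure.storage' logger; nothing is emitted
# until a handler is attached.
logger = logging.getLogger('azure.storage')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))
```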
## Version 0.36.0:
### Blob:
- For Standard Storage Accounts only, added support for getting and setting the tier on an individual block blob.
### Table:
- Fixed bug where inserting an entity with an empty string as RowKey or PartitionKey raised an error.
## Version 0.35.1:
### Blob:
- Fixed bug where calling create_from_* and append_blob_from_* methods with no data failed.
## Version 0.35.0:
### All:
- Support for 2017-04-17 REST version. Please see our REST API documentation and blogs for information about the related added features. If you are using the Storage Emulator, please update to Emulator version 5.2.
- Fixed a bug where deserialization of service stats throws a TypeError when the service is unavailable.
### Blob:
- For Premium Accounts only, added support for getting and setting the tier on a page blob. The tier can also be set when creating or copying from an existing page blob.
- create_from_* and append_blob_from_* methods will return response_properties, which contains the etag and last modified time.
### Table:
- Fixed syntax error in _convert_json_response_to_entities.
- Fixed a bug where the URLs were not correctly formed when calling commit_batch against the emulator.
### File:
- The `server_encrypted` file property will now be populated when calling `get_directory_properties`, `get_file`, and `get_file_properties`. This value is set to True if the file data (for files) and application metadata are completely encrypted.
## Version 0.34.3:
- All: Made the socket timeout configurable. Increased the default socket timeout to 20 seconds.
- All: Fixed a bug where SAS tokens were being duplicated on retries
## Version 0.34.2:
### All:
- Updated the azure namespace packaging system.
## Version 0.34.1:
### Blob:
- Fixed a bug where downloading the snapshot of a blob will fail in some cases.
## Version 0.34.0:
### All:
- Support for 2016-05-31 REST version. Please see our REST API documentation and blogs for information about the related added features. If you are using the Storage Emulator, please update to Emulator version 4.6.
- Several error messages have been clarified or made more specific.
### Blob:
- Added support for server-side encryption headers.
- Properly return connections to pool when checking for non-existent blobs.
- Fixed a bug with parallel uploads for PageBlobs and BlockBlobs where chunks were being buffered and queued faster than can be processed, potentially causing out-of-memory issues.
- Added large block blob upload support. Blocks can now support sizes up to 100 MB and thus the maximum size of a BlockBlob is now 5,000,000 MB (~4.75 TB).
- Added streaming upload support for the put_block method and a new memory optimized upload algorithm for create_blob_from_stream and create_blob_from_file APIs. (BlockBlobService)
- The new upload strategy will no longer fully buffer seekable streams unless Encryption is enabled. See 'use_byte_buffer' parameter documentation on the 'create_blob_from_stream' method for more details.
- Fixed a deserialization bug with get_block_list() where calling it with anything but the 'all' block_list_type would cause an error.
- Using If-None-Match: * will now fail when reading a blob. Previously this header was ignored for blob reads.
- Populate public access when listing blob containers.
- The public access setting on a blob container is now a container property returned from downloadProperties.
- Populate content MD5 for range gets on blobs.
- Added support for incremental copy on page blobs. The source must be a snapshot of a page blob and include a SAS token.
### File:
- Prefix support for listing files and directories.
- Populate content MD5 for range gets on files.
### Queue:
- put_message now returns a QueueMessage with the PopReceipt, Id, NextVisibleTime, InsertionTime, and ExpirationTime properties populated along with the content.
## Version 0.33.0:
### All:
- Remove with_filter from service client in favor of the newer callback functions.
- Fixed a bug where empty signed identifiers could not be parsed.
- Improved the error message returned when too many signed identifiers are provided.
- Added support for automatic retries. A retry function taking a RetryContext object and returning a retry wait time (or None for no retry) may be set on the service client. The default retry has an exponential back-off and is defined in the retry class.
- Added support for reading from secondary. Note that this only applies for RA-GRS accounts. If the client location_mode is set to LocationMode.SECONDARY, read requests which may be sent to secondary will be.
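A sketch of the two client-level settings above, assuming an RA-GRS account and the 0.33-era module layout (credentials are placeholders):

```python
from azure.storage.blob import BlockBlobService
from azure.storage.models import LocationMode
from azure.storage.retry import ExponentialRetry

service = BlockBlobService(account_name='myaccount', account_key='<key>')

# A retry function takes a RetryContext and returns a wait time or None.
service.retry = ExponentialRetry(max_attempts=3).retry

# Let read requests fall back to the secondary endpoint (RA-GRS only).
service.location_mode = LocationMode.SECONDARY
```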
### Blob:
- Client-side encryption. Allows a user to encrypt entire blobs (not individual blocks) before uploading them by providing an encryption policy. See ~samples.blob.encryption_usage.py for samples.
### Table:
- Fixed a bug with Table Entity where EDM bound checks would not allow for full resolution of 32/64-bit values.
- Client-side encryption. Allows a user to encrypt specified properties on an entity before uploading them by providing an encryption policy. See ~samples.table.encryption_usage.py for samples.
### Queue:
- Client-side encryption. Allows a user to encrypt queue messages before uploading them by specifying fields on the queue service. See ~samples.queue.encryption_usage.py for samples.
## Version 0.32.0:
### All:
- request_callback and response_callback functions may be set on the service clients. These callbacks will be run before the request is executed and after the response is received, respectively. They may be used to add custom headers to the request and for logging, among other purposes.
- A client request id is added to requests by default.
### Blob:
- Get requests taking the start_range parameter incorrectly sent an x-ms-range header when start_range was not specified.
- get_blob_to_* will do an initial get request of size 32 MB. If it then finds the blob is larger than this size, it will parallelize by default.
- Block blob and page blob create_blob_from_* methods will parallelize by default.
- The validate_content option on get_blob_to_* and on methods which put blob data will compute and validate an md5 hash of the content if set to True. This is primarily valuable for detecting bitflips on the wire when using http instead of https, as https (the default) already validates.
- Fixed a bug where lease_id was not specified if given by the user for each chunk on parallel get requests.
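A sketch of the validate_content option described above (credentials and names are placeholders):

```python
from azure.storage.blob import BlockBlobService

service = BlockBlobService(account_name='myaccount', account_key='<key>')

# Computes an MD5 per request and checks it against the returned header;
# mainly useful over plain http, since https already protects integrity.
blob = service.get_blob_to_bytes('mycontainer', 'myblob',
                                 validate_content=True)
print(len(blob.content))
```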
### File:
- Get requests taking the start_range parameter incorrectly sent an x-ms-range header when start_range was not specified.
- get_file_to_* will do an initial get request of size 32 MB. If it then finds the file is larger than this size, it will parallelize by default.
- create_file_from_* methods will parallelize by default.
- The validate_content option on get_file_to_* and create_file_from_* will compute and validate an md5 hash of the content if set to True. This is primarily valuable for detecting bitflips on the wire when using http instead of https, as https (the default) already validates.
## Version 0.31.0:
### All:
- Support for 2015-07-08 REST version. Please see our REST API documentation and blogs for information about the related added features.
- ListGenerator extends Iterable
- Added get_*_service_stats APIs to retrieve statistics related to replication for read-access geo-redundant storage accounts.
- Fixed a bug where custom endpoints with a trailing slash were not handled correctly.
### Blob:
- Diffing support has been added to the get_page_range API which facilitates finding different page ranges between a previous snapshot and newer snapshot (or current Page Blob).
### Table:
- Fixed a bug in table SAS generation where table names with capital letters were not signed correctly.
- Fixed a bug where list_tables did not parse continuation tokens correctly.
### Queue:
- QueueMessage dequeue_count was documented as and intended to be an int but was instead returned as a string. Changed it to be an int.
## Version 0.30.0:
### All:
- Support for 2015-04-05 REST version. Please see our REST API documentation and blogs for information about the related added features.
- UserAgent string has changed to conform to the Azure Storage standard.
- Added optional timeout parameter to all APIs.
- Empty headers are signed.
- Exceptions produced after request construction and before request parsing (i.e., connection or HTTP exceptions) are always wrapped as type AzureException.
### Shared Access Signatures (SAS) and ACL
- Added support for Account SAS. See CloudStorageAccount.generateSharedAccessSignature and the generate_account_shared_access_signature methods on each service.
- Added support for protocol (HTTP/HTTPS) and IP restrictions on the SAS token.
- Created instantiable objects for the shared access Permissions classes to simplify specifying more than one permission.
- set and get acl methods take and return dictionaries mapping an id to an AccessPolicy object rather than a SignedIdentifiers object.
- generate_shared_access_signature methods take permission, expiry, start and id directly rather than as part of a SharedAccessPolicy object.
- generate_signed_query_string on SharedAccessSignature takes permission, expiry, start and id directly rather than as part of a SharedAccessPolicy object.
- expiry and start, whether as part of AccessPolicy or params in generateSharedAccessSignature, may be given as UTC date objects or as strings.
### Table:
- Simplified tableservice *_entity functions by removing partition_key, row_key, and content_type parameters where possible.
- tableservice *_entity functions that returned dictionaries instead return the etag.
- tableservice insert_entity and create_table operations no longer echo content from the service, improving performance.
- tableservice uses json instead of atompub, improving performance.
- Accept type can be specified for the tableservice get_entity and query_entities functions. Minimal metadata is the default; no metadata can be used to reduce the payload size, but it will not return the Edm type for the entity properties. For inferable property types like string, boolean, int32 and double, the return values will be the same. For binary, guid, int64 and datetime values, simple strings will be returned. A property resolver delegate can be specified if you would like to set the Edm type manually for these entity properties (see the sketch after this list). The library will use the Edm type returned by the delegate to cast the entity property value appropriately before adding it to the returned entity dictionary.
- All table entity integer values are stored on the service with type Edm.Int64 unless the type is explicitly overridden as Edm.Int32.
- Table Entity class extends dict but also allows property access as if it were an object to allow more flexible usage.
- Table batches are constructed using the Batch class rather than turning batching on and off via the TableService. The TableService can then execute these batches using commit_batch(table_name, batch).
- Table sas generation requires start/end pk/rk to be specified as direct parameters to the method rather than as part of an AccessPolicy.
- Added exists method to check table existence.
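A sketch of the property resolver delegate referenced above (the 'big' property rule is made up for illustration; credentials and names are placeholders):

```python
from azure.storage.table import TableService
from azure.storage.table.models import EdmType

table_service = TableService(account_name='myaccount', account_key='<key>')

def resolver(partition_key, row_key, property_name, property_value, edm_type):
    # Illustrative rule: force 'big' to come back as a 64-bit integer.
    if property_name == 'big':
        return EdmType.INT64
    return edm_type

entity = table_service.get_entity(
    'mytable', 'pk', 'rk', property_resolver=resolver)
```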
### Blob:
- Added snapshot support for the get_blob_properties API.
- Separated lease_container and lease_blob into unique methods for each lease action.
- Added access condition support for all applicable APIs.
- Refactored the blob service into a block blob and page blob service.
- Added Append Blob support.
- Renamed some APIs and parameters for better readability and less redundancy.
- Changed models for better usability.
- ContentSettings objects have replaced all content_* and cache_control params for applicable APIs. Create a ContentSettings object with those params and pass it to APIs instead.
- list_blobs no longer exposes prefix, marker, max_results, or delimiter.
- resize and set_sequence_number APIs have been added for Page Blob. It is not possible to make these changes with set_blob_properties.
- Single-threaded blob download APIs will now download the blob without chunking to improve perf.
- Allow '?' as part of blob names.
- get_blob_to_* progress_callback may receive None for its total parameter when parallelism is off to allow a perf optimization.
- Added exists method to check container or blob existence.
- Client-side validation added for ranges used in APIs.
- Metadata returned for blobs and containers will be returned without the 'x-ms-meta' prefix on the keys. Namely, metadata will be returned as it is received.
- get_container_properties and get_blob_properties return parsed Container and Blob objects, respectively, instead of string header dictionaries.
- copy_blob returns a parsed CopyProperties object instead of a string header dictionary.
- acquire and renew lease calls return the lease id, break lease returns the remaining lease time, and change and release lease return nothing instead of string header dictionaries.
- snapshot_blob returns a Blob object with the name, snapshot, etag and LMT properties populated instead of a string header dictionary.
- PageBlob put_page API is split into update_page and clear_page instead of being passed a flag to indicate the behavior.
- An error is thrown immediately if parallel operations are attempted with a non-seekable stream rather than being thrown later.
- get_container_acl returns a public_access property attached to the returned ACL dictionary.
- Blob uploads which fail no longer commit an empty blob.
### Queue:
- The list_queues operation returns a list of Queue objects. The list returned has a single attribute, next_marker. Queue objects contain a name and metadata element. The metadata is returned as a dictionary rather than an object.
- The peek_messages and get_messages operations return a list of QueueMessage objects. QueueMessage objects contain the same fields as previously, but insertion_time, expiration_time, and time_next_visible are returned as UTC dates rather than strings.
- update_message takes message_text as an optional parameter. This changes the parameter ordering.
- create_queue and set_queue_metadata APIs take metadata rather than x_ms_meta_name_values.
- Added encode_function and decode_function properties to the queue service to allow users to specify custom encoding and decoding of queue messages.
- Encoding and decoding functions default to xml encoding and decoding. Previously messages were only xml encoded but not decoded.
- Added exists method to check queue existence.
- Metadata returned for queues will be returned without the 'x-ms-meta' prefix on the keys. Namely, metadata will be returned as it is received.
- get_queue_metadata returns a metadata dict with an approximate_message_count property as an int.
- update_message returns a QueueMessage object with pop receipt and time next visible (parsed as a date) populated rather than a header dictionary.
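A sketch of the new encoding hooks, assuming the QueueMessageFormat helpers in azure.storage.queue.models (credentials are placeholders):

```python
from azure.storage.queue import QueueService
from azure.storage.queue.models import QueueMessageFormat

queue_service = QueueService(account_name='myaccount', account_key='<key>')

# Swap the default xml encode/decode pair for base64 handling of bytes.
queue_service.encode_function = QueueMessageFormat.binary_base64encode
queue_service.decode_function = QueueMessageFormat.binary_base64decode
```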
### File:
- Renamed some APIs and parameters for better readability and less redundancy.
- Added new file features including support for SAS and ACL, share usage stats, directory metadata, async server side file copy, and share quota.
- ContentSettings objects have replaced all content_* and cache_control params for applicable APIs. Create a ContentSettings object with those params and pass it to APIs instead.
- Single-threaded file download APIs will now download the file without chunking to improve perf.
- Combined models for File & FileResult for better usability. get_file_properties returns both a FileProperties object and a metadata dict.
- list_directories_and_files no longer exposes marker or max_results.
- get_file_to_* progress_callback may receive None for its total parameter when parallelism is off to allow a perf optimization.
- Added exists method to check share, directory, or file existence.
- Client-side validation added for ranges used in APIs.
- Metadata returned for shares, directories, and files will be returned without the 'x-ms-meta' prefix on the keys. Namely, metadata will be returned as it is received.
- get_share_properties, get_directory_properties, and get_file_properties return parsed Share, Directory, and File objects, respectively, instead of string header dictionaries.
- copy_file returns a parsed CopyProperties object instead of a string header dictionary.

21
LICENSE

@@ -1,21 +0,0 @@
MIT License
Copyright (c) Microsoft Corporation. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

176
LICENSE.txt Normal file

@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS


@@ -1,14 +0,0 @@
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

10
README.rst Normal file

@@ -0,0 +1,10 @@
Microsoft Azure CosmosDB SDK for Python
========================================
This project provides a client library in Python that makes it easy to
consume Microsoft Azure CosmosDB services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ Page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.


@@ -0,0 +1 @@
include *.rst


@@ -0,0 +1,11 @@
Microsoft Azure CosmosDB SDK for Python
========================================
This is the Microsoft Azure CosmosDB namespace package.
This package is not intended to be installed directly by the end user.
It provides the necessary files for other packages to extend the azure.cosmosdb namespace.
If you are looking to install the Azure CosmosDB libraries, see the
`azure <https://pypi.python.org/pypi/azure>`__ bundle package.
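For context, a typical namespace-package ``azure/__init__.py`` looks like the
following sketch (illustrative only; the real file is provided by azure-nspkg):

.. code-block:: python

    # Illustrative namespace __init__.py; the actual file ships with azure-nspkg.
    try:
        __import__('pkg_resources').declare_namespace(__name__)
    except ImportError:
        __path__ = __import__('pkgutil').extend_path(__path__, __name__)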



@@ -0,0 +1,2 @@
[bdist_wheel]
universal=1


@@ -0,0 +1,64 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from setuptools import setup
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
    import azure
    try:
        ver = azure.__version__
        raise Exception(
            'This package is incompatible with azure=={}. '.format(ver) +
            'Uninstall it with "pip uninstall azure".'
        )
    except AttributeError:
        pass
except ImportError:
    pass

setup(
    name='azure-cosmosdb-nspkg',
    version='2.0.0',
    description='Microsoft Azure CosmosDB Namespace Package [Internal]',
    long_description=open('README.rst', 'r').read(),
    license='Apache License 2.0',
    author='Microsoft Corporation',
    author_email='ascl@microsoft.com',
    url='https://github.com/Azure/azure-cosmosdb-python',
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'License :: OSI Approved :: Apache Software License',
    ],
    zip_safe=False,
    packages=[
        'azure',
    ],
    install_requires=[
        'azure-nspkg>=2.0.0',
    ],
)


@@ -0,0 +1,381 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <SchemaVersion>2.0</SchemaVersion>
    <ProjectGuid>6c4a3e11-9be6-437a-837c-887b2fa19525</ProjectGuid>
    <ProjectHome>.</ProjectHome>
    <StartupFile>
    </StartupFile>
    <SearchPath>
    </SearchPath>
    <WorkingDirectory>.</WorkingDirectory>
    <OutputPath>.</OutputPath>
    <Name>azure-cosmosdb-python</Name>
    <RootNamespace>azure-cosmosdb-python</RootNamespace>
    <InterpreterId>{53cb0330-049f-4de9-87d8-7a2b536751f9}</InterpreterId>
    <InterpreterVersion>3.5</InterpreterVersion>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
    <DebugSymbols>true</DebugSymbols>
    <EnableUnmanagedDebugging>false</EnableUnmanagedDebugging>
  </PropertyGroup>
  <PropertyGroup Condition=" '$(Configuration)' == 'Release' ">
    <DebugSymbols>true</DebugSymbols>
    <EnableUnmanagedDebugging>false</EnableUnmanagedDebugging>
  </PropertyGroup>
  <PropertyGroup>
    <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion>
    <PtvsTargetsFile>$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Python Tools\Microsoft.PythonTools.targets</PtvsTargetsFile>
  </PropertyGroup>
  <ItemGroup>
    <Content Include=".gitignore" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_create_table.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_create_table_fail_on_exist.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_create_table_with_already_existing_table.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_create_table_with_already_existing_table_fail_on_exist.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_delete_table_with_existing_table.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_delete_table_with_existing_table_fail_not_exist.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_delete_table_with_non_existing_table.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_delete_table_with_non_existing_table_fail_not_exist.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_get_table_acl.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_list_tables.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_list_tables_with_marker.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_list_tables_with_num_results.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_locale.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_set_table_acl.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_set_table_acl_too_many_ids.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_set_table_acl_with_empty_signed_identifier.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_set_table_acl_with_empty_signed_identifiers.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_set_table_acl_with_signed_identifiers.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_table_exists.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_table_not_exists.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table.test_unicode_create_table_unicode_name.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_all_operations_together.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_all_operations_together_context_manager.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_delete.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_different_partition_operations_fail.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_insert.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_inserts.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_insert_merge.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_insert_replace.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_merge.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_reuse.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_same_row_operations_fail.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_too_many_ops.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_update.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_update_if_doesnt_match.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_batch.test_batch_update_if_match.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_batch_entity_inserts_context_manager.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_encrypted_dict.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_encrypted_entity.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_encrypted_entity_encryption_resolver.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_encrypted_entity_key_resolver.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_encrypted_entity_properties_and_resolver.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_encrypt_multiple_properties.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_entity_invalid_value_kek_unwrap.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_entity_missing_attribute_kek_unwrap.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_entity_nonmatching_kid.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_entity_no_decryption.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_payload_formats.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_strict_mode_no_key.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_get_strict_mode_unencrypted_entity.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_invalid_encryption_operations_fail.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_invalid_encryption_operations_fail_batch.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_property_resolver_decrypt_conflict.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_query_entities_all_properties.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_query_entities_mixed_mode.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_query_entities_projection.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_replace_entity.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_strict_mode_policy_no_encrypted_properties.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_table_ops_ignore_encryption.yaml" />
    <Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_validate_encryption.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_encryption.test_validate_swapping_properties_fails.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_binary_property_value.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_delete_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_delete_entity_not_existing.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_delete_entity_with_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_delete_entity_with_if_matches.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_empty_and_spaces_property_value.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_full_metadata.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_if_match.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_not_existing.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_no_metadata.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_with_property_resolver.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_with_property_resolver_invalid.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_with_property_resolver_not_supported.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_with_select.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_get_entity_with_special_doubles.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_entity_class_instance.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_entity_conflict.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_entity_dictionary.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_entity_empty_string_pk.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_entity_empty_string_rk.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_or_merge_entity_with_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_or_merge_entity_with_non_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_or_replace_entity_with_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_insert_or_replace_entity_with_non_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_merge_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_merge_entity_not_existing.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_merge_entity_with_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_merge_entity_with_if_matches.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_none_property_value.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_full_metadata.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_large.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_no_metadata.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_with_filter.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_with_property_resolver.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_with_select.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_with_top.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_entities_with_top_and_next.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_query_zero_entities.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_timezone.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_unicode_property_name.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_unicode_property_value.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_update_entity.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_update_entity_with_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\test_table_entity.test_update_entity_with_if_matches.yaml" />
<Content Include="azure-cosmosdb-table\azure\cosmosdb\tests\run-storage.bat" />
<Content Include="azure-cosmosdb-table\doc\BuildDocs.bat" />
<Content Include="azure-cosmosdb-table\doc\index.rst" />
<Content Include="azure-cosmosdb-table\doc\InstallDocDependencies.bat" />
<Content Include="azure-cosmosdb-table\doc\make.bat" />
<Content Include="azure-cosmosdb-table\doc\Makefile" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.common.cloudstorageaccount.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.common.models.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.common.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.common.sharedaccesssignature.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.common.storageclient.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.table.models.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.table.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.table.tablebatch.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\azure.storage.table.tableservice.rst" />
<Content Include="azure-cosmosdb-table\doc\ref\modules.rst" />
<Content Include="azure-cosmosdb-table\doc\requirements.txt" />
<Content Include="azure-cosmosdb-table\doc\upgrade.rst" />
<Content Include="azure-cosmosdb-nspkg\MANIFEST.in" />
<Content Include="azure-cosmosdb-nspkg\README.rst" />
<Content Include="azure-cosmosdb-nspkg\setup.cfg" />
<Content Include="azure-cosmosdb-table\BreakingChanges.md" />
<Content Include="azure-cosmosdb-table\ChangeLog.md" />
<Content Include="azure-cosmosdb-table\MANIFEST.in" />
<Content Include="azure-cosmosdb-table\README.rst" />
<Content Include="azure-cosmosdb-table\setup.cfg" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_create_table.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_create_table_fail_on_exist.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_create_table_with_already_existing_table.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_create_table_with_already_existing_table_fail_on_exist.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_delete_table_with_existing_table.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_delete_table_with_existing_table_fail_not_exist.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_delete_table_with_non_existing_table.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_delete_table_with_non_existing_table_fail_not_exist.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_get_table_acl.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_list_tables.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_list_tables_with_marker.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_list_tables_with_num_results.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_locale.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_set_table_acl.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_set_table_acl_too_many_ids.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_set_table_acl_with_empty_signed_identifier.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_set_table_acl_with_empty_signed_identifiers.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_set_table_acl_with_signed_identifiers.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_table_exists.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_table_not_exists.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table.test_unicode_create_table_unicode_name.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_all_operations_together.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_all_operations_together_context_manager.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_delete.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_different_partition_operations_fail.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_insert.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_inserts.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_insert_merge.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_insert_replace.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_merge.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_reuse.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_same_row_operations_fail.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_too_many_ops.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_update.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_update_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_batch.test_batch_update_if_match.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_batch_entity_inserts_context_manager.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_encrypted_dict.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_encrypted_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_encrypted_entity_encryption_resolver.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_encrypted_entity_key_resolver.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_encrypted_entity_properties_and_resolver.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_encrypt_multiple_properties.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_entity_invalid_value_kek_unwrap.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_entity_missing_attribute_kek_unwrap.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_entity_nonmatching_kid.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_entity_no_decryption.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_payload_formats.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_strict_mode_no_key.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_get_strict_mode_unencrypted_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_invalid_encryption_operations_fail.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_invalid_encryption_operations_fail_batch.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_property_resolver_decrypt_conflict.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_query_entities_all_properties.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_query_entities_mixed_mode.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_query_entities_projection.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_replace_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_strict_mode_policy_no_encrypted_properties.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_table_ops_ignore_encryption.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_validate_encryption.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_encryption.test_validate_swapping_properties_fails.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_binary_property_value.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_delete_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_delete_entity_not_existing.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_delete_entity_with_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_delete_entity_with_if_matches.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_empty_and_spaces_property_value.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_full_metadata.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_if_match.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_not_existing.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_no_metadata.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_with_property_resolver.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_with_property_resolver_invalid.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_with_property_resolver_not_supported.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_with_select.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_get_entity_with_special_doubles.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_entity_class_instance.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_entity_conflict.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_entity_dictionary.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_entity_empty_string_pk.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_entity_empty_string_rk.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_or_merge_entity_with_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_or_merge_entity_with_non_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_or_replace_entity_with_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_insert_or_replace_entity_with_non_existing_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_merge_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_merge_entity_not_existing.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_merge_entity_with_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_merge_entity_with_if_matches.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_none_property_value.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_full_metadata.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_large.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_no_metadata.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_with_filter.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_with_property_resolver.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_with_select.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_with_top.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_entities_with_top_and_next.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_query_zero_entities.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_timezone.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_unicode_property_name.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_unicode_property_value.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_update_entity.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_update_entity_with_if_doesnt_match.yaml" />
<Content Include="azure-cosmosdb-table\tests\recordings\test_table_entity.test_update_entity_with_if_matches.yaml" />
<Content Include="azure-cosmosdb-table\tests\run-storage.bat" />
<Content Include="BreakingChanges.md" />
<Content Include="ChangeLog.md" />
<Content Include="CONTRIBUTING.md" />
<Content Include="LICENSE.txt" />
<Content Include="README.rst" />
<Content Include="requirements.txt" />
</ItemGroup>
<ItemGroup>
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\common\" />
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\common\_http\" />
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\tests\" />
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\tests\recordings\" />
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\tests\table\" />
<Folder Include="azure-cosmosdb-table\doc\" />
<Folder Include="azure-cosmosdb-table\doc\ref\" />
<Folder Include="azure-cosmosdb-nspkg\" />
<Folder Include="azure-cosmosdb-nspkg\azure\" />
<Folder Include="azure-cosmosdb-nspkg\azure\cosmosdb\" />
<Folder Include="azure-cosmosdb-table\" />
<Folder Include="azure-cosmosdb-table\azure\" />
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\" />
<Folder Include="azure-cosmosdb-table\azure\cosmosdb\table\" />
<Folder Include="azure-cosmosdb-table\samples\" />
<Folder Include="azure-cosmosdb-table\samples\advanced\" />
<Folder Include="azure-cosmosdb-table\samples\table\" />
<Folder Include="azure-cosmosdb-table\tests\" />
<Folder Include="azure-cosmosdb-table\tests\recordings\" />
<Folder Include="azure-cosmosdb-table\tests\table\" />
</ItemGroup>
<ItemGroup>
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\cloudstorageaccount.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_http\httpclient.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_http\__init__.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\models.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\retry.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\sharedaccesssignature.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\storageclient.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_auth.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_common_conversion.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_connection.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_constants.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_deserialization.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_encryption.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_error.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\_serialization.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\common\__init__.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\encryption_test_helper.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\settings_fake.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\table\test_table.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\table\test_table_batch.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\table\test_table_encryption.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\table\test_table_entity.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\table\__init__.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\testcase.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\tests\__init__.py" />
<Compile Include="azure-cosmosdb-table\doc\conf.py" />
<Compile Include="azure-cosmosdb-table\doc\__init__.py" />
<Compile Include="azure-cosmosdb-nspkg\azure\cosmosdb\__init__.py" />
<Compile Include="azure-cosmosdb-nspkg\setup.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\models.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\tablebatch.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\tableservice.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\_deserialization.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\_encryption.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\_error.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\_request.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\_serialization.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\table\__init__.py" />
<Compile Include="azure-cosmosdb-table\azure\cosmosdb\__init__.py" />
<Compile Include="azure-cosmosdb-table\azure\__init__.py" />
<Compile Include="azure-cosmosdb-table\azure_bdist_wheel.py" />
<Compile Include="azure-cosmosdb-table\samples\advanced\authentication.py" />
<Compile Include="azure-cosmosdb-table\samples\advanced\client.py" />
<Compile Include="azure-cosmosdb-table\samples\advanced\__init__.py" />
<Compile Include="azure-cosmosdb-table\samples\config.py" />
<Compile Include="azure-cosmosdb-table\samples\table\encryption_usage.py" />
<Compile Include="azure-cosmosdb-table\samples\table\sas_usage.py" />
<Compile Include="azure-cosmosdb-table\samples\table\table_usage.py" />
<Compile Include="azure-cosmosdb-table\samples\table\__init__.py" />
<Compile Include="azure-cosmosdb-table\samples\test_sample.py" />
<Compile Include="azure-cosmosdb-table\samples\__init__.py" />
<Compile Include="azure-cosmosdb-table\setup.py" />
<Compile Include="azure-cosmosdb-table\tests\encryption_test_helper.py" />
<Compile Include="azure-cosmosdb-table\tests\settings_fake.py" />
<Compile Include="azure-cosmosdb-table\tests\table\test_table.py" />
<Compile Include="azure-cosmosdb-table\tests\table\test_table_batch.py" />
<Compile Include="azure-cosmosdb-table\tests\table\test_table_encryption.py" />
<Compile Include="azure-cosmosdb-table\tests\table\test_table_entity.py" />
<Compile Include="azure-cosmosdb-table\tests\table\__init__.py" />
<Compile Include="azure-cosmosdb-table\tests\testcase.py" />
<Compile Include="azure-cosmosdb-table\tests\__init__.py" />
<Compile Include="build_packages.py" />
</ItemGroup>
<ItemGroup>
<InterpreterReference Include="{53cb0330-049f-4de9-87d8-7a2b536751f9}\3.5" />
<InterpreterReference Include="{9a7a9026-48c1-4688-9d5d-e5699d47d074}\2.7" />
</ItemGroup>
<Import Condition="Exists($(PtvsTargetsFile))" Project="$(PtvsTargetsFile)" />
<Import Condition="!Exists($(PtvsTargetsFile))" Project="$(MSBuildToolsPath)\Microsoft.Common.targets" />
<!-- Uncomment the CoreCompile target to enable the Build command in
Visual Studio and specify your pre- and post-build commands in
the BeforeBuild and AfterBuild targets below. -->
<!--<Target Name="CoreCompile" />-->
<Target Name="BeforeBuild">
</Target>
<Target Name="AfterBuild">
</Target>
</Project>

View file

@@ -0,0 +1,3 @@
# Breaking Changes azure-cosmosdb-table
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.

View file

@@ -0,0 +1,3 @@
# Change Log azure-cosmosdb-table
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.

View file

@@ -0,0 +1,2 @@
include *.rst
include azure_bdist_wheel.py

View file

@@ -0,0 +1,131 @@
Microsoft Azure CosmosDB Table SDK for Python
=============================================

This project provides a client library in Python that makes it easy to
consume Microsoft Azure CosmosDB Table services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.
Compatibility
=============
**IMPORTANT**: If you have an earlier version of the azure package
(version < 1.0), you should uninstall it before installing this package.
You can check the version using pip:

.. code:: shell

    pip freeze

If you see azure==0.11.0 (or any version below 1.0), uninstall it first, then install it again:

.. code:: shell

    pip uninstall azure
    pip install azure

Features
========
- Table
- Create/Read/Update/Delete Tables
- Create/Read/Update/Delete Entities
- Batch operations
- Advanced Table Operations
Getting Started
===============
Download
--------
Option 1: Via PyPI
~~~~~~~~~~~~~~~~~~

To install via the Python Package Index (PyPI), type:

::

    pip install azure-cosmosdb-table

Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~

To get the source code of the SDK via git, type:

::

    git clone git://github.com/Azure/azure-cosmosdb-python.git
    cd ./azure-cosmosdb-python/azure-cosmosdb-table
    python setup.py install

Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~

Download a zip of the code via GitHub or PyPI. Then, type:

::

    cd ./azure-cosmosdb-table
    python setup.py install

Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage
-----
To use this SDK to call the Microsoft Azure CosmosDB Table service, you need to
first `create an account`_.
Code Sample
-----------
See the samples directory for table usage samples, and the minimal sketch below.
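
The sketch below assumes a placeholder account name and key and an
illustrative entity shape; see the samples for complete, tested code:

.. code:: python

    from azure.cosmosdb.table.tableservice import TableService

    # Placeholder credentials -- substitute your own account name and key.
    table_service = TableService(account_name='myaccount', account_key='<key>')

    table_service.create_table('tasktable')
    table_service.insert_entity('tasktable', {'PartitionKey': 'tasks',
                                              'RowKey': '001',
                                              'description': 'Take out the trash'})
    task = table_service.get_entity('tasktable', 'tasks', '001')
    print(task.description)
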
Need Help?
==========
Be sure to check out the Microsoft Azure `Developer Forums on MSDN`_ or
the `Developer Forums on Stack Overflow`_ if you have trouble with the
provided code.
Contribute Code or Provide Feedback
===================================
If you would like to become an active contributor to this project, please
follow the instructions provided in `Azure Projects Contribution
Guidelines`_. You can find more details for contributing in the `CONTRIBUTING.md doc`_.
If you encounter any bugs with the library, please file an issue in the
`Issues`_ section of the project.
Learn More
==========
- `Python Developer Center`_
- `Azure Storage Service`_
- `Azure Storage Team Blog`_
- `API Reference`_
.. _Python Developer Center: http://azure.microsoft.com/en-us/develop/python/
.. _API Reference: https://azure-storage.readthedocs.io/en/latest/
.. _here: https://github.com/Azure/azure-storage-python/archive/master.zip
.. _create an account: https://account.windowsazure.com/signup
.. _Developer Forums on MSDN: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=windowsazuredata
.. _Developer Forums on Stack Overflow: http://stackoverflow.com/questions/tagged/azure+windows-azure-storage
.. _Azure Projects Contribution Guidelines: http://azure.github.io/guidelines.html
.. _Issues: https://github.com/Azure/azure-storage-python/issues
.. _Azure Storage Service: http://azure.microsoft.com/en-us/documentation/services/storage/
.. _Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
.. _CONTRIBUTING.md doc: CONTRIBUTING.md

View file

@@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View file

@@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View file

@@ -0,0 +1,44 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.common._constants import (
__author__,
__version__,
X_MS_VERSION,
)
from azure.cosmosdb.common.cloudstorageaccount import CloudStorageAccount
from azure.cosmosdb.common.models import (
RetentionPolicy,
Logging,
Metrics,
CorsRule,
ServiceProperties,
AccessPolicy,
ResourceTypes,
Services,
AccountPermissions,
Protocol,
ServiceStats,
GeoReplication,
LocationMode,
RetryContext,
)
from azure.cosmosdb.common.retry import (
ExponentialRetry,
LinearRetry,
no_retry,
)
from azure.cosmosdb.common.sharedaccesssignature import (
SharedAccessSignature,
)
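# Editorial note (not in the original file): these re-exports form the
# package's public surface, so callers can write, for example:
#
#     from azure.cosmosdb.common import CloudStorageAccount, ExponentialRetry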

View file

@@ -0,0 +1,130 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.common._common_conversion import (
_sign_string,
)
import logging
logger = logging.getLogger(__name__)
class _StorageSharedKeyAuthentication(object):
def __init__(self, account_name, account_key):
self.account_name = account_name
self.account_key = account_key
def _get_headers(self, request, headers_to_sign):
headers = dict((name.lower(), value) for name, value in request.headers.items() if value)
if 'content-length' in headers and headers['content-length'] == '0':
del headers['content-length']
return '\n'.join(headers.get(x, '') for x in headers_to_sign) + '\n'
def _get_verb(self, request):
return request.method + '\n'
def _get_canonicalized_resource(self, request):
uri_path = request.path.split('?')[0]
return '/' + self.account_name + uri_path
def _get_canonicalized_headers(self, request):
string_to_sign = ''
x_ms_headers = []
for name, value in request.headers.items():
if name.startswith('x-ms-'):
x_ms_headers.append((name.lower(), value))
x_ms_headers.sort()
for name, value in x_ms_headers:
if value is not None:
string_to_sign += ''.join([name, ':', value, '\n'])
return string_to_sign
def _add_authorization_header(self, request, string_to_sign):
signature = _sign_string(self.account_key, string_to_sign)
auth_string = 'SharedKey ' + self.account_name + ':' + signature
request.headers['Authorization'] = auth_string
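# Note (editorial comment): the next definition shadows the helper class of
# the same name above -- it subclasses that base and layers the full
# blob/queue-style sign_request on top. Legal Python, but easy to misread.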
class _StorageSharedKeyAuthentication(_StorageSharedKeyAuthentication):
def sign_request(self, request):
string_to_sign = \
self._get_verb(request) + \
self._get_headers(
request,
[
'content-encoding', 'content-language', 'content-length',
'content-md5', 'content-type', 'date', 'if-modified-since',
'if-match', 'if-none-match', 'if-unmodified-since', 'byte_range'
]
) + \
self._get_canonicalized_headers(request) + \
self._get_canonicalized_resource(request) + \
self._get_canonicalized_resource_query(request)
self._add_authorization_header(request, string_to_sign)
logger.debug("String_to_sign=%s", string_to_sign)
def _get_canonicalized_resource_query(self, request):
sorted_queries = [(name, value) for name, value in request.query.items()]
sorted_queries.sort()
string_to_sign = ''
for name, value in sorted_queries:
if value:
string_to_sign += '\n' + name.lower() + ':' + value
return string_to_sign
class _StorageTableSharedKeyAuthentication(_StorageSharedKeyAuthentication):
def sign_request(self, request):
string_to_sign = \
self._get_verb(request) + \
self._get_headers(
request,
['content-md5', 'content-type', 'x-ms-date'],
) + \
self._get_canonicalized_resource(request) + \
self._get_canonicalized_resource_query(request)
self._add_authorization_header(request, string_to_sign)
logger.debug("String_to_sign=%s", string_to_sign)
def _get_canonicalized_resource_query(self, request):
for name, value in request.query.items():
if name == 'comp':
return '?comp=' + value
return ''
class _StorageNoAuthentication(object):
def sign_request(self, request):
pass
class _StorageSASAuthentication(object):
def __init__(self, sas_token):
self.sas_token = sas_token
def sign_request(self, request):
# if 'sig=' is present, then the request has already been signed
# as is the case when performing retries
if 'sig=' in request.path:
return
if '?' in request.path:
request.path += '&'
else:
request.path += '?'
request.path += self.sas_token
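# Editorial sketch (not part of the original module): signing a table request
# end to end. _FakeRequest carries only the attributes sign_request reads
# (method, path, headers, query); the key is the well-known development
# storage key, so _sign_string can base64-decode it.
if __name__ == '__main__':
    class _FakeRequest(object):
        method = 'GET'
        path = '/mytable()'
        headers = {'x-ms-date': 'Mon, 02 Oct 2017 19:12:01 GMT',
                   'content-type': 'application/json'}
        query = {}

    _request = _FakeRequest()
    _auth = _StorageTableSharedKeyAuthentication(
        'devstoreaccount1',
        'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==')
    _auth.sign_request(_request)
    print(_request.headers['Authorization'])  # 'SharedKey devstoreaccount1:<signature>'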

View file

@@ -0,0 +1,135 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import base64
import hashlib
import hmac
import sys
from io import (SEEK_SET)
from dateutil.tz import tzutc
from azure.cosmosdb.common._error import (
_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM,
_ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM,
)
from azure.cosmosdb.common.models import (
_unicode_type,
)
if sys.version_info < (3,):
def _str(value):
if isinstance(value, unicode):
return value.encode('utf-8')
return str(value)
else:
_str = str
def _to_str(value):
return _str(value) if value is not None else None
def _int_to_str(value):
return str(int(value)) if value is not None else None
def _bool_to_str(value):
if value is None:
return None
if isinstance(value, bool):
if value:
return 'true'
else:
return 'false'
return str(value)
def _to_utc_datetime(value):
return value.strftime('%Y-%m-%dT%H:%M:%SZ')
def _datetime_to_utc_string(value):
# Azure expects the date value passed in to be UTC.
# Azure will always return values as UTC.
# If a date is passed in without timezone info, it is assumed to be UTC.
if value is None:
return None
if value.tzinfo:
value = value.astimezone(tzutc())
return value.strftime('%a, %d %b %Y %H:%M:%S GMT')
def _encode_base64(data):
if isinstance(data, _unicode_type):
data = data.encode('utf-8')
encoded = base64.b64encode(data)
return encoded.decode('utf-8')
def _decode_base64_to_bytes(data):
if isinstance(data, _unicode_type):
data = data.encode('utf-8')
return base64.b64decode(data)
def _decode_base64_to_text(data):
decoded_bytes = _decode_base64_to_bytes(data)
return decoded_bytes.decode('utf-8')
def _sign_string(key, string_to_sign, key_is_base64=True):
if key_is_base64:
key = _decode_base64_to_bytes(key)
else:
if isinstance(key, _unicode_type):
key = key.encode('utf-8')
if isinstance(string_to_sign, _unicode_type):
string_to_sign = string_to_sign.encode('utf-8')
signed_hmac_sha256 = hmac.HMAC(key, string_to_sign, hashlib.sha256)
digest = signed_hmac_sha256.digest()
encoded_digest = _encode_base64(digest)
return encoded_digest
def _get_content_md5(data):
md5 = hashlib.md5()
if isinstance(data, bytes):
md5.update(data)
elif hasattr(data, 'read'):
pos = 0
try:
pos = data.tell()
except:
pass
for chunk in iter(lambda: data.read(4096), b""):
md5.update(chunk)
try:
data.seek(pos, SEEK_SET)
except (AttributeError, IOError):
raise ValueError(_ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM.format('data'))
else:
raise ValueError(_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM.format('data'))
return base64.b64encode(md5.digest()).decode('utf-8')
def _lower(text):
return text.lower()
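# Editorial sketch (not part of the original module): quick checks of the
# conversion helpers above; the MD5/base64 values are standard test vectors.
if __name__ == '__main__':
    assert _encode_base64('abc') == 'YWJj'
    assert _decode_base64_to_text('YWJj') == 'abc'
    assert _bool_to_str(True) == 'true' and _bool_to_str(None) is None
    # base64(md5(b'hello')) -- the well-known digest 5d41402a...c592
    assert _get_content_md5(b'hello') == 'XUFAKrxLKna5cZ2REBfFkg=='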

View file

@@ -0,0 +1,167 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
if sys.version_info >= (3,):
from urllib.parse import urlparse
else:
from urlparse import urlparse
from azure.cosmosdb.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
DEV_ACCOUNT_NAME,
DEV_ACCOUNT_KEY,
DEV_BLOB_HOST,
DEV_QUEUE_HOST,
DEV_TABLE_HOST
)
from azure.cosmosdb.common._error import (
_ERROR_STORAGE_MISSING_INFO,
)
_EMULATOR_ENDPOINTS = {
'blob': DEV_BLOB_HOST,
'queue': DEV_QUEUE_HOST,
'table': DEV_TABLE_HOST,
'file': '',
}
_CONNECTION_ENDPOINTS = {
'blob': 'BlobEndpoint',
'queue': 'QueueEndpoint',
'table': 'TableEndpoint',
'file': 'FileEndpoint',
}
_CONNECTION_ENDPOINTS_SECONDARY = {
'blob': 'BlobSecondaryEndpoint',
'queue': 'QueueSecondaryEndpoint',
'table': 'TableSecondaryEndpoint',
'file': 'FileSecondaryEndpoint',
}
class _ServiceParameters(object):
def __init__(self, service, account_name=None, account_key=None, sas_token=None,
is_emulated=False, protocol=DEFAULT_PROTOCOL, endpoint_suffix=SERVICE_HOST_BASE,
custom_domain=None, custom_domain_secondary=None):
self.account_name = account_name
self.account_key = account_key
self.sas_token = sas_token
self.protocol = protocol or DEFAULT_PROTOCOL
self.is_emulated = is_emulated
if is_emulated:
self.account_name = DEV_ACCOUNT_NAME
self.protocol = 'http'
# Only set the account key if a sas_token is not present to allow sas to be used with the emulator
self.account_key = DEV_ACCOUNT_KEY if not self.sas_token else None
self.primary_endpoint = '{}/{}'.format(_EMULATOR_ENDPOINTS[service], self.account_name)
self.secondary_endpoint = '{}/{}-secondary'.format(_EMULATOR_ENDPOINTS[service], self.account_name)
else:
# Strip whitespace from the key
if self.account_key:
self.account_key = self.account_key.strip()
endpoint_suffix = endpoint_suffix or SERVICE_HOST_BASE
# Setup the primary endpoint
if custom_domain:
parsed_url = urlparse(custom_domain)
# Trim any trailing slashes from the path
path = parsed_url.path.rstrip('/')
self.primary_endpoint = parsed_url.netloc + path
self.protocol = self.protocol if parsed_url.scheme == '' else parsed_url.scheme
else:
if not self.account_name:
raise ValueError(_ERROR_STORAGE_MISSING_INFO)
self.primary_endpoint = '{}.{}.{}'.format(self.account_name, service, endpoint_suffix)
# Setup the secondary endpoint
if custom_domain_secondary:
if not custom_domain:
raise ValueError(_ERROR_STORAGE_MISSING_INFO)
parsed_url = urlparse(custom_domain_secondary)
# Trim any trailing slashes from the path
path = parsed_url.path.rstrip('/')
self.secondary_endpoint = parsed_url.netloc + path
else:
if self.account_name:
self.secondary_endpoint = '{}-secondary.{}.{}'.format(self.account_name, service, endpoint_suffix)
else:
self.secondary_endpoint = None
@staticmethod
def get_service_parameters(service, account_name=None, account_key=None, sas_token=None, is_emulated=None,
protocol=None, endpoint_suffix=None, custom_domain=None, request_session=None,
connection_string=None, socket_timeout=None):
if connection_string:
params = _ServiceParameters._from_connection_string(connection_string, service)
elif is_emulated:
params = _ServiceParameters(service, is_emulated=True)
elif account_name:
params = _ServiceParameters(service,
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
custom_domain=custom_domain)
else:
raise ValueError(_ERROR_STORAGE_MISSING_INFO)
params.request_session = request_session
params.socket_timeout = socket_timeout
return params
@staticmethod
def _from_connection_string(connection_string, service):
# Split into key=value pairs removing empties, then split the pairs into a dict
config = dict(s.split('=', 1) for s in connection_string.split(';') if s)
# Authentication
account_name = config.get('AccountName')
account_key = config.get('AccountKey')
sas_token = config.get('SharedAccessSignature')
# Emulator
is_emulated = config.get('UseDevelopmentStorage')
# Basic URL Configuration
protocol = config.get('DefaultEndpointsProtocol')
endpoint_suffix = config.get('EndpointSuffix')
# Custom URLs
endpoint = config.get(_CONNECTION_ENDPOINTS[service])
endpoint_secondary = config.get(_CONNECTION_ENDPOINTS_SECONDARY[service])
return _ServiceParameters(service,
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
custom_domain=endpoint,
custom_domain_secondary=endpoint_secondary)
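# Editorial sketch (not part of the original module): resolving endpoints.
# With is_emulated=True the parameters point at the local storage emulator,
# using the development account constants imported above.
if __name__ == '__main__':
    _params = _ServiceParameters.get_service_parameters('table', is_emulated=True)
    print(_params.primary_endpoint)    # 127.0.0.1:10002/devstoreaccount1
    print(_params.secondary_endpoint)  # 127.0.0.1:10002/devstoreaccount1-secondary
    # A connection string works the same way via the parser above
    # (the account name and key here are placeholders):
    _conn = 'DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=xyz=='
    _params = _ServiceParameters.get_service_parameters('table', connection_string=_conn)
    print(_params.primary_endpoint)    # myaccount.table.core.windows.net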

View file

@@ -0,0 +1,45 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import platform
__author__ = 'Microsoft Corp. <ptvshelp@microsoft.com>'
__version__ = '0.36.0'
# x-ms-version for storage service.
X_MS_VERSION = '2017-04-17'
# UserAgent string sample: 'Azure-Storage/0.32.0 (Python CPython 3.4.2; Windows 8)'
USER_AGENT_STRING = 'Azure-Storage/{} (Python {} {}; {} {})'.format(__version__, platform.python_implementation(),
platform.python_version(), platform.system(),
platform.release())
# Live ServiceClient URLs
SERVICE_HOST_BASE = 'core.windows.net'
DEFAULT_PROTOCOL = 'https'
# Development ServiceClient URLs
DEV_BLOB_HOST = '127.0.0.1:10000'
DEV_QUEUE_HOST = '127.0.0.1:10001'
DEV_TABLE_HOST = '127.0.0.1:10002'
# Default credentials for Development Storage Service
DEV_ACCOUNT_NAME = 'devstoreaccount1'
DEV_ACCOUNT_KEY = 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=='
# Socket timeout in seconds
DEFAULT_SOCKET_TIMEOUT = 20
# Encryption constants
_ENCRYPTION_PROTOCOL_V1 = '1.0'

View file

@@ -0,0 +1,353 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from dateutil import parser
from azure.cosmosdb.common._common_conversion import _to_str
try:
from xml.etree import cElementTree as ETree
except ImportError:
from xml.etree import ElementTree as ETree
from azure.cosmosdb.common.models import (
ServiceProperties,
Logging,
Metrics,
CorsRule,
AccessPolicy,
_dict,
GeoReplication,
ServiceStats,
)
def _int_to_str(value):
return value if value is None else int(value)
def _bool(value):
return value.lower() == 'true'
def _to_upper_str(value):
return _to_str(value).upper() if value is not None else None
def _get_download_size(start_range, end_range, resource_size):
if start_range is not None:
end_range = end_range if end_range else (resource_size if resource_size else None)
if end_range is not None:
return end_range - start_range
else:
return None
else:
return resource_size
GET_PROPERTIES_ATTRIBUTE_MAP = {
'last-modified': (None, 'last_modified', parser.parse),
'etag': (None, 'etag', _to_str),
'x-ms-blob-type': (None, 'blob_type', _to_str),
'content-length': (None, 'content_length', _int_to_str),
'content-range': (None, 'content_range', _to_str),
'x-ms-blob-sequence-number': (None, 'page_blob_sequence_number', _int_to_str),
'x-ms-blob-committed-block-count': (None, 'append_blob_committed_block_count', _int_to_str),
'x-ms-access-tier': (None, 'blob_tier', _to_str),
'x-ms-access-tier-inferred': (None, 'blob_tier_inferred', _bool),
'x-ms-archive-status': (None, 'rehydration_status', _to_str),
'x-ms-share-quota': (None, 'quota', _int_to_str),
'x-ms-server-encrypted': (None, 'server_encrypted', _bool),
'content-type': ('content_settings', 'content_type', _to_str),
'cache-control': ('content_settings', 'cache_control', _to_str),
'content-encoding': ('content_settings', 'content_encoding', _to_str),
'content-disposition': ('content_settings', 'content_disposition', _to_str),
'content-language': ('content_settings', 'content_language', _to_str),
'content-md5': ('content_settings', 'content_md5', _to_str),
'x-ms-lease-status': ('lease', 'status', _to_str),
'x-ms-lease-state': ('lease', 'state', _to_str),
'x-ms-lease-duration': ('lease', 'duration', _to_str),
'x-ms-copy-id': ('copy', 'id', _to_str),
'x-ms-copy-source': ('copy', 'source', _to_str),
'x-ms-copy-status': ('copy', 'status', _to_str),
'x-ms-copy-progress': ('copy', 'progress', _to_str),
'x-ms-copy-completion-time': ('copy', 'completion_time', parser.parse),
'x-ms-copy-destination-snapshot': ('copy', 'destination_snapshot_time', _to_str),
'x-ms-copy-status-description': ('copy', 'status_description', _to_str),
}
def _parse_metadata(response):
'''
Extracts out resource metadata information.
'''
if response is None or response.headers is None:
return None
metadata = _dict()
for key, value in response.headers.items():
if key.startswith('x-ms-meta-'):
metadata[key[10:]] = _to_str(value)
return metadata
def _parse_properties(response, result_class):
'''
Extracts out resource properties and metadata information.
Ignores the standard http headers.
'''
if response is None or response.headers is None:
return None
props = result_class()
for key, value in response.headers.items():
info = GET_PROPERTIES_ATTRIBUTE_MAP.get(key)
if info:
if info[0] is None:
setattr(props, info[1], info[2](value))
else:
attr = getattr(props, info[0])
setattr(attr, info[1], info[2](value))
if hasattr(props, 'blob_type') and props.blob_type == 'PageBlob' and hasattr(props, 'blob_tier') and props.blob_tier is not None:
props.blob_tier = _to_upper_str(props.blob_tier)
return props
def _parse_length_from_content_range(content_range):
'''
Parses the blob length from the content range header: bytes 1-3/65537
'''
if content_range is None:
return None
# First, split in space and take the second half: '1-3/65537'
# Next, split on slash and take the second half: '65537'
# Finally, convert to an int: 65537
return int(content_range.split(' ', 1)[1].split('/', 1)[1])
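# Editorial example: _parse_length_from_content_range('bytes 1-3/65537')
# splits on the space, then on the slash, and returns 65537.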
def _convert_xml_to_signed_identifiers(response):
'''
<?xml version="1.0" encoding="utf-8"?>
<SignedIdentifiers>
<SignedIdentifier>
<Id>unique-value</Id>
<AccessPolicy>
<Start>start-time</Start>
<Expiry>expiry-time</Expiry>
<Permission>abbreviated-permission-list</Permission>
</AccessPolicy>
</SignedIdentifier>
</SignedIdentifiers>
'''
if response is None or response.body is None:
return None
list_element = ETree.fromstring(response.body)
signed_identifiers = _dict()
for signed_identifier_element in list_element.findall('SignedIdentifier'):
# Id element
id = signed_identifier_element.find('Id').text
# Access policy element
access_policy = AccessPolicy()
access_policy_element = signed_identifier_element.find('AccessPolicy')
if access_policy_element is not None:
start_element = access_policy_element.find('Start')
if start_element is not None:
access_policy.start = parser.parse(start_element.text)
expiry_element = access_policy_element.find('Expiry')
if expiry_element is not None:
access_policy.expiry = parser.parse(expiry_element.text)
access_policy.permission = access_policy_element.findtext('Permission')
signed_identifiers[id] = access_policy
return signed_identifiers
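# Editorial sketch (not part of the original module): parsing the ACL XML
# shown in the docstring above. _FakeResponse stands in for the HTTP response
# wrapper this module expects; only .body is read here.
if __name__ == '__main__':
    class _FakeResponse(object):
        body = b'''<?xml version="1.0" encoding="utf-8"?>
<SignedIdentifiers>
  <SignedIdentifier>
    <Id>policy-1</Id>
    <AccessPolicy>
      <Start>2017-10-02T00:00:00Z</Start>
      <Expiry>2017-10-03T00:00:00Z</Expiry>
      <Permission>r</Permission>
    </AccessPolicy>
  </SignedIdentifier>
</SignedIdentifiers>'''

    _acl = _convert_xml_to_signed_identifiers(_FakeResponse())
    print(_acl['policy-1'].permission)  # 'r'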
def _convert_xml_to_service_stats(response):
'''
<?xml version="1.0" encoding="utf-8"?>
<StorageServiceStats>
<GeoReplication>
<Status>live|bootstrap|unavailable</Status>
<LastSyncTime>sync-time|<empty></LastSyncTime>
</GeoReplication>
</StorageServiceStats>
'''
if response is None or response.body is None:
return None
service_stats_element = ETree.fromstring(response.body)
geo_replication_element = service_stats_element.find('GeoReplication')
geo_replication = GeoReplication()
geo_replication.status = geo_replication_element.find('Status').text
last_sync_time = geo_replication_element.find('LastSyncTime').text
geo_replication.last_sync_time = parser.parse(last_sync_time) if last_sync_time else None
service_stats = ServiceStats()
service_stats.geo_replication = geo_replication
return service_stats
def _convert_xml_to_service_properties(response):
'''
<?xml version="1.0" encoding="utf-8"?>
<StorageServiceProperties>
<Logging>
<Version>version-number</Version>
<Delete>true|false</Delete>
<Read>true|false</Read>
<Write>true|false</Write>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
</Logging>
<HourMetrics>
<Version>version-number</Version>
<Enabled>true|false</Enabled>
<IncludeAPIs>true|false</IncludeAPIs>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
</HourMetrics>
<MinuteMetrics>
<Version>version-number</Version>
<Enabled>true|false</Enabled>
<IncludeAPIs>true|false</IncludeAPIs>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
</MinuteMetrics>
<Cors>
<CorsRule>
<AllowedOrigins>comma-separated-list-of-allowed-origins</AllowedOrigins>
<AllowedMethods>comma-separated-list-of-HTTP-verb</AllowedMethods>
<MaxAgeInSeconds>max-caching-age-in-seconds</MaxAgeInSeconds>
<ExposedHeaders>comma-separated-list-of-response-headers</ExposedHeaders>
<AllowedHeaders>comma-separated-list-of-request-headers</AllowedHeaders>
</CorsRule>
</Cors>
</StorageServiceProperties>
'''
if response is None or response.body is None:
return None
service_properties_element = ETree.fromstring(response.body)
service_properties = ServiceProperties()
# Logging
logging = service_properties_element.find('Logging')
if logging is not None:
service_properties.logging = Logging()
service_properties.logging.version = logging.find('Version').text
service_properties.logging.delete = _bool(logging.find('Delete').text)
service_properties.logging.read = _bool(logging.find('Read').text)
service_properties.logging.write = _bool(logging.find('Write').text)
_convert_xml_to_retention_policy(logging.find('RetentionPolicy'),
service_properties.logging.retention_policy)
# HourMetrics
hour_metrics_element = service_properties_element.find('HourMetrics')
if hour_metrics_element is not None:
service_properties.hour_metrics = Metrics()
_convert_xml_to_metrics(hour_metrics_element, service_properties.hour_metrics)
# MinuteMetrics
minute_metrics_element = service_properties_element.find('MinuteMetrics')
if minute_metrics_element is not None:
service_properties.minute_metrics = Metrics()
_convert_xml_to_metrics(minute_metrics_element, service_properties.minute_metrics)
# CORS
cors = service_properties_element.find('Cors')
if cors is not None:
service_properties.cors = list()
for rule in cors.findall('CorsRule'):
allowed_origins = rule.find('AllowedOrigins').text.split(',')
allowed_methods = rule.find('AllowedMethods').text.split(',')
max_age_in_seconds = int(rule.find('MaxAgeInSeconds').text)
cors_rule = CorsRule(allowed_origins, allowed_methods, max_age_in_seconds)
exposed_headers = rule.find('ExposedHeaders').text
if exposed_headers is not None:
cors_rule.exposed_headers = exposed_headers.split(',')
allowed_headers = rule.find('AllowedHeaders').text
if allowed_headers is not None:
cors_rule.allowed_headers = allowed_headers.split(',')
service_properties.cors.append(cors_rule)
# Target version
target_version = service_properties_element.find('DefaultServiceVersion')
if target_version is not None:
service_properties.target_version = target_version.text
return service_properties
def _convert_xml_to_metrics(xml, metrics):
'''
<Version>version-number</Version>
<Enabled>true|false</Enabled>
<IncludeAPIs>true|false</IncludeAPIs>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
'''
# Version
metrics.version = xml.find('Version').text
# Enabled
metrics.enabled = _bool(xml.find('Enabled').text)
# IncludeAPIs
include_apis_element = xml.find('IncludeAPIs')
if include_apis_element is not None:
metrics.include_apis = _bool(include_apis_element.text)
# RetentionPolicy
_convert_xml_to_retention_policy(xml.find('RetentionPolicy'), metrics.retention_policy)
def _convert_xml_to_retention_policy(xml, retention_policy):
'''
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
'''
# Enabled
retention_policy.enabled = _bool(xml.find('Enabled').text)
# Days
days_element = xml.find('Days')
if days_element is not None:
retention_policy.days = int(days_element.text)


@ -0,0 +1,242 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from collections import OrderedDict
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.ciphers.modes import CBC
from azure.cosmosdb.common._common_conversion import (
_encode_base64,
_decode_base64_to_bytes,
)
from azure.cosmosdb.common._constants import (
_ENCRYPTION_PROTOCOL_V1,
__version__,
)
from azure.cosmosdb.common._error import (
_ERROR_UNSUPPORTED_ENCRYPTION_VERSION,
_validate_not_none,
_validate_encryption_protocol_version,
_validate_key_encryption_key_unwrap,
_validate_kek_id,
)
class _EncryptionAlgorithm(object):
'''
Specifies which client encryption algorithm is used.
'''
AES_CBC_256 = 'AES_CBC_256'
class _WrappedContentKey:
'''
Represents the envelope key details stored on the service.
'''
def __init__(self, algorithm, encrypted_key, key_id):
'''
:param str algorithm:
The algorithm used for wrapping.
:param bytes encrypted_key:
The encrypted content-encryption-key.
:param str key_id:
The key-encryption-key identifier string.
'''
_validate_not_none('algorithm', algorithm)
_validate_not_none('encrypted_key', encrypted_key)
_validate_not_none('key_id', key_id)
self.algorithm = algorithm
self.encrypted_key = encrypted_key
self.key_id = key_id
class _EncryptionAgent:
'''
Represents the encryption agent stored on the service.
It consists of the encryption protocol version and encryption algorithm used.
'''
def __init__(self, encryption_algorithm, protocol):
'''
:param _EncryptionAlgorithm encryption_algorithm:
The algorithm used for encrypting the message contents.
:param str protocol:
The protocol version used for encryption.
'''
_validate_not_none('encryption_algorithm', encryption_algorithm)
_validate_not_none('protocol', protocol)
self.encryption_algorithm = str(encryption_algorithm)
self.protocol = protocol
class _EncryptionData:
'''
Represents the encryption data that is stored on the service.
'''
def __init__(self, content_encryption_IV, encryption_agent, wrapped_content_key,
key_wrapping_metadata):
'''
:param bytes content_encryption_IV:
The content encryption initialization vector.
:param _EncryptionAgent encryption_agent:
The encryption agent.
:param _WrappedContentKey wrapped_content_key:
An object that stores the wrapping algorithm, the key identifier,
and the encrypted key bytes.
:param dict key_wrapping_metadata:
A dict containing metadata related to the key wrapping.
'''
_validate_not_none('content_encryption_IV', content_encryption_IV)
_validate_not_none('encryption_agent', encryption_agent)
_validate_not_none('wrapped_content_key', wrapped_content_key)
self.content_encryption_IV = content_encryption_IV
self.encryption_agent = encryption_agent
self.wrapped_content_key = wrapped_content_key
self.key_wrapping_metadata = key_wrapping_metadata
def _generate_encryption_data_dict(kek, cek, iv):
'''
Generates and returns the encryption metadata as a dict.
:param object kek: The key encryption key. See calling functions for more information.
:param bytes cek: The content encryption key.
:param bytes iv: The initialization vector.
:return: A dict containing all the encryption metadata.
:rtype: dict
'''
# Encrypt the cek.
wrapped_cek = kek.wrap_key(cek)
# Build the encryption_data dict.
# Use OrderedDict to comply with Java's ordering requirement.
wrapped_content_key = OrderedDict()
wrapped_content_key['KeyId'] = kek.get_kid()
wrapped_content_key['EncryptedKey'] = _encode_base64(wrapped_cek)
wrapped_content_key['Algorithm'] = kek.get_key_wrap_algorithm()
encryption_agent = OrderedDict()
encryption_agent['Protocol'] = _ENCRYPTION_PROTOCOL_V1
encryption_agent['EncryptionAlgorithm'] = _EncryptionAlgorithm.AES_CBC_256
encryption_data_dict = OrderedDict()
encryption_data_dict['WrappedContentKey'] = wrapped_content_key
encryption_data_dict['EncryptionAgent'] = encryption_agent
encryption_data_dict['ContentEncryptionIV'] = _encode_base64(iv)
encryption_data_dict['KeyWrappingMetadata'] = {'EncryptionLibrary': 'Python ' + __version__}
return encryption_data_dict
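# Illustrative sketch (not part of the source): _ToyKek is a hypothetical
# key-encryption-key exposing the three methods this function relies on.
# A real KEK would wrap with RSA-OAEP or AES key wrap, not a no-op.
# import os
# class _ToyKek(object):
#     def wrap_key(self, cek):
#         return cek[::-1]  # demo only: NOT real key wrapping
#     def get_kid(self):
#         return 'toy-kek-id'
#     def get_key_wrap_algorithm(self):
#         return 'DEMO-NOOP'
# meta = _generate_encryption_data_dict(_ToyKek(), os.urandom(32), os.urandom(16))
# list(meta)  # -> ['WrappedContentKey', 'EncryptionAgent', 'ContentEncryptionIV', 'KeyWrappingMetadata']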
def _dict_to_encryption_data(encryption_data_dict):
'''
Converts the specified dictionary to an EncryptionData object for
eventual use in decryption.
:param dict encryption_data_dict:
The dictionary containing the encryption data.
:return: an _EncryptionData object built from the dictionary.
:rtype: _EncryptionData
'''
try:
if encryption_data_dict['EncryptionAgent']['Protocol'] != _ENCRYPTION_PROTOCOL_V1:
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_VERSION)
except KeyError:
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_VERSION)
wrapped_content_key = encryption_data_dict['WrappedContentKey']
wrapped_content_key = _WrappedContentKey(wrapped_content_key['Algorithm'],
_decode_base64_to_bytes(wrapped_content_key['EncryptedKey']),
wrapped_content_key['KeyId'])
encryption_agent = encryption_data_dict['EncryptionAgent']
encryption_agent = _EncryptionAgent(encryption_agent['EncryptionAlgorithm'],
encryption_agent['Protocol'])
if 'KeyWrappingMetadata' in encryption_data_dict:
key_wrapping_metadata = encryption_data_dict['KeyWrappingMetadata']
else:
key_wrapping_metadata = None
encryption_data = _EncryptionData(_decode_base64_to_bytes(encryption_data_dict['ContentEncryptionIV']),
encryption_agent,
wrapped_content_key,
key_wrapping_metadata)
return encryption_data
def _generate_AES_CBC_cipher(cek, iv):
'''
Generates and returns an encryption cipher for AES CBC using the given cek and iv.
:param bytes cek: The content encryption key for the cipher.
:param bytes iv: The initialization vector for the cipher.
:return: A cipher for encrypting in AES256 CBC.
:rtype: ~cryptography.hazmat.primitives.ciphers.Cipher
'''
backend = default_backend()
algorithm = AES(cek)
mode = CBC(iv)
return Cipher(algorithm, mode, backend)
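# Illustrative sketch (not part of the source): an encrypt/decrypt round
# trip with the returned cipher. AES-CBC requires the plaintext length to
# be a multiple of the 16-byte block size; the library pads real payloads.
# import os
# cek, iv = os.urandom(32), os.urandom(16)
# cipher = _generate_AES_CBC_cipher(cek, iv)
# encryptor = cipher.encryptor()
# ciphertext = encryptor.update(b'sixteen byte msg') + encryptor.finalize()
# decryptor = cipher.decryptor()
# decryptor.update(ciphertext) + decryptor.finalize()  # -> b'sixteen byte msg'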
def _validate_and_unwrap_cek(encryption_data, key_encryption_key=None, key_resolver=None):
'''
Extracts and returns the content_encryption_key stored in the encryption_data object
and performs necessary validation on all parameters.
:param _EncryptionData encryption_data:
The encryption metadata of the retrieved value.
:param obj key_encryption_key:
The key_encryption_key used to unwrap the cek. Please refer to high-level service object
(i.e. TableService) instance variables for more details.
:param func key_resolver:
A function that, given a key_id, will return a key_encryption_key. Please refer
to the high-level service object (i.e. TableService) instance variables for more details.
:return: the content_encryption_key stored in the encryption_data object.
:rtype: bytes
'''
_validate_not_none('content_encryption_IV', encryption_data.content_encryption_IV)
_validate_not_none('encrypted_key', encryption_data.wrapped_content_key.encrypted_key)
_validate_encryption_protocol_version(encryption_data.encryption_agent.protocol)
content_encryption_key = None
# If the resolver exists, give priority to the key it finds.
if key_resolver is not None:
key_encryption_key = key_resolver(encryption_data.wrapped_content_key.key_id)
_validate_not_none('key_encryption_key', key_encryption_key)
_validate_key_encryption_key_unwrap(key_encryption_key)
_validate_kek_id(encryption_data.wrapped_content_key.key_id, key_encryption_key.get_kid())
# Will throw an exception if the specified algorithm is not supported.
content_encryption_key = key_encryption_key.unwrap_key(encryption_data.wrapped_content_key.encrypted_key,
encryption_data.wrapped_content_key.algorithm)
_validate_not_none('content_encryption_key', content_encryption_key)
return content_encryption_key
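# Illustrative sketch (not part of the source): a key resolver is just a
# callable mapping a key id to a KEK object with get_kid and unwrap_key;
# when provided, it takes priority over a directly-passed key_encryption_key.
# def _toy_resolver(kid):
#     return _keks[kid]  # _keks: a hypothetical {kid: kek} registry
# cek = _validate_and_unwrap_cek(encryption_data, key_resolver=_toy_resolver)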


@ -0,0 +1,186 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from sys import version_info
if version_info < (3,):
def _str(value):
if isinstance(value, unicode):
return value.encode('utf-8')
return str(value)
else:
_str = str
def _to_str(value):
return _str(value) if value is not None else None
from azure.common import (
AzureHttpError,
AzureConflictHttpError,
AzureMissingResourceHttpError,
AzureException,
)
from azure.cosmosdb.common._constants import (
_ENCRYPTION_PROTOCOL_V1,
)
_ERROR_CONFLICT = 'Conflict ({0})'
_ERROR_NOT_FOUND = 'Not found ({0})'
_ERROR_UNKNOWN = 'Unknown error ({0})'
_ERROR_STORAGE_MISSING_INFO = \
'You need to provide an account name and either an account_key or sas_token when creating a storage service.'
_ERROR_EMULATOR_DOES_NOT_SUPPORT_FILES = \
'The emulator does not support the file service.'
_ERROR_ACCESS_POLICY = \
'share_access_policy must be either SignedIdentifier or AccessPolicy ' + \
'instance'
_ERROR_PARALLEL_NOT_SEEKABLE = 'Parallel operations require a seekable stream.'
_ERROR_VALUE_SHOULD_BE_BYTES = '{0} should be of type bytes.'
_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM = '{0} should be of type bytes or a readable file-like/io.IOBase stream object.'
_ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM = '{0} should be a seekable file-like/io.IOBase type stream object.'
_ERROR_VALUE_SHOULD_BE_STREAM = '{0} should be a file-like/io.IOBase type stream object with a read method.'
_ERROR_VALUE_NONE = '{0} should not be None.'
_ERROR_VALUE_NONE_OR_EMPTY = '{0} should not be None or empty.'
_ERROR_VALUE_NEGATIVE = '{0} should not be negative.'
_ERROR_NO_SINGLE_THREAD_CHUNKING = \
'To use {0} chunk downloader more than 1 thread must be ' + \
'used since get_{0}_to_bytes should be called for single threaded ' + \
'{0} downloads.'
_ERROR_START_END_NEEDED_FOR_MD5 = \
'Both end_range and start_range need to be specified ' + \
'for getting content MD5.'
_ERROR_RANGE_TOO_LARGE_FOR_MD5 = \
'Getting content MD5 for a range greater than 4MB ' + \
'is not supported.'
_ERROR_MD5_MISMATCH = \
'MD5 mismatch. Expected value is \'{0}\', computed value is \'{1}\'.'
_ERROR_TOO_MANY_ACCESS_POLICIES = \
'Too many access policies provided. The server does not support setting more than 5 access policies on a single resource.'
_ERROR_OBJECT_INVALID = \
'{0} does not define a complete interface. Value of {1} is either missing or invalid.'
_ERROR_UNSUPPORTED_ENCRYPTION_VERSION = \
'Encryption version is not supported.'
_ERROR_DECRYPTION_FAILURE = \
'Decryption failed'
_ERROR_ENCRYPTION_REQUIRED = \
'Encryption required but no key was provided.'
_ERROR_DECRYPTION_REQUIRED = \
'Decryption required but neither key nor resolver was provided.' + \
' If you do not want to decrypt, please do not set the require encryption flag.'
_ERROR_INVALID_KID = \
'Provided or resolved key-encryption-key does not match the id of key used to encrypt.'
_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM = \
'Specified encryption algorithm is not supported.'
_ERROR_UNSUPPORTED_METHOD_FOR_ENCRYPTION = 'The require_encryption flag is set, but encryption is not supported' + \
' for this method.'
_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM = 'Unknown key wrap algorithm.'
_ERROR_DATA_NOT_ENCRYPTED = 'Encryption required, but received data does not contain appropriate metadata. ' + \
'Data was either not encrypted or metadata has been lost.'
def _dont_fail_on_exist(error):
''' don't throw exception if the resource exists.
This is called by create_* APIs with fail_on_exist=False'''
if isinstance(error, AzureConflictHttpError):
return False
else:
raise error
def _dont_fail_not_exist(error):
''' don't throw exception if the resource doesn't exist.
This is called by delete_* APIs with fail_not_exist=False'''
if isinstance(error, AzureMissingResourceHttpError):
return False
else:
raise error
def _http_error_handler(http_error):
''' Simple error handler for azure.'''
message = str(http_error)
if http_error.respbody is not None:
message += '\n' + http_error.respbody.decode('utf-8-sig')
raise AzureHttpError(message, http_error.status)
def _validate_type_bytes(param_name, param):
if not isinstance(param, bytes):
raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES.format(param_name))
def _validate_type_bytes_or_stream(param_name, param):
if not (isinstance(param, bytes) or hasattr(param, 'read')):
raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM.format(param_name))
def _validate_not_none(param_name, param):
if param is None:
raise ValueError(_ERROR_VALUE_NONE.format(param_name))
def _validate_content_match(server_md5, computed_md5):
if server_md5 != computed_md5:
raise AzureException(_ERROR_MD5_MISMATCH.format(server_md5, computed_md5))
def _validate_access_policies(identifiers):
if identifiers and len(identifiers) > 5:
raise AzureException(_ERROR_TOO_MANY_ACCESS_POLICIES)
def _validate_key_encryption_key_wrap(kek):
# Note that None is not callable and so will fail the second clause of each check.
if not hasattr(kek, 'wrap_key') or not callable(kek.wrap_key):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'wrap_key'))
if not hasattr(kek, 'get_kid') or not callable(kek.get_kid):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'get_kid'))
if not hasattr(kek, 'get_key_wrap_algorithm') or not callable(kek.get_key_wrap_algorithm):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'get_key_wrap_algorithm'))
def _validate_key_encryption_key_unwrap(kek):
if not hasattr(kek, 'get_kid') or not callable(kek.get_kid):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'get_kid'))
if not hasattr(kek, 'unwrap_key') or not callable(kek.unwrap_key):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'unwrap_key'))
def _validate_encryption_required(require_encryption, kek):
if require_encryption and (kek is None):
raise ValueError(_ERROR_ENCRYPTION_REQUIRED)
def _validate_decryption_required(require_encryption, kek, resolver):
if (require_encryption and (kek is None) and
(resolver is None)):
raise ValueError(_ERROR_DECRYPTION_REQUIRED)
def _validate_encryption_protocol_version(encryption_protocol):
if encryption_protocol != _ENCRYPTION_PROTOCOL_V1:
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_VERSION)
def _validate_kek_id(kid, resolved_id):
if kid != resolved_id:
raise ValueError(_ERROR_INVALID_KID)
def _validate_encryption_unsupported(require_encryption, key_encryption_key):
if require_encryption or (key_encryption_key is not None):
raise ValueError(_ERROR_UNSUPPORTED_METHOD_FOR_ENCRYPTION)
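# Illustrative sketch (not part of the source): the KEK validators are
# duck-typed interface checks, so any object with the right callables passes.
# class _WrapOnlyKek(object):
#     def wrap_key(self, cek): return cek
#     def get_kid(self): return 'kid-0'
#     def get_key_wrap_algorithm(self): return 'NOOP'
# _validate_key_encryption_key_wrap(_WrapOnlyKek())    # passes silently
# _validate_key_encryption_key_unwrap(_WrapOnlyKek())  # AttributeError: no unwrap_key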


@ -0,0 +1,83 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
class HTTPError(Exception):
'''
Represents an HTTP Exception when response status code >= 300.
:ivar int status:
the status code of the response
:ivar str message:
the message
:ivar list respheader:
the returned headers, as a list of (name, value) pairs
:ivar bytes respbody:
the body of the response
'''
def __init__(self, status, message, respheader, respbody):
self.status = status
self.respheader = respheader
self.respbody = respbody
Exception.__init__(self, message)
class HTTPResponse(object):
'''
Represents a response from an HTTP request.
:ivar int status:
the status code of the response
:ivar str message:
the message
:ivar dict headers:
the returned headers
:ivar bytes body:
the body of the response
'''
def __init__(self, status, message, headers, body):
self.status = status
self.message = message
self.headers = headers
self.body = body
class HTTPRequest(object):
'''
Represents an HTTP Request.
:ivar str host:
the host name to connect to
:ivar str method:
the method to use to connect (string such as GET, POST, PUT, etc.)
:ivar str path:
the uri fragment
:ivar dict query:
query parameters
:ivar dict headers:
header values
:ivar bytes body:
the body of the request.
'''
def __init__(self):
self.host = ''
self.method = ''
self.path = ''
self.query = {}  # dict of query name to value
self.headers = {}  # dict of header name to value
self.body = ''
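# Illustrative sketch (not part of the source): the service layer normally
# populates these fields; shown here with placeholder values.
# request = HTTPRequest()
# request.method = 'GET'
# request.host = 'myaccount.table.core.windows.net'
# request.path = '/Tables'
# request.headers = {'Accept': 'application/json;odata=nometadata'}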


@ -0,0 +1,112 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import logging
from azure.cosmosdb.common._http import HTTPResponse
from azure.cosmosdb.common._serialization import _get_data_bytes_or_stream_only
logger = logging.getLogger(__name__)
class _HTTPClient(object):
'''
Takes the request and sends it to cloud service and returns the response.
'''
def __init__(self, protocol=None, session=None, timeout=None):
'''
:param str protocol:
http or https.
:param requests.Session session:
session object created with requests library (or compatible).
:param int timeout:
timeout for the http request, in seconds.
'''
self.protocol = protocol
self.session = session
self.timeout = timeout
# By default, requests adds an Accept:*/* and Accept-Encoding to the session,
# which causes issues with some Azure REST APIs. Removing them here gives us
# the flexibility to add them back on a case-by-case basis.
if 'Accept' in self.session.headers:
del self.session.headers['Accept']
if 'Accept-Encoding' in self.session.headers:
del self.session.headers['Accept-Encoding']
self.proxies = None
def set_proxy(self, host, port, user, password):
'''
Sets the proxy server host and port for HTTP CONNECT tunnelling.
Note that the proxies are set directly on each request rather than on the
session object, because requests has a bug where a session-level proxy is
ignored in favor of an environment proxy. Proxy auth therefore only works
when the proxies are passed at request time, which overrides both.
:param str host:
Address of the proxy. Ex: '192.168.0.100'
:param int port:
Port of the proxy. Ex: 6000
:param str user:
User for proxy authorization.
:param str password:
Password for proxy authorization.
'''
if user and password:
proxy_string = '{}:{}@{}:{}'.format(user, password, host, port)
else:
proxy_string = '{}:{}'.format(host, port)
self.proxies = {'http': 'http://{}'.format(proxy_string),
'https': 'https://{}'.format(proxy_string)}
def perform_request(self, request):
'''
Sends an HTTPRequest to Azure Storage and returns an HTTPResponse. If
the response code indicates an error, raise an HTTPError.
:param HTTPRequest request:
The request to serialize and send.
:return: An HTTPResponse containing the parsed HTTP response.
:rtype: :class:`~azure.cosmosdb.common._http.HTTPResponse`
'''
# Verify the body is either bytes or a file-like/stream object
if request.body:
request.body = _get_data_bytes_or_stream_only('request.body', request.body)
# Construct the URI
uri = self.protocol.lower() + '://' + request.host + request.path
# Send the request
response = self.session.request(request.method,
uri,
params=request.query,
headers=request.headers,
data=request.body or None,
timeout=self.timeout,
proxies=self.proxies)
# Parse the response
status = int(response.status_code)
response_headers = {}
for key, value in response.headers.items():
response_headers[key.lower()] = value
wrap = HTTPResponse(status, response.reason, response_headers, response.content)
response.close()
return wrap
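# Illustrative sketch (not part of the source): wiring the client to a
# requests session; the host below is a placeholder.
# import requests
# client = _HTTPClient(protocol='https', session=requests.Session(), timeout=65)
# req = HTTPRequest()
# req.method, req.host, req.path = 'GET', 'example.invalid', '/'
# resp = client.perform_request(req)  # -> HTTPResponse(status, reason, headers, body)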


@ -0,0 +1,352 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
import uuid
from datetime import date
from io import (BytesIO, IOBase, SEEK_SET, SEEK_END, UnsupportedOperation)
from os import fstat
from time import time
from wsgiref.handlers import format_date_time
from dateutil.tz import tzutc
if sys.version_info >= (3,):
from urllib.parse import quote as url_quote
else:
from urllib2 import quote as url_quote
try:
from xml.etree import cElementTree as ETree
except ImportError:
from xml.etree import ElementTree as ETree
from azure.cosmosdb.common._error import (
_ERROR_VALUE_SHOULD_BE_BYTES,
_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM,
_ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM
)
from azure.cosmosdb.common._constants import (
X_MS_VERSION,
USER_AGENT_STRING,
)
from azure.cosmosdb.common.models import (
_unicode_type,
)
from azure.cosmosdb.common._common_conversion import (
_str,
)
def _to_utc_datetime(value):
# Azure expects the date value passed in to be UTC.
# Azure will always return values as UTC.
# If a date is passed in without timezone info, it is assumed to be UTC.
if value.tzinfo:
value = value.astimezone(tzutc())
return value.strftime('%Y-%m-%dT%H:%M:%SZ')
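# Illustrative sketch (not part of the source): naive datetimes are assumed
# to already be UTC; aware datetimes are converted before formatting.
# from datetime import datetime
# _to_utc_datetime(datetime(2017, 10, 2, 12, 0, 0))  # -> '2017-10-02T12:00:00Z'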
def _update_request(request):
# Verify body
if request.body:
request.body = _get_data_bytes_or_stream_only('request.body', request.body)
length = _len_plus(request.body)
# The only scenario where length is None here is a non-seekable stream object.
if length is None:
raise ValueError(_ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM)
# For PUT, POST, MERGE and DELETE, a Content-Length header must be added.
if request.method in ['PUT', 'POST', 'MERGE', 'DELETE']:
request.headers['Content-Length'] = str(length)
# append additional headers based on the service
request.headers['x-ms-version'] = X_MS_VERSION
request.headers['User-Agent'] = USER_AGENT_STRING
request.headers['x-ms-client-request-id'] = str(uuid.uuid1())
# If the host has a path component (e.g. local storage), move it
path = request.host.split('/', 1)
if len(path) == 2:
request.host = path[0]
request.path = '/{}{}'.format(path[1], request.path)
# Encode and optionally add local storage prefix to path
request.path = url_quote(request.path, '/()$=\',~')
def _add_metadata_headers(metadata, request):
if metadata:
if not request.headers:
request.headers = {}
for name, value in metadata.items():
request.headers['x-ms-meta-' + name] = value
def _add_date_header(request):
current_time = format_date_time(time())
request.headers['x-ms-date'] = current_time
def _get_data_bytes_only(param_name, param_value):
'''Validates the request body passed in and converts it to bytes
if our policy allows it.'''
if param_value is None:
return b''
if isinstance(param_value, bytes):
return param_value
raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES.format(param_name))
def _get_data_bytes_or_stream_only(param_name, param_value):
'''Validates the request body passed in is a stream/file-like or bytes
object.'''
if param_value is None:
return b''
if isinstance(param_value, bytes) or hasattr(param_value, 'read'):
return param_value
raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM.format(param_name))
def _get_request_body(request_body):
'''Converts an object into a request body. None becomes an empty
byte string, bytes and streams are passed through unchanged, and
anything else is converted to a string and UTF-8 encoded.'''
if request_body is None:
return b''
if isinstance(request_body, bytes) or isinstance(request_body, IOBase):
return request_body
if isinstance(request_body, _unicode_type):
return request_body.encode('utf-8')
request_body = str(request_body)
if isinstance(request_body, _unicode_type):
return request_body.encode('utf-8')
return request_body
def _convert_signed_identifiers_to_xml(signed_identifiers):
if signed_identifiers is None:
return ''
sis = ETree.Element('SignedIdentifiers')
for id, access_policy in signed_identifiers.items():
# SignedIdentifier element
si = ETree.SubElement(sis, 'SignedIdentifier')
# Id element
ETree.SubElement(si, 'Id').text = id
# Access policy element
policy = ETree.SubElement(si, 'AccessPolicy')
if access_policy.start:
start = access_policy.start
if isinstance(access_policy.start, date):
start = _to_utc_datetime(start)
ETree.SubElement(policy, 'Start').text = start
if access_policy.expiry:
expiry = access_policy.expiry
if isinstance(access_policy.expiry, date):
expiry = _to_utc_datetime(expiry)
ETree.SubElement(policy, 'Expiry').text = expiry
if access_policy.permission:
ETree.SubElement(policy, 'Permission').text = _str(access_policy.permission)
# Add xml declaration and serialize
stream = BytesIO()
try:
ETree.ElementTree(sis).write(stream, xml_declaration=True, encoding='utf-8', method='xml')
finally:
output = stream.getvalue()
stream.close()
return output
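# Illustrative sketch (not part of the source), assuming AccessPolicy from
# this package's models module:
# from azure.cosmosdb.common.models import AccessPolicy
# xml_bytes = _convert_signed_identifiers_to_xml(
#     {'policy-1': AccessPolicy(permission='r', expiry='2017-10-09T00:00:00Z')})
# b'<Id>policy-1</Id>' in xml_bytes  # -> True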
def _convert_service_properties_to_xml(logging, hour_metrics, minute_metrics, cors, target_version=None):
'''
<?xml version="1.0" encoding="utf-8"?>
<StorageServiceProperties>
<Logging>
<Version>version-number</Version>
<Delete>true|false</Delete>
<Read>true|false</Read>
<Write>true|false</Write>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
</Logging>
<HourMetrics>
<Version>version-number</Version>
<Enabled>true|false</Enabled>
<IncludeAPIs>true|false</IncludeAPIs>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
</HourMetrics>
<MinuteMetrics>
<Version>version-number</Version>
<Enabled>true|false</Enabled>
<IncludeAPIs>true|false</IncludeAPIs>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
</MinuteMetrics>
<Cors>
<CorsRule>
<AllowedOrigins>comma-separated-list-of-allowed-origins</AllowedOrigins>
<AllowedMethods>comma-separated-list-of-HTTP-verb</AllowedMethods>
<MaxAgeInSeconds>max-caching-age-in-seconds</MaxAgeInSeconds>
<ExposedHeaders>comma-separated-list-of-response-headers</ExposedHeaders>
<AllowedHeaders>comma-separated-list-of-request-headers</AllowedHeaders>
</CorsRule>
</Cors>
</StorageServiceProperties>
'''
service_properties_element = ETree.Element('StorageServiceProperties')
# Logging
if logging:
logging_element = ETree.SubElement(service_properties_element, 'Logging')
ETree.SubElement(logging_element, 'Version').text = logging.version
ETree.SubElement(logging_element, 'Delete').text = str(logging.delete)
ETree.SubElement(logging_element, 'Read').text = str(logging.read)
ETree.SubElement(logging_element, 'Write').text = str(logging.write)
retention_element = ETree.SubElement(logging_element, 'RetentionPolicy')
_convert_retention_policy_to_xml(logging.retention_policy, retention_element)
# HourMetrics
if hour_metrics:
hour_metrics_element = ETree.SubElement(service_properties_element, 'HourMetrics')
_convert_metrics_to_xml(hour_metrics, hour_metrics_element)
# MinuteMetrics
if minute_metrics:
minute_metrics_element = ETree.SubElement(service_properties_element, 'MinuteMetrics')
_convert_metrics_to_xml(minute_metrics, minute_metrics_element)
# CORS
# Make sure to still serialize empty list
if cors is not None:
cors_element = ETree.SubElement(service_properties_element, 'Cors')
for rule in cors:
cors_rule = ETree.SubElement(cors_element, 'CorsRule')
ETree.SubElement(cors_rule, 'AllowedOrigins').text = ",".join(rule.allowed_origins)
ETree.SubElement(cors_rule, 'AllowedMethods').text = ",".join(rule.allowed_methods)
ETree.SubElement(cors_rule, 'MaxAgeInSeconds').text = str(rule.max_age_in_seconds)
ETree.SubElement(cors_rule, 'ExposedHeaders').text = ",".join(rule.exposed_headers)
ETree.SubElement(cors_rule, 'AllowedHeaders').text = ",".join(rule.allowed_headers)
# Target version
if target_version:
ETree.SubElement(service_properties_element, 'DefaultServiceVersion').text = target_version
# Add xml declaration and serialize
stream = BytesIO()
try:
ETree.ElementTree(service_properties_element).write(stream, xml_declaration=True, encoding='utf-8',
method='xml')
finally:
output = stream.getvalue()
stream.close()
return output
def _convert_metrics_to_xml(metrics, root):
'''
<Version>version-number</Version>
<Enabled>true|false</Enabled>
<IncludeAPIs>true|false</IncludeAPIs>
<RetentionPolicy>
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
</RetentionPolicy>
'''
# Version
ETree.SubElement(root, 'Version').text = metrics.version
# Enabled
ETree.SubElement(root, 'Enabled').text = str(metrics.enabled)
# IncludeAPIs
if metrics.enabled and metrics.include_apis is not None:
ETree.SubElement(root, 'IncludeAPIs').text = str(metrics.include_apis)
# RetentionPolicy
retention_element = ETree.SubElement(root, 'RetentionPolicy')
_convert_retention_policy_to_xml(metrics.retention_policy, retention_element)
def _convert_retention_policy_to_xml(retention_policy, root):
'''
<Enabled>true|false</Enabled>
<Days>number-of-days</Days>
'''
# Enabled
ETree.SubElement(root, 'Enabled').text = str(retention_policy.enabled)
# Days
if retention_policy.enabled and retention_policy.days:
ETree.SubElement(root, 'Days').text = str(retention_policy.days)
def _len_plus(data):
length = None
# Check if object implements the __len__ method, covers most input cases such as bytearray.
try:
length = len(data)
except TypeError:
pass
if not length:
# Check if the stream is a file-like stream object.
# If so, calculate the size using the file descriptor.
try:
fileno = data.fileno()
except (AttributeError, UnsupportedOperation):
pass
else:
return fstat(fileno).st_size
# If the stream is seekable and tell() is implemented, calculate the stream size.
try:
current_position = data.tell()
data.seek(0, SEEK_END)
length = data.tell() - current_position
data.seek(current_position, SEEK_SET)
except (AttributeError, UnsupportedOperation):
pass
return length
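# Illustrative sketch (not part of the source): the three lookup strategies.
# _len_plus(b'abc')            # -> 3, via len()
# _len_plus(BytesIO(b'abcd'))  # -> 4, via tell()/seek() (BytesIO has no fileno)
# _len_plus(object())          # -> None: no len, fileno, or tell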


@ -0,0 +1,130 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
# Note that we import TableService on demand
# because this module is imported by the package __init__,
# i.e. we don't want importing the common package to trigger an automatic
# import of the table package.
from azure.cosmosdb.common._error import _validate_not_none
from azure.cosmosdb.common.models import (
ResourceTypes,
Services,
AccountPermissions,
)
from azure.cosmosdb.common.sharedaccesssignature import (
SharedAccessSignature,
)
class CloudStorageAccount(object):
"""
Provides a factory for creating the table service
with a common account name and account key or sas token. Users can either
use the factory or can construct the appropriate service directly.
"""
def __init__(self, account_name=None, account_key=None, sas_token=None, is_emulated=None):
'''
:param str account_name:
The storage account name. This is used to authenticate requests
signed with an account key and to construct the storage endpoint. It
is required unless is_emulated is used.
:param str account_key:
The storage account key. This is used for shared key authentication.
:param str sas_token:
A shared access signature token to use to authenticate requests
instead of the account key. If account key and sas token are both
specified, account key will be used to sign.
:param bool is_emulated:
Whether to use the emulator. Defaults to False. If specified, will
override all other parameters.
'''
self.account_name = account_name
self.account_key = account_key
self.sas_token = sas_token
self.is_emulated = is_emulated
def create_table_service(self):
'''
Creates a TableService object with the settings specified in the
CloudStorageAccount.
:return: A service object.
:rtype: :class:`~azure.cosmosdb.table.tableservice.TableService`
'''
try:
from azure.cosmosdb.table.tableservice import TableService
return TableService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
except ImportError:
raise Exception('The package azure-cosmosdb-table is required. '
+ 'Please install it using "pip install azure-cosmosdb-table"')
def generate_shared_access_signature(self, services, resource_types,
permission, expiry, start=None,
ip=None, protocol=None):
'''
Generates a shared access signature for the account.
Use the returned signature with the sas_token parameter of the service
or to create a new account object.
:param Services services:
Specifies the services accessible with the account SAS. You can
combine values to provide access to more than one service.
:param ResourceTypes resource_types:
Specifies the resource types that are accessible with the account
SAS. You can combine values to provide access to more than one
resource type.
:param AccountPermissions permission:
The permissions associated with the shared access signature. The
user is restricted to operations allowed by the permissions.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has been
specified in an associated stored access policy. You can combine
values to provide more than one permission.
:param expiry:
The time at which the shared access signature becomes invalid.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has
been specified in an associated stored access policy. Azure will always
convert values to UTC. If a date is passed in without timezone info, it
is assumed to be UTC.
:type expiry: datetime or str
:param start:
The time at which the shared access signature becomes valid. If
omitted, start time for this call is assumed to be the time when the
storage service receives the request. Azure will always convert values
to UTC. If a date is passed in without timezone info, it is assumed to
be UTC.
:type start: datetime or str
:param str ip:
Specifies an IP address or a range of IP addresses from which to accept requests.
If the IP address from which the request originates does not match the IP address
or address range specified on the SAS token, the request is not authenticated.
For example, specifying sip=168.1.5.65 or sip=168.1.5.60-168.1.5.70 on the SAS
restricts the request to those IP addresses.
:param str protocol:
Specifies the protocol permitted for a request made. Possible values are
both HTTPS and HTTP (https,http) or HTTPS only (https). The default value
is https,http. Note that HTTP only is not a permitted value.
'''
_validate_not_none('self.account_name', self.account_name)
_validate_not_none('self.account_key', self.account_key)
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_account(services, resource_types, permission,
expiry, start=start, ip=ip, protocol=protocol)
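# Illustrative sketch (not part of the source): issuing a one-day account SAS
# for table reads; the account name and key are placeholders, and
# AccountPermissions.READ is assumed to be defined alongside the class below.
# from datetime import datetime, timedelta
# account = CloudStorageAccount(account_name='myaccount', account_key='<base64-key>')
# token = account.generate_shared_access_signature(
#     Services.TABLE, ResourceTypes.OBJECT, AccountPermissions.READ,
#     expiry=datetime.utcnow() + timedelta(days=1))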


@ -0,0 +1,629 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
if sys.version_info < (3,):
from collections import Iterable
_unicode_type = unicode
else:
from collections.abc import Iterable
_unicode_type = str
from azure.cosmosdb.common._error import (
_validate_not_none
)
class _HeaderDict(dict):
def __getitem__(self, index):
return super(_HeaderDict, self).__getitem__(index.lower())
class _list(list):
'''Used so that additional properties can be set on the return list'''
pass
class _dict(dict):
'''Used so that additional properties can be set on the return dictionary'''
pass
class _OperationContext(object):
'''
Contains information that lasts the lifetime of an operation. This operation
may span multiple calls to the Azure service.
:ivar bool location_lock:
Whether the location should be locked for this operation.
:ivar str location:
The location to lock to.
'''
def __init__(self, location_lock=False):
self.location_lock = location_lock
self.host_location = None
class ListGenerator(Iterable):
'''
A generator object used to list storage resources. The generator will lazily
follow the continuation tokens returned by the service and stop when all
resources have been returned or max_results is reached.
If max_results is specified and the account has more than that number of
resources, the generator will have a populated next_marker field once it
finishes. This marker can be used to create a new generator if more
results are desired.
'''
def __init__(self, resources, list_method, list_args, list_kwargs):
self.items = resources
self.next_marker = resources.next_marker
self._list_method = list_method
self._list_args = list_args
self._list_kwargs = list_kwargs
def __iter__(self):
# return results
for i in self.items:
yield i
while True:
# if no more results on the service, return
if not self.next_marker:
break
# update the marker args
self._list_kwargs['marker'] = self.next_marker
# handle max results, if present
max_results = self._list_kwargs.get('max_results')
if max_results is not None:
max_results = max_results - len(self.items)
# if we've reached max_results, return
# else, update the max_results arg
if max_results <= 0:
break
else:
self._list_kwargs['max_results'] = max_results
# get the next segment
resources = self._list_method(*self._list_args, **self._list_kwargs)
self.items = resources
self.next_marker = resources.next_marker
# return results
for i in self.items:
yield i
class RetryContext(object):
'''
Contains the request and response information that can be used to determine
whether and how to retry. This context is stored across retries and may be
used to store other information relevant to the retry strategy.
:ivar ~azure.cosmosdb.common._http.HTTPRequest request:
The request sent to the storage service.
:ivar ~azure.cosmosdb.common._http.HTTPResponse response:
The response returned by the storage service.
:ivar LocationMode location_mode:
The location the request was sent to.
'''
def __init__(self):
self.request = None
self.response = None
self.location_mode = None
class LocationMode(object):
'''
Specifies the location the request should be sent to. This mode only applies
for RA-GRS accounts which allow secondary read access. All other account types
must use PRIMARY.
'''
PRIMARY = 'primary'
''' Requests should be sent to the primary location. '''
SECONDARY = 'secondary'
''' Requests should be sent to the secondary location, if possible. '''
class RetentionPolicy(object):
'''
By default, Storage Analytics will not delete any logging or metrics data. Blobs
and table entities will continue to be written until the shared 20TB limit is
reached. Once the 20TB limit is reached, Storage Analytics will stop writing
new data and will not resume until free space is available. This 20TB limit
is independent of the total limit for your storage account.
There are two ways to delete Storage Analytics data: by manually making deletion
requests or by setting a data retention policy. Manual requests to delete Storage
Analytics data are billable, but delete requests resulting from a retention policy
are not billable.
'''
def __init__(self, enabled=False, days=None):
'''
:param bool enabled:
Indicates whether a retention policy is enabled for the
storage service. If disabled, logging and metrics data will be retained
infinitely by the service unless explicitly deleted.
:param int days:
Required if enabled is true. Indicates the number of
days that metrics or logging data should be retained. All data older
than this value will be deleted. The minimum value you can specify is 1;
the largest value is 365 (one year).
'''
_validate_not_none("enabled", enabled)
if enabled:
_validate_not_none("days", days)
self.enabled = enabled
self.days = days
class Logging(object):
'''
Storage Analytics logs detailed information about successful and failed requests
to a storage service. This information can be used to monitor individual requests
and to diagnose issues with a storage service. Requests are logged on a best-effort
basis.
All logs are stored in block blobs in a container named $logs, which is
automatically created when Storage Analytics is enabled for a storage account.
The $logs container is located in the blob namespace of the storage account.
This container cannot be deleted once Storage Analytics has been enabled, though
its contents can be deleted.
For more information, see https://msdn.microsoft.com/en-us/library/azure/hh343262.aspx
'''
def __init__(self, delete=False, read=False, write=False,
retention_policy=None):
'''
:param bool delete:
Indicates whether all delete requests should be logged.
:param bool read:
Indicates whether all read requests should be logged.
:param bool write:
Indicates whether all write requests should be logged.
:param RetentionPolicy retention_policy:
The retention policy for the metrics.
'''
_validate_not_none("read", read)
_validate_not_none("write", write)
_validate_not_none("delete", delete)
self.version = u'1.0'
self.delete = delete
self.read = read
self.write = write
self.retention_policy = retention_policy if retention_policy else RetentionPolicy()
class Metrics(object):
'''
Metrics include aggregated transaction statistics and capacity data about requests
to a storage service. Transactions are reported at both the API operation level
as well as at the storage service level, and capacity is reported at the storage
service level. Metrics data can be used to analyze storage service usage, diagnose
issues with requests made against the storage service, and to improve the
performance of applications that use a service.
For more information, see https://msdn.microsoft.com/en-us/library/azure/hh343258.aspx
'''
def __init__(self, enabled=False, include_apis=None,
retention_policy=None):
'''
:param bool enabled:
Indicates whether metrics are enabled for
the service.
:param bool include_apis:
Required if enabled is True. Indicates whether metrics
should generate summary statistics for called API operations.
:param RetentionPolicy retention_policy:
The retention policy for the metrics.
'''
_validate_not_none("enabled", enabled)
if enabled:
_validate_not_none("include_apis", include_apis)
self.version = u'1.0'
self.enabled = enabled
self.include_apis = include_apis
self.retention_policy = retention_policy if retention_policy else RetentionPolicy()
class CorsRule(object):
'''
CORS is an HTTP feature that enables a web application running under one domain
to access resources in another domain. Web browsers implement a security
restriction known as same-origin policy that prevents a web page from calling
APIs in a different domain; CORS provides a secure way to allow one domain
(the origin domain) to call APIs in another domain.
For more information, see https://msdn.microsoft.com/en-us/library/azure/dn535601.aspx
'''
def __init__(self, allowed_origins, allowed_methods, max_age_in_seconds=0,
exposed_headers=None, allowed_headers=None):
'''
:param allowed_origins:
A list of origin domains that will be allowed via CORS, or "*" to allow
all domains. The list must contain at least one entry. Limited to 64
origin domains. Each allowed origin can have up to 256 characters.
:type allowed_origins: list(str)
:param allowed_methods:
A list of HTTP methods that are allowed to be executed by the origin.
The list must contain at least one entry. For Azure Storage,
permitted methods are DELETE, GET, HEAD, MERGE, POST, OPTIONS or PUT.
:type allowed_methods: list(str)
:param int max_age_in_seconds:
The number of seconds that the client/browser should cache a
preflight response.
:param exposed_headers:
Defaults to an empty list. A list of response headers to expose to CORS
clients. Limited to 64 defined headers and two prefixed headers. Each
header can be up to 256 characters.
:type exposed_headers: list(str)
:param allowed_headers:
Defaults to an empty list. A list of headers allowed to be part of
the cross-origin request. Limited to 64 defined headers and 2 prefixed
headers. Each header can be up to 256 characters.
:type allowed_headers: list(str)
'''
_validate_not_none("allowed_origins", allowed_origins)
_validate_not_none("allowed_methods", allowed_methods)
_validate_not_none("max_age_in_seconds", max_age_in_seconds)
self.allowed_origins = allowed_origins if allowed_origins else list()
self.allowed_methods = allowed_methods if allowed_methods else list()
self.max_age_in_seconds = max_age_in_seconds
self.exposed_headers = exposed_headers if exposed_headers else list()
self.allowed_headers = allowed_headers if allowed_headers else list()
class ServiceProperties(object):
'''
Returned by get_*_service_properties functions. Contains the properties of a
storage service, including Analytics and CORS rules.
Azure Storage Analytics performs logging and provides metrics data for a storage
account. You can use this data to trace requests, analyze usage trends, and
diagnose issues with your storage account. To use Storage Analytics, you must
enable it individually for each service you want to monitor.
The aggregated data is stored in a well-known blob (for logging) and in well-known
tables (for metrics), which may be accessed using the Blob service and Table
service APIs.
For an in-depth guide on using Storage Analytics and other tools to identify,
diagnose, and troubleshoot Azure Storage-related issues, see
http://azure.microsoft.com/documentation/articles/storage-monitoring-diagnosing-troubleshooting/
For more information on CORS, see https://msdn.microsoft.com/en-us/library/azure/dn535601.aspx
'''
pass
class ServiceStats(object):
'''
Returned by get_*_service_stats functions. Contains statistics related to
replication for the given service. It is only available when read-access
geo-redundant replication is enabled for the storage account.
:ivar GeoReplication geo_replication:
An object containing statistics related to replication for the given service.
'''
pass
class GeoReplication(object):
'''
Contains statistics related to replication for the given service.
:ivar str status:
The status of the secondary location. Possible values are:
live: Indicates that the secondary location is active and operational.
bootstrap: Indicates initial synchronization from the primary location
to the secondary location is in progress. This typically occurs
when replication is first enabled.
unavailable: Indicates that the secondary location is temporarily
unavailable.
:ivar date last_sync_time:
A GMT date value, to the second. All primary writes preceding this value
are guaranteed to be available for read operations at the secondary.
Primary writes after this point in time may or may not be available for
reads. The value may be empty if LastSyncTime is not available. This can
happen if the replication status is bootstrap or unavailable. Although
geo-replication is continuously enabled, the LastSyncTime result may
reflect a cached value from the service that is refreshed every few minutes.
'''
pass
class AccessPolicy(object):
'''
Access Policy class used by the set and get acl methods in each service.
A stored access policy can specify the start time, expiry time, and
permissions for the Shared Access Signatures with which it's associated.
Depending on how you want to control access to your table resource, you can
specify all of these parameters within the stored access policy, and omit
them from the URL for the Shared Access Signature. Doing so permits you to
modify the associated signature's behavior at any time, as well as to revoke
it. Or you can specify one or more of the access policy parameters within
the stored access policy, and the others on the URL. Finally, you can
specify all of the parameters on the URL. In this case, you can use the
stored access policy to revoke the signature, but not to modify its behavior.
Together the Shared Access Signature and the stored access policy must
include all fields required to authenticate the signature. If any required
fields are missing, the request will fail. Likewise, if a field is specified
both in the Shared Access Signature URL and in the stored access policy, the
request will fail with status code 400 (Bad Request).
'''
def __init__(self, permission=None, expiry=None, start=None):
'''
:param str permission:
The permissions associated with the shared access signature. The
user is restricted to operations allowed by the permissions.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has been
specified in an associated stored access policy.
:param expiry:
The time at which the shared access signature becomes invalid.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has
been specified in an associated stored access policy. Azure will always
convert values to UTC. If a date is passed in without timezone info, it
is assumed to be UTC.
:type expiry: datetime or str
:param start:
The time at which the shared access signature becomes valid. If
omitted, start time for this call is assumed to be the time when the
storage service receives the request. Azure will always convert values
to UTC. If a date is passed in without timezone info, it is assumed to
be UTC.
:type start: datetime or str
'''
self.start = start
self.expiry = expiry
self.permission = permission
class Protocol(object):
'''
Specifies the protocol permitted for a SAS token. Note that HTTP-only
access is not allowed.
'''
HTTPS = 'https'
''' Allow HTTPS requests only. '''
HTTPS_HTTP = 'https,http'
''' Allow HTTP and HTTPS requests. '''
class ResourceTypes(object):
'''
Specifies the resource types that are accessible with the account SAS.
:ivar ResourceTypes ResourceTypes.CONTAINER:
Access to container-level APIs (e.g., Create/Delete Container,
Create/Delete Queue, Create/Delete Table, Create/Delete Share,
List Blobs/Files and Directories)
:ivar ResourceTypes ResourceTypes.OBJECT:
Access to object-level APIs for blobs, queue messages, table entities, and
files (e.g., Put Blob, Query Entity, Get Messages, Create File, etc.)
:ivar ResourceTypes ResourceTypes.SERVICE:
Access to service-level APIs (e.g., Get/Set Service Properties,
Get Service Stats, List Containers/Queues/Tables/Shares)
'''
def __init__(self, service=False, container=False, object=False, _str=None):
'''
:param bool service:
Access to service-level APIs (e.g., Get/Set Service Properties,
Get Service Stats, List Containers/Queues/Tables/Shares)
:param bool container:
Access to container-level APIs (e.g., Create/Delete Container,
Create/Delete Queue, Create/Delete Table, Create/Delete Share,
List Blobs/Files and Directories)
:param bool object:
Access to object-level APIs for blobs, queue messages, table entities, and
files (e.g., Put Blob, Query Entity, Get Messages, Create File, etc.)
:param str _str:
A string representing the resource types.
'''
if not _str:
_str = ''
self.service = service or ('s' in _str)
self.container = container or ('c' in _str)
self.object = object or ('o' in _str)
def __or__(self, other):
return ResourceTypes(_str=str(self) + str(other))
def __add__(self, other):
return ResourceTypes(_str=str(self) + str(other))
def __str__(self):
return (('s' if self.service else '') +
('c' if self.container else '') +
('o' if self.object else ''))
ResourceTypes.SERVICE = ResourceTypes(service=True)
ResourceTypes.CONTAINER = ResourceTypes(container=True)
ResourceTypes.OBJECT = ResourceTypes(object=True)
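# An illustrative sketch of combining the flags above; the string form is what
# ends up in the 'srt' query parameter of an account SAS.
def _example_resource_types():
    combined = ResourceTypes.SERVICE | ResourceTypes.OBJECT
    assert str(combined) == 'so'
    assert combined.service and combined.object and not combined.container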
class Services(object):
'''
Specifies the services accessible with the account SAS.
:ivar Services Services.BLOB: The blob service.
:ivar Services Services.FILE: The file service.
:ivar Services Services.QUEUE: The queue service.
:ivar Services Services.TABLE: The table service.
'''
def __init__(self, blob=False, queue=False, table=False, file=False, _str=None):
'''
:param bool blob:
Access to any blob service, for example, the `.BlockBlobService`
:param bool queue:
Access to the `.QueueService`
:param bool table:
Access to the `.TableService`
:param bool file:
Access to the `.FileService`
:param str _str:
A string representing the services.
'''
if not _str:
_str = ''
self.blob = blob or ('b' in _str)
self.queue = queue or ('q' in _str)
self.table = table or ('t' in _str)
self.file = file or ('f' in _str)
def __or__(self, other):
return Services(_str=str(self) + str(other))
def __add__(self, other):
return Services(_str=str(self) + str(other))
def __str__(self):
return (('b' if self.blob else '') +
('q' if self.queue else '') +
('t' if self.table else '') +
('f' if self.file else ''))
Services.BLOB = Services(blob=True)
Services.QUEUE = Services(queue=True)
Services.TABLE = Services(table=True)
Services.FILE = Services(file=True)
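# An illustrative sketch of combining services; the string form feeds the
# 'ss' query parameter of an account SAS.
def _example_services():
    combined = Services.BLOB | Services.TABLE
    assert str(combined) == 'bt'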
class AccountPermissions(object):
'''
:class:`~AccountPermissions` class to be used with the generate_shared_access_signature
method and for the AccessPolicies used with set_*_acl. There are two types of
SAS which may be used to grant resource access. One is to grant access to a
specific resource (resource-specific). Another is to grant access to the
entire service for a specific account and allow certain operations based on
the permissions found here.
:ivar AccountPermissions AccountPermissions.ADD:
Valid for the following Object resource types only: queue messages, table
entities, and append blobs.
:ivar AccountPermissions AccountPermissions.CREATE:
Valid for the following Object resource types only: blobs and files. Users
can create new blobs or files, but may not overwrite existing blobs or files.
:ivar AccountPermissions AccountPermissions.DELETE:
Valid for Container and Object resource types, except for queue messages.
:ivar AccountPermissions AccountPermissions.LIST:
Valid for Service and Container resource types only.
:ivar AccountPermissions AccountPermissions.PROCESS:
Valid for the following Object resource type only: queue messages.
:ivar AccountPermissions AccountPermissions.READ:
Valid for all signed resources types (Service, Container, and Object).
Permits read permissions to the specified resource type.
:ivar AccountPermissions AccountPermissions.UPDATE:
Valid for the following Object resource types only: queue messages and table
entities.
:ivar AccountPermissions AccountPermissions.WRITE:
Valid for all signed resources types (Service, Container, and Object).
Permits write permissions to the specified resource type.
'''
def __init__(self, read=False, write=False, delete=False, list=False,
add=False, create=False, update=False, process=False, _str=None):
'''
:param bool read:
Valid for all signed resources types (Service, Container, and Object).
Permits read permissions to the specified resource type.
:param bool write:
Valid for all signed resources types (Service, Container, and Object).
Permits write permissions to the specified resource type.
:param bool delete:
Valid for Container and Object resource types, except for queue messages.
:param bool list:
Valid for Service and Container resource types only.
:param bool add:
Valid for the following Object resource types only: queue messages,
table entities, and append blobs.
:param bool create:
Valid for the following Object resource types only: blobs and files.
Users can create new blobs or files, but may not overwrite existing
blobs or files.
:param bool update:
Valid for the following Object resource types only: queue messages and
table entities.
:param bool process:
Valid for the following Object resource type only: queue messages.
:param str _str:
A string representing the permissions.
'''
if not _str:
_str = ''
self.read = read or ('r' in _str)
self.write = write or ('w' in _str)
self.delete = delete or ('d' in _str)
self.list = list or ('l' in _str)
self.add = add or ('a' in _str)
self.create = create or ('c' in _str)
self.update = update or ('u' in _str)
self.process = process or ('p' in _str)
def __or__(self, other):
return AccountPermissions(_str=str(self) + str(other))
def __add__(self, other):
return AccountPermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('w' if self.write else '') +
('d' if self.delete else '') +
('l' if self.list else '') +
('a' if self.add else '') +
('c' if self.create else '') +
('u' if self.update else '') +
('p' if self.process else ''))
AccountPermissions.READ = AccountPermissions(read=True)
AccountPermissions.WRITE = AccountPermissions(write=True)
AccountPermissions.DELETE = AccountPermissions(delete=True)
AccountPermissions.LIST = AccountPermissions(list=True)
AccountPermissions.ADD = AccountPermissions(add=True)
AccountPermissions.CREATE = AccountPermissions(create=True)
AccountPermissions.UPDATE = AccountPermissions(update=True)
AccountPermissions.PROCESS = AccountPermissions(process=True)
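# An illustrative sketch of combining account permissions; the string form
# feeds the 'sp' query parameter of an account SAS.
def _example_account_permissions():
    combined = AccountPermissions.READ | AccountPermissions.LIST
    assert str(combined) == 'rl'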


@ -0,0 +1,267 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from abc import ABCMeta
from math import pow
from azure.cosmosdb.common.models import LocationMode
class _Retry(object):
'''
The base class for Exponential and Linear retries containing shared code.
'''
__metaclass__ = ABCMeta
def __init__(self, max_attempts, retry_to_secondary):
'''
Constructs a base retry object.
:param int max_attempts:
The maximum number of retry attempts.
:param bool retry_to_secondary:
Whether the request should be retried to secondary, if able. This should
only be enabled if RA-GRS accounts are used and potentially stale data
can be handled.
'''
self.max_attempts = max_attempts
self.retry_to_secondary = retry_to_secondary
def _should_retry(self, context):
'''
A function which determines whether or not to retry.
:param ~azure.storage.models.RetryContext context:
The retry context. This contains the request, response, and other data
which can be used to determine whether or not to retry.
:return:
A boolean indicating whether or not to retry the request.
:rtype: bool
'''
# If max attempts are reached, do not retry.
if context.count >= self.max_attempts:
return False
status = None
if context.response and context.response.status:
status = context.response.status
if status is None:
'''
If status is None, retry as this request triggered an exception. For
example, network issues would trigger this.
'''
return True
elif 200 <= status < 300:
'''
This method is called after a successful response, meaning we failed
during the response body download or parsing. So, success codes should
be retried.
'''
return True
elif 300 <= status < 500:
'''
An exception occurred, but in most cases it was expected. Examples could
include a 409 Conflict or 412 Precondition Failed.
'''
if status == 404 and context.location_mode == LocationMode.SECONDARY:
# Response code 404 should be retried if secondary was used.
return True
if status == 408:
# Response code 408 is a timeout and should be retried.
return True
return False
elif status >= 500:
'''
Response codes above 500 with the exception of 501 Not Implemented and
505 Version Not Supported indicate a server issue and should be retried.
'''
if status == 501 or status == 505:
return False
return True
else:
# If something else happened, it's unexpected. Retry.
return True
def _set_next_host_location(self, context):
'''
A function which sets the next host location on the request, if applicable.
:param ~azure.storage.models.RetryContext context:
The retry context containing the previous host location and the request
to evaluate and possibly modify.
'''
if len(context.request.host_locations) > 1:
# If there's more than one possible location, retry to the alternative
if context.location_mode == LocationMode.PRIMARY:
context.location_mode = LocationMode.SECONDARY
else:
context.location_mode = LocationMode.PRIMARY
context.request.host = context.request.host_locations.get(context.location_mode)
def _retry(self, context, backoff):
'''
A function which determines whether and how to retry.
:param ~azure.storage.models.RetryContext context:
The retry context. This contains the request, response, and other data
which can be used to determine whether or not to retry.
:param function() backoff:
A function which returns the backoff time if a retry is to be performed.
:return:
An integer indicating how long to wait before retrying the request,
or None to indicate no retry should be performed.
:rtype: int or None
'''
# If the context does not contain a count parameter, this request has not
# been retried yet. Add the count parameter to track the number of retries.
if not hasattr(context, 'count'):
context.count = 0
# Determine whether to retry, and if so increment the count, modify the
# request as desired, and return the backoff.
if self._should_retry(context):
backoff_interval = backoff(context)
context.count += 1
# If retry to secondary is enabled, attempt to change the host if the
# request allows it
if self.retry_to_secondary:
self._set_next_host_location(context)
return backoff_interval
return None
class ExponentialRetry(_Retry):
'''
Exponential retry.
'''
def __init__(self, initial_backoff=15, increment_power=3, max_attempts=3,
retry_to_secondary=False):
'''
Constructs an Exponential retry object. The initial_backoff is used for
the first retry. Subsequent retries are retried after initial_backoff +
increment_power^retry_count seconds. For example, by default the first retry
occurs after 15 seconds, the second after (15+3^1) = 18 seconds, and the
third after (15+3^2) = 24 seconds.
:param int initial_backoff:
The initial backoff interval, in seconds, for the first retry.
:param int increment_power:
The exponential base which, raised to the number of retries so far, is
added to initial_backoff to compute the interval for subsequent retries.
:param int max_attempts:
The maximum number of retry attempts.
:param bool retry_to_secondary:
Whether the request should be retried to secondary, if able. This should
only be enabled if RA-GRS accounts are used and potentially stale data
can be handled.
'''
self.initial_backoff = initial_backoff
self.increment_power = increment_power
super(ExponentialRetry, self).__init__(max_attempts, retry_to_secondary)
def retry(self, context):
'''
A function which determines whether and how to retry.
:param ~azure.storage.models.RetryContext context:
The retry context. This contains the request, response, and other data
which can be used to determine whether or not to retry.
:return:
An integer indicating how long to wait before retrying the request,
or None to indicate no retry should be performed.
:rtype: int or None
'''
return self._retry(context, self._backoff)
def _backoff(self, context):
'''
Calculates how long to sleep before retrying.
:return:
An integer indicating how long to wait before retrying the request.
:rtype: int
'''
return self.initial_backoff + (0 if context.count == 0 else pow(self.increment_power, context.count))
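# An illustrative check of the schedule documented above, using a minimal
# stand-in for the retry context (only the 'count' attribute is read here).
def _example_exponential_backoff():
    class _Context(object):
        count = 0
    policy = ExponentialRetry(initial_backoff=15, increment_power=3)
    context = _Context()
    waits = []
    for count in range(3):
        context.count = count
        waits.append(policy._backoff(context))
    assert waits == [15, 18, 24]  # 15, 15+3^1, 15+3^2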
class LinearRetry(_Retry):
'''
Linear retry.
'''
def __init__(self, backoff=15, max_attempts=3, retry_to_secondary=False):
'''
Constructs a Linear retry object.
:param int backoff:
The backoff interval, in seconds, between retries.
:param int max_attempts:
The maximum number of retry attempts.
:param bool retry_to_secondary:
Whether the request should be retried to secondary, if able. This should
only be enabled if RA-GRS accounts are used and potentially stale data
can be handled.
'''
self.backoff = backoff
self.max_attempts = max_attempts
super(LinearRetry, self).__init__(max_attempts, retry_to_secondary)
def retry(self, context):
'''
A function which determines whether and how to retry.
:param ~azure.storage.models.RetryContext context:
The retry context. This contains the request, response, and other data
which can be used to determine whether or not to retry.
:return:
An integer indicating how long to wait before retrying the request,
or None to indicate no retry should be performed.
:rtype: int or None
'''
return self._retry(context, self._backoff)
def _backoff(self, context):
'''
Calculates how long to sleep before retrying.
:return:
An integer indicating how long to wait before retrying the request.
:rtype: int
'''
return self.backoff
def no_retry(context):
'''
Specifies never to retry.
:param ~azure.storage.models.RetryContext context:
The retry context.
:return:
Always returns None to indicate never to retry.
:rtype: None
'''
return None
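# A hedged sketch of wiring the retry policies above onto a service object;
# 'service' stands for any client exposing the 'retry' attribute documented
# on StorageClient later in this commit.
def _example_configure_retry(service):
    # Constant 5 second wait between up to 5 attempts.
    service.retry = LinearRetry(backoff=5, max_attempts=5).retry
    # Or disable retries entirely.
    service.retry = no_retry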


@ -0,0 +1,315 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from datetime import date
from azure.cosmosdb.common._common_conversion import (
_sign_string,
_to_str,
)
from azure.cosmosdb.common._constants import X_MS_VERSION
from azure.cosmosdb.common._serialization import (
url_quote,
_to_utc_datetime,
)
class SharedAccessSignature(object):
'''
Provides a factory for creating table shared access
signature tokens with a common account name and account key. Users can either
use the factory or can construct the appropriate service and use the
generate_*_shared_access_signature method directly.
'''
def __init__(self, account_name, account_key):
'''
:param str account_name:
The storage account name used to generate the shared access signatures.
:param str account_key:
The access key to generate the shared access signatures.
'''
self.account_name = account_name
self.account_key = account_key
def generate_table(self, table_name, permission=None,
expiry=None, start=None, id=None,
ip=None, protocol=None,
start_pk=None, start_rk=None,
end_pk=None, end_rk=None):
'''
Generates a shared access signature for the table.
Use the returned signature with the sas_token parameter of TableService.
:param str table_name:
Name of table.
:param TablePermissions permission:
The permissions associated with the shared access signature. The
user is restricted to operations allowed by the permissions.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has been
specified in an associated stored access policy.
:param expiry:
The time at which the shared access signature becomes invalid.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has
been specified in an associated stored access policy. Azure will always
convert values to UTC. If a date is passed in without timezone info, it
is assumed to be UTC.
:type expiry: datetime or str
:param start:
The time at which the shared access signature becomes valid. If
omitted, start time for this call is assumed to be the time when the
storage service receives the request. Azure will always convert values
to UTC. If a date is passed in without timezone info, it is assumed to
be UTC.
:type start: datetime or str
:param str id:
A unique value up to 64 characters in length that correlates to a
stored access policy. To create a stored access policy, use
set_table_acl.
:param str ip:
Specifies an IP address or a range of IP addresses from which to accept requests.
If the IP address from which the request originates does not match the IP address
or address range specified on the SAS token, the request is not authenticated.
For example, specifying sip=168.1.5.65 or sip=168.1.5.60-168.1.5.70 on the SAS
restricts the request to those IP addresses.
:param str protocol:
Specifies the protocol permitted for a request made with the SAS token. The default
value is https,http. See :class:`~azure.storage.common.models.Protocol` for possible values.
:param str start_pk:
The minimum partition key accessible with this shared access
signature. startpk must accompany startrk. Key values are inclusive.
If omitted, there is no lower bound on the table entities that can
be accessed.
:param str start_rk:
The minimum row key accessible with this shared access signature.
startpk must accompany startrk. Key values are inclusive. If
omitted, there is no lower bound on the table entities that can be
accessed.
:param str end_pk:
The maximum partition key accessible with this shared access
signature. endpk must accompany endrk. Key values are inclusive. If
omitted, there is no upper bound on the table entities that can be
accessed.
:param str end_rk:
The maximum row key accessible with this shared access signature.
endpk must accompany endrk. Key values are inclusive. If omitted,
there is no upper bound on the table entities that can be accessed.
'''
sas = _SharedAccessHelper()
sas.add_base(permission, expiry, start, ip, protocol)
sas.add_id(id)
sas.add_table_access_ranges(table_name, start_pk, start_rk, end_pk, end_rk)
# Table names must be signed lower case
resource_path = table_name.lower()
sas.add_resource_signature(self.account_name, self.account_key, 'table', resource_path)
return sas.get_token()
def generate_account(self, services, resource_types, permission, expiry, start=None,
ip=None, protocol=None):
'''
Generates a shared access signature for the account.
Use the returned signature with the sas_token parameter of the service
or to create a new account object.
:param Services services:
Specifies the services accessible with the account SAS. You can
combine values to provide access to more than one service.
:param ResourceTypes resource_types:
Specifies the resource types that are accessible with the account
SAS. You can combine values to provide access to more than one
resource type.
:param AccountPermissions permission:
The permissions associated with the shared access signature. The
user is restricted to operations allowed by the permissions.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has been
specified in an associated stored access policy. You can combine
values to provide more than one permission.
:param expiry:
The time at which the shared access signature becomes invalid.
Required unless an id is given referencing a stored access policy
which contains this field. This field must be omitted if it has
been specified in an associated stored access policy. Azure will always
convert values to UTC. If a date is passed in without timezone info, it
is assumed to be UTC.
:type expiry: datetime or str
:param start:
The time at which the shared access signature becomes valid. If
omitted, start time for this call is assumed to be the time when the
storage service receives the request. Azure will always convert values
to UTC. If a date is passed in without timezone info, it is assumed to
be UTC.
:type start: datetime or str
:param str ip:
Specifies an IP address or a range of IP addresses from which to accept requests.
If the IP address from which the request originates does not match the IP address
or address range specified on the SAS token, the request is not authenticated.
For example, specifying sip=168.1.5.65 or sip=168.1.5.60-168.1.5.70 on the SAS
restricts the request to those IP addresses.
:param str protocol:
Specifies the protocol permitted for a request made with the SAS token. The default
value is https,http. See :class:`~azure.storage.common.models.Protocol` for possible values.
'''
sas = _SharedAccessHelper()
sas.add_base(permission, expiry, start, ip, protocol)
sas.add_account(services, resource_types)
sas.add_account_signature(self.account_name, self.account_key)
return sas.get_token()
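# Hedged usage sketches for the two factory methods above; the account name,
# base64 key, table name, and imported model locations are assumptions, not
# part of the original commit.
def _example_generate_sas_tokens():
    from datetime import datetime, timedelta
    from azure.cosmosdb.common.models import (
        AccountPermissions,
        ResourceTypes,
        Services,
    )
    from azure.cosmosdb.table.models import TablePermissions
    sas = SharedAccessSignature('myaccount', 'Zm9vYmFy')
    expiry = datetime.utcnow() + timedelta(hours=1)
    # Table-scoped token, e.g. 'tn=mytable&...&sig=...'.
    table_token = sas.generate_table(
        'mytable', permission=TablePermissions.QUERY, expiry=expiry)
    # Account-scoped token granting read access to table entities.
    account_token = sas.generate_account(
        services=Services.TABLE,
        resource_types=ResourceTypes.OBJECT,
        permission=AccountPermissions.READ,
        expiry=expiry)
    return table_token, account_token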
class _QueryStringConstants(object):
SIGNED_SIGNATURE = 'sig'
SIGNED_PERMISSION = 'sp'
SIGNED_START = 'st'
SIGNED_EXPIRY = 'se'
SIGNED_RESOURCE = 'sr'
SIGNED_IDENTIFIER = 'si'
SIGNED_IP = 'sip'
SIGNED_PROTOCOL = 'spr'
SIGNED_VERSION = 'sv'
SIGNED_CACHE_CONTROL = 'rscc'
SIGNED_CONTENT_DISPOSITION = 'rscd'
SIGNED_CONTENT_ENCODING = 'rsce'
SIGNED_CONTENT_LANGUAGE = 'rscl'
SIGNED_CONTENT_TYPE = 'rsct'
TABLE_NAME = 'tn'
START_PK = 'spk'
START_RK = 'srk'
END_PK = 'epk'
END_RK = 'erk'
SIGNED_RESOURCE_TYPES = 'srt'
SIGNED_SERVICES = 'ss'
class _SharedAccessHelper(object):
def __init__(self):
self.query_dict = {}
def _add_query(self, name, val):
if val:
self.query_dict[name] = _to_str(val)
def add_base(self, permission, expiry, start, ip, protocol):
if isinstance(start, date):
start = _to_utc_datetime(start)
if isinstance(expiry, date):
expiry = _to_utc_datetime(expiry)
self._add_query(_QueryStringConstants.SIGNED_START, start)
self._add_query(_QueryStringConstants.SIGNED_EXPIRY, expiry)
self._add_query(_QueryStringConstants.SIGNED_PERMISSION, permission)
self._add_query(_QueryStringConstants.SIGNED_IP, ip)
self._add_query(_QueryStringConstants.SIGNED_PROTOCOL, protocol)
self._add_query(_QueryStringConstants.SIGNED_VERSION, X_MS_VERSION)
def add_resource(self, resource):
self._add_query(_QueryStringConstants.SIGNED_RESOURCE, resource)
def add_id(self, id):
self._add_query(_QueryStringConstants.SIGNED_IDENTIFIER, id)
def add_account(self, services, resource_types):
self._add_query(_QueryStringConstants.SIGNED_SERVICES, services)
self._add_query(_QueryStringConstants.SIGNED_RESOURCE_TYPES, resource_types)
def add_table_access_ranges(self, table_name, start_pk, start_rk,
end_pk, end_rk):
self._add_query(_QueryStringConstants.TABLE_NAME, table_name)
self._add_query(_QueryStringConstants.START_PK, start_pk)
self._add_query(_QueryStringConstants.START_RK, start_rk)
self._add_query(_QueryStringConstants.END_PK, end_pk)
self._add_query(_QueryStringConstants.END_RK, end_rk)
def add_override_response_headers(self, cache_control,
content_disposition,
content_encoding,
content_language,
content_type):
self._add_query(_QueryStringConstants.SIGNED_CACHE_CONTROL, cache_control)
self._add_query(_QueryStringConstants.SIGNED_CONTENT_DISPOSITION, content_disposition)
self._add_query(_QueryStringConstants.SIGNED_CONTENT_ENCODING, content_encoding)
self._add_query(_QueryStringConstants.SIGNED_CONTENT_LANGUAGE, content_language)
self._add_query(_QueryStringConstants.SIGNED_CONTENT_TYPE, content_type)
def add_resource_signature(self, account_name, account_key, service, path):
def get_value_to_append(query):
return_value = self.query_dict.get(query) or ''
return return_value + '\n'
if path[0] != '/':
path = '/' + path
canonicalized_resource = '/' + service + '/' + account_name + path + '\n'
# Form the string to sign from shared_access_policy and canonicalized
# resource. The order of values is important.
string_to_sign = \
(get_value_to_append(_QueryStringConstants.SIGNED_PERMISSION) +
get_value_to_append(_QueryStringConstants.SIGNED_START) +
get_value_to_append(_QueryStringConstants.SIGNED_EXPIRY) +
canonicalized_resource +
get_value_to_append(_QueryStringConstants.SIGNED_IDENTIFIER) +
get_value_to_append(_QueryStringConstants.SIGNED_IP) +
get_value_to_append(_QueryStringConstants.SIGNED_PROTOCOL) +
get_value_to_append(_QueryStringConstants.SIGNED_VERSION))
if service == 'blob' or service == 'file':
string_to_sign += \
(get_value_to_append(_QueryStringConstants.SIGNED_CACHE_CONTROL) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_DISPOSITION) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_ENCODING) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_LANGUAGE) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_TYPE))
if service == 'table':
string_to_sign += \
(get_value_to_append(_QueryStringConstants.START_PK) +
get_value_to_append(_QueryStringConstants.START_RK) +
get_value_to_append(_QueryStringConstants.END_PK) +
get_value_to_append(_QueryStringConstants.END_RK))
# remove the trailing newline
if string_to_sign[-1] == '\n':
string_to_sign = string_to_sign[:-1]
self._add_query(_QueryStringConstants.SIGNED_SIGNATURE,
_sign_string(account_key, string_to_sign))
def add_account_signature(self, account_name, account_key):
def get_value_to_append(query):
return_value = self.query_dict.get(query) or ''
return return_value + '\n'
string_to_sign = \
(account_name + '\n' +
get_value_to_append(_QueryStringConstants.SIGNED_PERMISSION) +
get_value_to_append(_QueryStringConstants.SIGNED_SERVICES) +
get_value_to_append(_QueryStringConstants.SIGNED_RESOURCE_TYPES) +
get_value_to_append(_QueryStringConstants.SIGNED_START) +
get_value_to_append(_QueryStringConstants.SIGNED_EXPIRY) +
get_value_to_append(_QueryStringConstants.SIGNED_IP) +
get_value_to_append(_QueryStringConstants.SIGNED_PROTOCOL) +
get_value_to_append(_QueryStringConstants.SIGNED_VERSION))
self._add_query(_QueryStringConstants.SIGNED_SIGNATURE,
_sign_string(account_key, string_to_sign))
def get_token(self):
return '&'.join(['{0}={1}'.format(n, url_quote(v)) for n, v in self.query_dict.items() if v is not None])


@ -0,0 +1,355 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from abc import ABCMeta
import logging
logger = logging.getLogger(__name__)
from time import sleep
import requests
from azure.common import (
AzureException,
)
from azure.cosmosdb.common._constants import (
DEFAULT_SOCKET_TIMEOUT
)
from azure.cosmosdb.common._error import (
_ERROR_DECRYPTION_FAILURE,
_http_error_handler,
)
from azure.cosmosdb.common._http import HTTPError
from azure.cosmosdb.common._http.httpclient import _HTTPClient
from azure.cosmosdb.common._serialization import (
_update_request,
_add_date_header,
)
from azure.cosmosdb.common.models import (
RetryContext,
LocationMode,
_OperationContext,
)
from azure.cosmosdb.common.retry import ExponentialRetry
class StorageClient(object):
'''
This is the base class for service objects. Service objects are used to do
all requests to Storage. This class cannot be instantiated directly.
:ivar str account_name:
The storage account name. This is used to authenticate requests
signed with an account key and to construct the storage endpoint. It
is required unless a connection string is given, or if a custom
domain is used with anonymous authentication.
:ivar str account_key:
The storage account key. This is used for shared key authentication.
If neither account key nor sas token is specified, anonymous access
will be used.
:ivar str sas_token:
A shared access signature token to use to authenticate requests
instead of the account key. If account key and sas token are both
specified, account key will be used to sign. If neither are
specified, anonymous access will be used.
:ivar str primary_endpoint:
The endpoint to send storage requests to.
:ivar str secondary_endpoint:
The secondary endpoint to read storage data from. This will only be a
valid endpoint if the storage account used is RA-GRS and thus allows
reading from secondary.
:ivar function(context) retry:
A function which determines whether to retry. Takes as a parameter a
:class:`~azure.storage.common.models.RetryContext` object. Returns the number
of seconds to wait before retrying the request, or None to indicate not
to retry.
:ivar ~azure.storage.common.models.LocationMode location_mode:
The host location to use to make requests. Defaults to LocationMode.PRIMARY.
Note that this setting only applies to RA-GRS accounts as other account
types do not allow reading from secondary. If the location_mode is set to
LocationMode.SECONDARY, read requests will be sent to the secondary endpoint.
Write requests will continue to be sent to primary.
:ivar str protocol:
The protocol to use for requests. Defaults to https.
:ivar requests.Session request_session:
The session object to use for http requests.
:ivar function(request) request_callback:
A function called immediately before each request is sent. This function
takes as a parameter the request object and returns nothing. It may be
used to add custom headers or log request data.
:ivar function() response_callback:
A function called immediately after each response is received. This
function takes as a parameter the response object and returns nothing.
It may be used to log response data.
:ivar function() retry_callback:
A function called immediately after retry evaluation is performed. This
function takes as a parameter the retry context object and returns nothing.
It may be used to detect retries and log context information.
'''
__metaclass__ = ABCMeta
def __init__(self, connection_params):
'''
:param obj connection_params: The parameters to use to construct the client.
'''
self.account_name = connection_params.account_name
self.account_key = connection_params.account_key
self.sas_token = connection_params.sas_token
self.is_emulated = connection_params.is_emulated
self.primary_endpoint = connection_params.primary_endpoint
self.secondary_endpoint = connection_params.secondary_endpoint
protocol = connection_params.protocol
request_session = connection_params.request_session or requests.Session()
socket_timeout = connection_params.socket_timeout or DEFAULT_SOCKET_TIMEOUT
self._httpclient = _HTTPClient(
protocol=protocol,
session=request_session,
timeout=socket_timeout,
)
self.retry = ExponentialRetry().retry
self.location_mode = LocationMode.PRIMARY
self.request_callback = None
self.response_callback = None
self.retry_callback = None
@property
def socket_timeout(self):
return self._httpclient.timeout
@socket_timeout.setter
def socket_timeout(self, value):
self._httpclient.timeout = value
@property
def protocol(self):
return self._httpclient.protocol
@protocol.setter
def protocol(self, value):
self._httpclient.protocol = value
@property
def request_session(self):
return self._httpclient.session
@request_session.setter
def request_session(self, value):
self._httpclient.session = value
def set_proxy(self, host, port, user=None, password=None):
'''
Sets the proxy server host and port for the HTTP CONNECT Tunnelling.
:param str host: Address of the proxy. Ex: '192.168.0.100'
:param int port: Port of the proxy. Ex: 6000
:param str user: User for proxy authorization.
:param str password: Password for proxy authorization.
'''
self._httpclient.set_proxy(host, port, user, password)
def _get_host_locations(self, primary=True, secondary=False):
locations = {}
if primary:
locations[LocationMode.PRIMARY] = self.primary_endpoint
if secondary:
locations[LocationMode.SECONDARY] = self.secondary_endpoint
return locations
def _apply_host(self, request, operation_context, retry_context):
if operation_context.location_lock and operation_context.host_location:
# If this is a location locked operation and the location is set,
# override the request location and host_location.
request.host_locations = operation_context.host_location
request.host = list(operation_context.host_location.values())[0]
retry_context.location_mode = list(operation_context.host_location.keys())[0]
elif len(request.host_locations) == 1:
# If only one location is allowed, use that location.
request.host = list(request.host_locations.values())[0]
retry_context.location_mode = list(request.host_locations.keys())[0]
else:
# If multiple locations are possible, choose based on the location mode.
request.host = request.host_locations.get(self.location_mode)
retry_context.location_mode = self.location_mode
@staticmethod
def extract_date_and_request_id(retry_context):
if getattr(retry_context, 'response', None) is None:
return ""
resp = retry_context.response
if 'date' in resp.headers and 'x-ms-request-id' in resp.headers:
return str.format("Server-Timestamp={0}, Server-Request-ID={1}",
resp.headers['date'], resp.headers['x-ms-request-id'])
elif 'date' in resp.headers:
return str.format("Server-Timestamp={0}", resp.headers['date'])
elif 'x-ms-request-id' in resp.headers:
return str.format("Server-Request-ID={0}", resp.headers['x-ms-request-id'])
else:
return ""
def _perform_request(self, request, parser=None, parser_args=None, operation_context=None):
'''
Sends the request and return response. Catches HTTPError and hands it
to error handler
'''
operation_context = operation_context or _OperationContext()
retry_context = RetryContext()
# Apply the appropriate host based on the location mode
self._apply_host(request, operation_context, retry_context)
# Apply common settings to the request
_update_request(request)
client_request_id_prefix = str.format("Client-Request-ID={0}", request.headers['x-ms-client-request-id'])
while True:
try:
try:
# Execute the request callback
if self.request_callback:
self.request_callback(request)
# Add date and auth after the callback so date doesn't get too old and
# authentication is still correct if signed headers are added in the request
# callback. This also ensures retry policies with long back offs
# will work as it resets the time sensitive headers.
_add_date_header(request)
self.authentication.sign_request(request)
# Set the request context
retry_context.request = request
# Log the request before it goes out
logger.info("%s Outgoing request: Method=%s, Path=%s, Query=%s, Headers=%s.",
client_request_id_prefix,
request.method,
request.path,
request.query,
str(request.headers).replace('\n', ''))
# Perform the request
response = self._httpclient.perform_request(request)
# Execute the response callback
if self.response_callback:
self.response_callback(response)
# Set the response context
retry_context.response = response
# Log the response when it comes back
logger.info("%s Receiving Response: "
"%s, HTTP Status Code=%s, Message=%s, Headers=%s.",
client_request_id_prefix,
self.extract_date_and_request_id(retry_context),
response.status,
response.message,
str(response.headers).replace('\n', ''))
# Parse and wrap HTTP errors in AzureHttpError which inherits from AzureException
if response.status >= 300:
# This exception will be caught by the general error handler
# and raised as an azure http exception
_http_error_handler(
HTTPError(response.status, response.message, response.headers, response.body))
# Parse the response
if parser:
if parser_args:
args = [response]
args.extend(parser_args)
return parser(*args)
else:
return parser(response)
else:
return
except AzureException as ex:
raise ex
except Exception as ex:
if sys.version_info >= (3,):
# Automatic chaining in Python 3 means we keep the trace
raise AzureException(ex.args[0])
else:
# There isn't a good solution in 2 for keeping the stack trace
# in general, or that will not result in an error in 3
# However, we can keep the previous error type and message
# TODO: In the future we will log the trace
msg = ""
if len(ex.args) > 0:
msg = ex.args[0]
raise AzureException('{}: {}'.format(ex.__class__.__name__, msg))
except AzureException as ex:
# Build the strings used for logging and error handling up front; they
# are also needed below even when logging is disabled.
exception_str_in_one_line = str(ex).replace('\n', '')
status_code = retry_context.response.status if retry_context.response is not None else 'Unknown'
timestamp_and_request_id = self.extract_date_and_request_id(retry_context)
logger.info("%s Operation failed: checking if the operation should be retried. "
"Current retry count=%s, %s, HTTP status code=%s, Exception=%s.",
client_request_id_prefix,
retry_context.count if hasattr(retry_context, 'count') else 0,
timestamp_and_request_id,
status_code,
exception_str_in_one_line)
# Decryption failures (invalid objects, invalid algorithms, data unencrypted in strict mode, etc)
# will not be resolved with retries.
if str(ex) == _ERROR_DECRYPTION_FAILURE:
logger.error("%s Encountered decryption failure: this cannot be retried. "
"%s, HTTP status code=%s, Exception=%s.",
client_request_id_prefix,
timestamp_and_request_id,
status_code,
exception_str_in_one_line)
raise ex
# Determine whether a retry should be performed and if so, how
# long to wait before performing retry.
retry_interval = self.retry(retry_context)
if retry_interval is not None:
# Execute the callback
if self.retry_callback:
self.retry_callback(retry_context)
logger.info(
"%s Retry policy is allowing a retry: Retry count=%s, Interval=%s.",
client_request_id_prefix,
retry_context.count,
retry_interval)
# Sleep for the desired retry interval
sleep(retry_interval)
else:
logger.error("%s Retry policy did not allow for a retry: "
"%s, HTTP status code=%s, Exception=%s.",
client_request_id_prefix,
timestamp_and_request_id,
status_code,
exception_str_in_one_line)
raise ex
finally:
# If this is a location locked operation and the location is not set,
# this is the first request of that operation. Set the location to
# be used for subsequent requests in the operation.
if operation_context.location_lock and not operation_context.host_location:
operation_context.host_location = {retry_context.location_mode: request.host}
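# A hedged sketch of the callback hooks documented on StorageClient; 'service'
# stands for a concrete subclass instance such as TableService, and the
# header name is illustrative.
def _example_callbacks(service):
    def on_request(request):
        # Runs before the request is dated and signed, so added headers are
        # covered by the signature.
        request.headers['x-ms-sample'] = 'demo'
    def on_response(response):
        print('status:', response.status)
    service.request_callback = on_request
    service.response_callback = on_response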


@ -0,0 +1,26 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.table.models import (
Entity,
EntityProperty,
Table,
TablePermissions,
TablePayloadFormat,
EdmType,
AzureBatchOperationError,
AzureBatchValidationError,
)
from azure.cosmosdb.table.tablebatch import TableBatch
from azure.cosmosdb.table.tableservice import TableService


@ -0,0 +1,349 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from dateutil import parser
if sys.version_info < (3,):
from urllib2 import quote as url_quote
else:
from urllib.parse import quote as url_quote
from json import (
loads,
)
from azure.cosmosdb.common._http import HTTPResponse
from azure.common import (
AzureException,
)
from azure.cosmosdb.common._common_conversion import (
_decode_base64_to_bytes,
)
from azure.cosmosdb.common._error import (
_ERROR_DECRYPTION_FAILURE,
_validate_decryption_required,
)
from azure.cosmosdb.table._error import (
_ERROR_TYPE_NOT_SUPPORTED,
_ERROR_INVALID_PROPERTY_RESOLVER,
)
from azure.cosmosdb.table.models import (
Entity,
EntityProperty,
Table,
EdmType,
AzureBatchOperationError,
)
from azure.cosmosdb.common.models import (
_list,
)
from azure.cosmosdb.table._encryption import (
_decrypt_entity,
_extract_encryption_metadata,
)
def _get_continuation_from_response_headers(response):
marker = {}
for name, value in response.headers.items():
if name.startswith('x-ms-continuation'):
marker[name[len('x-ms-continuation') + 1:]] = value
return marker
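# An illustrative run of the header parsing above with a stand-in response
# object; real responses carry lowercased header names, as assumed here.
def _example_continuation():
    class _Response(object):
        headers = {'x-ms-continuation-nexttablename': 'mytable2'}
    marker = _get_continuation_from_response_headers(_Response())
    assert marker == {'nexttablename': 'mytable2'}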
# Tables of conversions to and from entity types. We support specific
# datatypes, and beyond that the user can use an EntityProperty to get
# custom data type support.
def _from_entity_binary(value):
return EntityProperty(EdmType.BINARY, _decode_base64_to_bytes(value))
def _from_entity_int32(value):
return EntityProperty(EdmType.INT32, int(value))
def _from_entity_datetime(value):
# Note that Azure always returns UTC datetime, and dateutil parser
# will set the tzinfo on the date it returns
return parser.parse(value)
_EDM_TYPES = [EdmType.BINARY, EdmType.INT64, EdmType.GUID, EdmType.DATETIME,
EdmType.STRING, EdmType.INT32, EdmType.DOUBLE, EdmType.BOOLEAN]
_ENTITY_TO_PYTHON_CONVERSIONS = {
EdmType.BINARY: _from_entity_binary,
EdmType.INT32: _from_entity_int32,
EdmType.INT64: int,
EdmType.DOUBLE: float,
EdmType.DATETIME: _from_entity_datetime,
}
def _convert_json_response_to_entity(response, property_resolver, require_encryption,
key_encryption_key, key_resolver):
'''
:param bool require_encryption:
If set, will enforce that the retrieved entity is encrypted and decrypt it.
:param object key_encryption_key:
The user-provided key-encryption-key. Must implement the following methods:
unwrap_key(key, algorithm)--returns the unwrapped form of the specified symmetric key using the
string-specified algorithm.
get_kid()--returns a string key id for this key-encryption-key.
:param function key_resolver(kid):
The user-provided key resolver. Uses the kid string to return a key-encryption-key implementing
the interface defined above.
'''
if response is None or response.body is None:
return None
root = loads(response.body.decode('utf-8'))
return _decrypt_and_deserialize_entity(root, property_resolver, require_encryption,
key_encryption_key, key_resolver)
def _convert_json_to_entity(entry_element, property_resolver, encrypted_properties):
''' Convert json response to entity.
The entity format is:
{
"Address":"Mountain View",
"Age":23,
"AmountDue":200.23,
"CustomerCode@odata.type":"Edm.Guid",
"CustomerCode":"c9da6455-213d-42c9-9a79-3e9149a57833",
"CustomerSince@odata.type":"Edm.DateTime",
"CustomerSince":"2008-07-10T00:00:00",
"IsActive":true,
"NumberOfOrders@odata.type":"Edm.Int64",
"NumberOfOrders":"255",
"PartitionKey":"mypartitionkey",
"RowKey":"myrowkey"
}
'''
entity = Entity()
properties = {}
edmtypes = {}
odata = {}
for name, value in entry_element.items():
if name.startswith('odata.'):
odata[name[6:]] = value
elif name.endswith('@odata.type'):
edmtypes[name[:-11]] = value
else:
properties[name] = value
# Partition key is a known property
partition_key = properties.pop('PartitionKey', None)
if partition_key:
entity['PartitionKey'] = partition_key
# Row key is a known property
row_key = properties.pop('RowKey', None)
if row_key:
entity['RowKey'] = row_key
# Timestamp is a known property
timestamp = properties.pop('Timestamp', None)
if timestamp:
entity['Timestamp'] = _from_entity_datetime(timestamp)
for name, value in properties.items():
mtype = edmtypes.get(name)
# use the property resolver if present
if property_resolver:
# Clients are not expected to resolve these internal fields.
# This check avoids unexpected behavior from the user-defined
# property resolver.
if not (name == '_ClientEncryptionMetadata1' or name == '_ClientEncryptionMetadata2'):
mtype = property_resolver(partition_key, row_key,
name, value, mtype)
# throw if the type returned is not a valid edm type
if mtype and mtype not in _EDM_TYPES:
raise AzureException(_ERROR_TYPE_NOT_SUPPORTED.format(mtype))
# If the property was encrypted, supersede the results of the resolver and set as binary
if encrypted_properties is not None and name in encrypted_properties:
mtype = EdmType.BINARY
# Add type for Int32
if type(value) is int:
mtype = EdmType.INT32
# no type info, property should parse automatically
if not mtype:
entity[name] = value
else: # need an object to hold the property
conv = _ENTITY_TO_PYTHON_CONVERSIONS.get(mtype)
if conv is not None:
try:
property = conv(value)
except Exception as e:
# throw if the type returned by the property resolver
# cannot be used in the conversion
if property_resolver:
raise AzureException(
_ERROR_INVALID_PROPERTY_RESOLVER.format(name, value, mtype))
else:
raise e
else:
property = EntityProperty(mtype, value)
entity[name] = property
# extract etag from entry
etag = odata.get('etag')
if timestamp:
etag = 'W/"datetime\'' + url_quote(timestamp) + '\'"'
entity['etag'] = etag
return entity
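# An illustrative call of the converter above using the payload shape
# documented in its docstring; no property resolver or encryption involved.
def _example_convert_json_to_entity():
    payload = {
        'PartitionKey': 'mypartitionkey',
        'RowKey': 'myrowkey',
        'Age': 23,
        'NumberOfOrders@odata.type': 'Edm.Int64',
        'NumberOfOrders': '255',
    }
    entity = _convert_json_to_entity(payload, None, None)
    # 'Age' arrives as a JSON number and is wrapped as an Edm.Int32
    # EntityProperty; 'NumberOfOrders' carries explicit type info and is
    # converted to a plain Python int.
    assert entity['NumberOfOrders'] == 255
    return entity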
def _convert_json_response_to_tables(response):
''' Converts the response to tables class.
'''
if response is None or response.body is None:
return None
tables = _list()
continuation = _get_continuation_from_response_headers(response)
tables.next_marker = continuation.get('nexttablename')
root = loads(response.body.decode('utf-8'))
if 'TableName' in root:
table = Table()
table.name = root['TableName']
tables.append(table)
else:
for element in root['value']:
table = Table()
table.name = element['TableName']
tables.append(table)
return tables
def _convert_json_response_to_entities(response, property_resolver, require_encryption,
key_encryption_key, key_resolver):
''' Converts the response to tables class.
'''
if response is None or response.body is None:
return None
entities = _list()
entities.next_marker = _get_continuation_from_response_headers(response)
root = loads(response.body.decode('utf-8'))
for entity in root['value']:
entity = _decrypt_and_deserialize_entity(entity, property_resolver, require_encryption,
key_encryption_key, key_resolver)
entities.append(entity)
return entities
def _decrypt_and_deserialize_entity(entity, property_resolver, require_encryption,
key_encryption_key, key_resolver):
try:
_validate_decryption_required(require_encryption, key_encryption_key,
key_resolver)
entity_iv, encrypted_properties, content_encryption_key, isJavaV1 = None, None, None, False
if (key_encryption_key is not None) or (key_resolver is not None):
entity_iv, encrypted_properties, content_encryption_key, isJavaV1 = \
_extract_encryption_metadata(entity, require_encryption, key_encryption_key, key_resolver)
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
entity = _convert_json_to_entity(entity, property_resolver, encrypted_properties)
if entity_iv is not None and encrypted_properties is not None and \
content_encryption_key is not None:
try:
entity = _decrypt_entity(entity, encrypted_properties, content_encryption_key,
entity_iv, isJavaV1)
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
return entity
def _extract_etag(response):
''' Extracts the etag from the response headers. '''
if response and response.headers:
return response.headers.get('etag')
return None
def _parse_batch_response(response):
if response is None or response.body is None:
return None
parts = response.body.split(b'--changesetresponse_')
responses = []
for part in parts:
httpLocation = part.find(b'HTTP/')
if httpLocation > 0:
response_part = _parse_batch_response_part(part[httpLocation:])
if response_part.status >= 300:
_parse_batch_error(response_part)
responses.append(_extract_etag(response_part))
return responses
def _parse_batch_response_part(part):
lines = part.splitlines()
# First line is the HTTP status/reason
status, _, reason = lines[0].partition(b' ')[2].partition(b' ')
# Followed by headers and body
headers = {}
body = b''
isBody = False
for line in lines[1:]:
if line == b'' and not isBody:
isBody = True
elif isBody:
body += line
else:
headerName, _, headerVal = line.partition(b': ')
headers[headerName.lower().decode("utf-8")] = headerVal.decode("utf-8")
return HTTPResponse(int(status), reason.strip(), headers, body)
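# An illustrative parse of a single hand-written changeset part; the byte
# payload is a minimal stand-in for real service output.
def _example_parse_batch_part():
    raw = (b'HTTP/1.1 204 No Content\r\n'
           b'ETag: W/"datetime\'2017-10-02T00%3A00%3A00.000Z\'"\r\n'
           b'\r\n')
    response = _parse_batch_response_part(raw)
    assert response.status == 204
    assert response.headers['etag'].startswith('W/')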
def _parse_batch_error(part):
doc = loads(part.body.decode('utf-8'))
code = ''
message = ''
error = doc.get('odata.error')
if error:
code = error.get('code')
if error.get('message'):
message = error.get('message').get('value')
raise AzureBatchOperationError(message, part.status, code)


@ -0,0 +1,300 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import os
from copy import deepcopy
from json import (
dumps,
loads,
)
from azure.common import AzureException
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.hashes import (
Hash,
SHA256,
)
from cryptography.hazmat.primitives.padding import PKCS7
from azure.cosmosdb.common._common_conversion import (
_decode_base64_to_bytes,
)
from azure.cosmosdb.common._constants import (
_ENCRYPTION_PROTOCOL_V1,
)
from azure.cosmosdb.common._encryption import (
_generate_encryption_data_dict,
_dict_to_encryption_data,
_generate_AES_CBC_cipher,
_validate_and_unwrap_cek,
_EncryptionAlgorithm
)
from azure.cosmosdb.common._error import (
_ERROR_DECRYPTION_FAILURE,
_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM,
_validate_not_none,
_validate_key_encryption_key_wrap,
)
from azure.cosmosdb.table._error import (
_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION,
_ERROR_ENTITY_NOT_ENCRYPTED
)
from azure.cosmosdb.table.models import (
Entity,
EntityProperty,
EdmType,
)
def _encrypt_entity(entity, key_encryption_key, encryption_resolver):
'''
Encrypts the given entity using AES256 in CBC mode with 128 bit padding.
Will generate a content-encryption-key (cek) to encrypt the properties either
stored in an EntityProperty with the 'encrypt' flag set or those
specified by the encryption resolver. This cek is then wrapped using the
provided key_encryption_key (kek). Only strings may be encrypted and the
result is stored as binary on the service.
:param entity:
The entity to insert. Could be a dict or an entity object.
:param object key_encryption_key:
The user-provided key-encryption-key. Must implement the following methods:
wrap_key(key)--wraps the specified key using an algorithm of the user's choice.
get_key_wrap_algorithm()--returns the algorithm used to wrap the specified symmetric key.
get_kid()--returns a string key id for this key-encryption-key.
:param function(partition_key, row_key, property_name) encryption_resolver:
A function that takes in an entity's partition key, row key, and property name and returns
a boolean that indicates whether that property should be encrypted.
:return: An entity with both the appropriate properties encrypted and the
encryption data.
:rtype: object
'''
_validate_not_none('entity', entity)
_validate_not_none('key_encryption_key', key_encryption_key)
_validate_key_encryption_key_wrap(key_encryption_key)
# AES256 uses 256 bit (32 byte) keys and always with 16 byte blocks
content_encryption_key = os.urandom(32)
entity_initialization_vector = os.urandom(16)
encrypted_properties = []
encrypted_entity = Entity()
for key, value in entity.items():
# If the property resolver says it should be encrypted
# or it is an EntityProperty with the 'encrypt' property set.
if (isinstance(value, EntityProperty) and value.encrypt) or \
(encryption_resolver is not None \
and encryption_resolver(entity['PartitionKey'], entity['RowKey'], key)):
# Only strings can be encrypted and None is not an instance of str.
if isinstance(value, EntityProperty):
if value.type == EdmType.STRING:
value = value.value
else:
raise ValueError(_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION)
if not isinstance(value, str):
raise ValueError(_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION)
# Value is now confirmed to hold a valid string value to be encrypted
# and should be added to the list of encrypted properties.
encrypted_properties.append(key)
propertyIV = _generate_property_iv(entity_initialization_vector,
entity['PartitionKey'], entity['RowKey'],
key, False)
# Encode the strings for encryption.
value = value.encode('utf-8')
cipher = _generate_AES_CBC_cipher(content_encryption_key, propertyIV)
# PKCS7 with 16 byte blocks ensures compatibility with AES.
padder = PKCS7(128).padder()
padded_data = padder.update(value) + padder.finalize()
# Encrypt the data.
encryptor = cipher.encryptor()
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
# Set the new value of this key to be a binary EntityProperty for proper serialization.
value = EntityProperty(EdmType.BINARY, encrypted_data)
encrypted_entity[key] = value
encrypted_properties = dumps(encrypted_properties)
# Generate the metadata iv.
metadataIV = _generate_property_iv(entity_initialization_vector,
entity['PartitionKey'], entity['RowKey'],
'_ClientEncryptionMetadata2', False)
encrypted_properties = encrypted_properties.encode('utf-8')
cipher = _generate_AES_CBC_cipher(content_encryption_key, metadataIV)
padder = PKCS7(128).padder()
padded_data = padder.update(encrypted_properties) + padder.finalize()
encryptor = cipher.encryptor()
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
encrypted_entity['_ClientEncryptionMetadata2'] = EntityProperty(EdmType.BINARY, encrypted_data)
encryption_data = _generate_encryption_data_dict(key_encryption_key, content_encryption_key,
entity_initialization_vector)
encrypted_entity['_ClientEncryptionMetadata1'] = dumps(encryption_data)
return encrypted_entity
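# Illustrative sketch only, not part of the original module: a minimal
# key-encryption-key satisfying the interface documented above. The class name
# and the use of RFC 3394 AES key wrap from the 'cryptography' package are
# assumptions for this example.
class _SampleKeyWrapper(object):
    def __init__(self, kid):
        self.kid = kid
        self.kek = os.urandom(32)

    def wrap_key(self, key):
        # Wrap the 32-byte CEK with the KEK.
        from cryptography.hazmat.primitives.keywrap import aes_key_wrap
        return aes_key_wrap(self.kek, key, default_backend())

    def get_key_wrap_algorithm(self):
        return 'A256KW'

    def get_kid(self):
        return self.kid
# Illustrative resolver: encrypt every property except the system properties.
def _sample_encryption_resolver(partition_key, row_key, property_name):
    return property_name not in ('PartitionKey', 'RowKey', 'Timestamp')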
def _decrypt_entity(entity, encrypted_properties_list, content_encryption_key, entityIV, isJavaV1):
'''
    Decrypts the specified entity using AES-256 in CBC mode with PKCS7 padding. Unwraps the CEK
    using either the specified KEK or the key returned by the key_resolver. Properties
    specified in the encrypted_properties_list will be decrypted and decoded to utf-8 strings.
    :param entity:
        The entity being retrieved and decrypted. Could be a dict or an entity object.
    :param list encrypted_properties_list:
        The list of the names of all the properties that are encrypted.
    :param bytes[] content_encryption_key:
        The key used internally to encrypt the entity. Extracted from the entity metadata.
    :param bytes[] entityIV:
        The initialization vector used to seed the encryption algorithm. Extracted from the
        entity metadata.
    :param bool isJavaV1:
        Whether the entity was encrypted by the Java v1 client library, which derives
        property IVs with the partition key and row key in the opposite order.
:return: The decrypted entity
:rtype: Entity
'''
_validate_not_none('entity', entity)
decrypted_entity = deepcopy(entity)
try:
        for property_name in entity.keys():
            if property_name in encrypted_properties_list:
                value = entity[property_name]
                propertyIV = _generate_property_iv(entityIV,
                                                   entity['PartitionKey'], entity['RowKey'],
                                                   property_name, isJavaV1)
                cipher = _generate_AES_CBC_cipher(content_encryption_key,
                                                  propertyIV)
                # Decrypt the property.
                decryptor = cipher.decryptor()
                decrypted_data = (decryptor.update(value.value) + decryptor.finalize())
                # Unpad the data.
                unpadder = PKCS7(128).unpadder()
                decrypted_data = (unpadder.update(decrypted_data) + unpadder.finalize())
                decrypted_data = decrypted_data.decode('utf-8')
                decrypted_entity[property_name] = decrypted_data
decrypted_entity.pop('_ClientEncryptionMetadata1')
decrypted_entity.pop('_ClientEncryptionMetadata2')
return decrypted_entity
    except Exception:
        raise AzureException(_ERROR_DECRYPTION_FAILURE)
def _extract_encryption_metadata(entity, require_encryption, key_encryption_key, key_resolver):
'''
    Extracts the encryption metadata from the given entity, decoding it to utf-8 strings.
    If no encryption metadata is present, will return None for all return values unless
    require_encryption is true, in which case the method will raise.
:param entity:
The entity being retrieved and decrypted. Could be a dict or an entity object.
:param bool require_encryption:
If set, will enforce that the retrieved entity is encrypted and decrypt it.
:param object key_encryption_key:
The user-provided key-encryption-key. Must implement the following methods:
unwrap_key(key, algorithm)--returns the unwrapped form of the specified symmetric key using the
string-specified algorithm.
get_kid()--returns a string key id for this key-encryption-key.
:param function key_resolver(kid):
The user-provided key resolver. Uses the kid string to return a key-encryption-key implementing
the interface defined above.
:returns: a tuple containing the entity iv, the list of encrypted properties, the entity cek,
and whether the entity was encrypted using JavaV1.
:rtype: tuple (bytes[], list, bytes[], bool)
'''
_validate_not_none('entity', entity)
try:
encrypted_properties_list = _decode_base64_to_bytes(entity['_ClientEncryptionMetadata2'])
encryption_data = entity['_ClientEncryptionMetadata1']
encryption_data = _dict_to_encryption_data(loads(encryption_data))
except Exception:
# Message did not have properly formatted encryption metadata.
if require_encryption:
raise ValueError(_ERROR_ENTITY_NOT_ENCRYPTED)
else:
return None, None, None, None
    if encryption_data.encryption_agent.encryption_algorithm != _EncryptionAlgorithm.AES_CBC_256:
        raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM)
content_encryption_key = _validate_and_unwrap_cek(encryption_data, key_encryption_key, key_resolver)
# Special check for compatibility with Java V1 encryption protocol.
isJavaV1 = (encryption_data.key_wrapping_metadata is None) or \
((encryption_data.encryption_agent.protocol == _ENCRYPTION_PROTOCOL_V1) and
'EncryptionLibrary' in encryption_data.key_wrapping_metadata and
'Java' in encryption_data.key_wrapping_metadata['EncryptionLibrary'])
metadataIV = _generate_property_iv(encryption_data.content_encryption_IV,
entity['PartitionKey'], entity['RowKey'],
'_ClientEncryptionMetadata2', isJavaV1)
cipher = _generate_AES_CBC_cipher(content_encryption_key, metadataIV)
# Decrypt the data.
decryptor = cipher.decryptor()
encrypted_properties_list = decryptor.update(encrypted_properties_list) + decryptor.finalize()
# Unpad the data.
unpadder = PKCS7(128).unpadder()
encrypted_properties_list = unpadder.update(encrypted_properties_list) + unpadder.finalize()
encrypted_properties_list = encrypted_properties_list.decode('utf-8')
if isJavaV1:
        # Strip the square brackets from the ends and split the string into a list.
encrypted_properties_list = encrypted_properties_list[1:-1]
encrypted_properties_list = encrypted_properties_list.split(', ')
else:
encrypted_properties_list = loads(encrypted_properties_list)
return encryption_data.content_encryption_IV, encrypted_properties_list, content_encryption_key, isJavaV1
def _generate_property_iv(entity_iv, pk, rk, property_name, isJavaV1):
'''
    Uses the entity_iv, partition key, row key, and property name to generate
    and return the IV for the specified property.
'''
digest = Hash(SHA256(), default_backend())
if not isJavaV1:
digest.update(entity_iv +
(rk + pk + property_name).encode('utf-8'))
else:
digest.update(entity_iv +
(pk + rk + property_name).encode('utf-8'))
propertyIV = digest.finalize()
return propertyIV[:16]
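# Illustrative sketch, not part of the original module: the property IV is a
# deterministic 16-byte value derived from the entity IV, the keys, and the
# property name (values below are hypothetical).
def _demo_property_iv():
    entity_iv = os.urandom(16)
    iv1 = _generate_property_iv(entity_iv, 'pk', 'rk', 'Prop', False)
    iv2 = _generate_property_iv(entity_iv, 'pk', 'rk', 'Prop', False)
    assert iv1 == iv2 and len(iv1) == 16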


@ -0,0 +1,76 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.common._error import (
_validate_not_none,
_ERROR_VALUE_NONE_OR_EMPTY,
)
_ERROR_ATTRIBUTE_MISSING = '\'{0}\' object has no attribute \'{1}\''
_ERROR_BATCH_COMMIT_FAIL = 'Batch Commit Fail'
_ERROR_CANNOT_FIND_PARTITION_KEY = 'Cannot find partition key in request.'
_ERROR_CANNOT_FIND_ROW_KEY = 'Cannot find row key in request.'
_ERROR_CANNOT_SERIALIZE_VALUE_TO_ENTITY = \
'Cannot serialize the specified value ({0}) to an entity. Please use ' + \
'an EntityProperty (which can specify custom types), int, str, bool, ' + \
'or datetime.'
_ERROR_DUPLICATE_ROW_KEY_IN_BATCH = \
    'Row keys should not be the same in a batch operation'
_ERROR_INCORRECT_PARTITION_KEY_IN_BATCH = \
    'Partition keys should be the same in a batch operation'
_ERROR_INVALID_ENTITY_TYPE = 'The entity must be either in dict format or an entity object.'
_ERROR_INVALID_PROPERTY_RESOLVER = \
'The specified property resolver returned an invalid type. Name: {0}, Value: {1}, ' + \
'EdmType: {2}'
_ERROR_PROPERTY_NAME_TOO_LONG = 'The property name exceeds the maximum allowed length.'
_ERROR_TOO_MANY_ENTITIES_IN_BATCH = \
'Batches may only contain 100 operations'
_ERROR_TOO_MANY_PROPERTIES = 'The entity contains more properties than allowed.'
_ERROR_TYPE_NOT_SUPPORTED = 'Type not supported when sending data to the service: {0}.'
_ERROR_VALUE_TOO_LARGE = '{0} is too large to be cast to type {1}.'
_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION = 'Encryption is only supported for non-None strings.'
_ERROR_ENTITY_NOT_ENCRYPTED = 'Entity was not encrypted.'
def _validate_object_has_param(param_name, obj):
    if obj.get(param_name) is None:
        raise ValueError(_ERROR_VALUE_NONE_OR_EMPTY.format(param_name))
def _validate_entity(entity, encrypt=None):
# Validate entity exists
_validate_not_none('entity', entity)
# Entity inherits from dict, so just validating dict is fine
if not isinstance(entity, dict):
raise TypeError(_ERROR_INVALID_ENTITY_TYPE)
# Validate partition key and row key are present
_validate_object_has_param('PartitionKey', entity)
_validate_object_has_param('RowKey', entity)
# Two properties are added during encryption. Validate sufficient space
max_properties = 255
if encrypt:
max_properties = max_properties - 2
# Validate there are not more than 255 properties including Timestamp
if (len(entity) > max_properties) or (len(entity) == max_properties and 'Timestamp' not in entity):
raise ValueError(_ERROR_TOO_MANY_PROPERTIES)
# Validate the property names are not too long
for propname in entity:
if len(propname) > 255:
raise ValueError(_ERROR_PROPERTY_NAME_TOO_LONG)
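# Illustrative sketch, not part of the original module: a minimal entity that
# passes validation (values are hypothetical).
def _demo_validate_entity():
    entity = {'PartitionKey': 'pk', 'RowKey': 'rk', 'Age': 23}
    _validate_entity(entity)
    # With encrypt=True, two property slots are reserved for the encryption
    # metadata, so the effective property limit drops by two.
    _validate_entity(entity, encrypt=True)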


@ -0,0 +1,202 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.common._common_conversion import (
_to_str,
)
from azure.cosmosdb.common._error import (
_validate_not_none,
_validate_encryption_required,
_validate_encryption_unsupported,
)
from azure.cosmosdb.common._http import HTTPRequest
from azure.cosmosdb.common._serialization import (
_get_request_body,
)
from azure.cosmosdb.table._encryption import (
_encrypt_entity,
)
from azure.cosmosdb.table._error import (
_validate_entity,
)
from azure.cosmosdb.table._serialization import (
_convert_entity_to_json,
_DEFAULT_ACCEPT_HEADER,
_DEFAULT_CONTENT_TYPE_HEADER,
_DEFAULT_PREFER_HEADER,
)
def _get_entity(partition_key, row_key, select, accept):
'''
Constructs a get entity request.
'''
_validate_not_none('partition_key', partition_key)
_validate_not_none('row_key', row_key)
_validate_not_none('accept', accept)
request = HTTPRequest()
request.method = 'GET'
request.headers = {'Accept': _to_str(accept)}
request.query = {'$select': _to_str(select)}
return request
def _insert_entity(entity, encryption_required=False,
key_encryption_key=None, encryption_resolver=None):
'''
Constructs an insert entity request.
:param entity:
The entity to insert. Could be a dict or an entity object.
:param object key_encryption_key:
The user-provided key-encryption-key. Must implement the following methods:
wrap_key(key)--wraps the specified key using an algorithm of the user's choice.
get_key_wrap_algorithm()--returns the algorithm used to wrap the specified symmetric key.
get_kid()--returns a string key id for this key-encryption-key.
:param function(partition_key, row_key, property_name) encryption_resolver:
        A function that takes in an entity's partition key, row key, and property name and returns
        a boolean indicating whether that property should be encrypted.
'''
_validate_entity(entity, key_encryption_key is not None)
_validate_encryption_required(encryption_required, key_encryption_key)
request = HTTPRequest()
request.method = 'POST'
request.headers = {
_DEFAULT_CONTENT_TYPE_HEADER[0]: _DEFAULT_CONTENT_TYPE_HEADER[1],
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
_DEFAULT_PREFER_HEADER[0]: _DEFAULT_PREFER_HEADER[1]
}
if key_encryption_key:
entity = _encrypt_entity(entity, key_encryption_key, encryption_resolver)
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _update_entity(entity, if_match, encryption_required=False,
key_encryption_key=None, encryption_resolver=None):
'''
Constructs an update entity request.
:param entity:
        The entity to update. Could be a dict or an entity object.
:param object key_encryption_key:
The user-provided key-encryption-key. Must implement the following methods:
wrap_key(key)--wraps the specified key using an algorithm of the user's choice.
get_key_wrap_algorithm()--returns the algorithm used to wrap the specified symmetric key.
get_kid()--returns a string key id for this key-encryption-key.
:param function(partition_key, row_key, property_name) encryption_resolver:
        A function that takes in an entity's partition key, row key, and property name and returns
        a boolean indicating whether that property should be encrypted.
'''
_validate_not_none('if_match', if_match)
_validate_entity(entity, key_encryption_key is not None)
_validate_encryption_required(encryption_required, key_encryption_key)
request = HTTPRequest()
request.method = 'PUT'
request.headers = {
_DEFAULT_CONTENT_TYPE_HEADER[0]: _DEFAULT_CONTENT_TYPE_HEADER[1],
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
'If-Match': _to_str(if_match),
}
if key_encryption_key:
entity = _encrypt_entity(entity, key_encryption_key, encryption_resolver)
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _merge_entity(entity, if_match, require_encryption=False, key_encryption_key=None):
'''
Constructs a merge entity request.
'''
_validate_not_none('if_match', if_match)
_validate_entity(entity)
_validate_encryption_unsupported(require_encryption, key_encryption_key)
request = HTTPRequest()
request.method = 'MERGE'
request.headers = {
_DEFAULT_CONTENT_TYPE_HEADER[0]: _DEFAULT_CONTENT_TYPE_HEADER[1],
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
'If-Match': _to_str(if_match)
}
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _delete_entity(partition_key, row_key, if_match):
'''
Constructs a delete entity request.
'''
_validate_not_none('if_match', if_match)
_validate_not_none('partition_key', partition_key)
_validate_not_none('row_key', row_key)
request = HTTPRequest()
request.method = 'DELETE'
request.headers = {
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
'If-Match': _to_str(if_match)
}
return request
def _insert_or_replace_entity(entity, require_encryption=False,
key_encryption_key=None, encryption_resolver=None):
'''
Constructs an insert or replace entity request.
'''
_validate_entity(entity, key_encryption_key is not None)
_validate_encryption_required(require_encryption, key_encryption_key)
request = HTTPRequest()
request.method = 'PUT'
request.headers = {
_DEFAULT_CONTENT_TYPE_HEADER[0]: _DEFAULT_CONTENT_TYPE_HEADER[1],
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
}
if key_encryption_key:
entity = _encrypt_entity(entity, key_encryption_key, encryption_resolver)
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _insert_or_merge_entity(entity, require_encryption=False, key_encryption_key=None):
'''
Constructs an insert or merge entity request.
:param entity:
The entity to insert. Could be a dict or an entity object.
:param object key_encryption_key:
The user-provided key-encryption-key. Must implement the following methods:
wrap_key(key)--wraps the specified key using an algorithm of the user's choice.
get_key_wrap_algorithm()--returns the algorithm used to wrap the specified symmetric key.
get_kid()--returns a string key id for this key-encryption-key.
'''
_validate_entity(entity)
_validate_encryption_unsupported(require_encryption, key_encryption_key)
request = HTTPRequest()
request.method = 'MERGE'
request.headers = {
_DEFAULT_CONTENT_TYPE_HEADER[0]: _DEFAULT_CONTENT_TYPE_HEADER[1],
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
}
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
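# Illustrative sketch, not part of the original module: composing an insert
# request for a plain (unencrypted) entity; values are hypothetical. The host
# and path are filled in later by the table service client.
def _demo_insert_request():
    entity = {'PartitionKey': 'pk', 'RowKey': 'rk', 'Age': 23}
    request = _insert_entity(entity)
    assert request.method == 'POST'
    return request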


@ -0,0 +1,266 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
import types
import uuid
from datetime import datetime
from json import (
dumps,
)
from math import (
isnan,
)
from azure.cosmosdb.common._common_conversion import (
_encode_base64,
_to_str,
)
from azure.cosmosdb.common._serialization import (
_to_utc_datetime,
)
from azure.cosmosdb.table._error import (
_ERROR_CANNOT_SERIALIZE_VALUE_TO_ENTITY,
_ERROR_TYPE_NOT_SUPPORTED,
_ERROR_VALUE_TOO_LARGE,
)
from azure.cosmosdb.table.models import (
EntityProperty,
TablePayloadFormat,
EdmType,
)
if sys.version_info < (3,):
def _new_boundary():
return str(uuid.uuid1())
else:
def _new_boundary():
return str(uuid.uuid1()).encode('utf-8')
_DEFAULT_ACCEPT_HEADER = ('Accept', TablePayloadFormat.JSON_MINIMAL_METADATA)
_DEFAULT_CONTENT_TYPE_HEADER = ('Content-Type', 'application/json')
_DEFAULT_PREFER_HEADER = ('Prefer', 'return-no-content')
_SUB_HEADERS = ['If-Match', 'Prefer', 'Accept', 'Content-Type', 'DataServiceVersion']
def _get_entity_path(table_name, partition_key, row_key):
return '/{0}(PartitionKey=\'{1}\',RowKey=\'{2}\')'.format(
_to_str(table_name),
_to_str(partition_key),
_to_str(row_key))
def _update_storage_table_header(request):
    ''' Add additional headers for a storage table request. '''
# set service version
request.headers['DataServiceVersion'] = '3.0;NetFx'
request.headers['MaxDataServiceVersion'] = '3.0'
def _to_entity_binary(value):
return EdmType.BINARY, _encode_base64(value)
def _to_entity_bool(value):
return None, value
def _to_entity_datetime(value):
return EdmType.DATETIME, _to_utc_datetime(value)
def _to_entity_float(value):
if isnan(value):
return EdmType.DOUBLE, 'NaN'
if value == float('inf'):
return EdmType.DOUBLE, 'Infinity'
if value == float('-inf'):
return EdmType.DOUBLE, '-Infinity'
return None, value
def _to_entity_guid(value):
return EdmType.GUID, str(value)
def _to_entity_int32(value):
if sys.version_info < (3,):
value = long(value)
else:
value = int(value)
if value >= 2 ** 31 or value < -(2 ** 31):
raise TypeError(_ERROR_VALUE_TOO_LARGE.format(str(value), EdmType.INT32))
return None, value
def _to_entity_int64(value):
if sys.version_info < (3,):
ivalue = long(value)
else:
ivalue = int(value)
if ivalue >= 2 ** 63 or ivalue < -(2 ** 63):
raise TypeError(_ERROR_VALUE_TOO_LARGE.format(str(value), EdmType.INT64))
return EdmType.INT64, str(value)
def _to_entity_str(value):
return None, value
def _to_entity_none(value):
return None, None
# Conversion from Python type to a function which returns a tuple of the
# type string and content string.
_PYTHON_TO_ENTITY_CONVERSIONS = {
int: _to_entity_int64,
bool: _to_entity_bool,
datetime: _to_entity_datetime,
float: _to_entity_float,
str: _to_entity_str,
}
# Conversion from Edm type to a function which returns a tuple of the
# type string and content string.
_EDM_TO_ENTITY_CONVERSIONS = {
EdmType.BINARY: _to_entity_binary,
EdmType.BOOLEAN: _to_entity_bool,
EdmType.DATETIME: _to_entity_datetime,
EdmType.DOUBLE: _to_entity_float,
EdmType.GUID: _to_entity_guid,
EdmType.INT32: _to_entity_int32,
EdmType.INT64: _to_entity_int64,
EdmType.STRING: _to_entity_str,
}
if sys.version_info < (3,):
_PYTHON_TO_ENTITY_CONVERSIONS.update({
long: _to_entity_int64,
types.NoneType: _to_entity_none,
unicode: _to_entity_str,
})
def _convert_entity_to_json(source):
''' Converts an entity object to json to send.
The entity format is:
{
"Address":"Mountain View",
"Age":23,
"AmountDue":200.23,
"CustomerCode@odata.type":"Edm.Guid",
"CustomerCode":"c9da6455-213d-42c9-9a79-3e9149a57833",
"CustomerSince@odata.type":"Edm.DateTime",
"CustomerSince":"2008-07-10T00:00:00",
"IsActive":true,
"NumberOfOrders@odata.type":"Edm.Int64",
"NumberOfOrders":"255",
"PartitionKey":"mypartitionkey",
"RowKey":"myrowkey"
}
'''
properties = {}
    # Set the property type for types we can infer when the value has no
    # type info; if the value has type info, use value.type.
for name, value in source.items():
mtype = ''
if isinstance(value, EntityProperty):
conv = _EDM_TO_ENTITY_CONVERSIONS.get(value.type)
if conv is None:
raise TypeError(
_ERROR_TYPE_NOT_SUPPORTED.format(value.type))
mtype, value = conv(value.value)
else:
conv = _PYTHON_TO_ENTITY_CONVERSIONS.get(type(value))
if conv is None and sys.version_info >= (3,) and value is None:
conv = _to_entity_none
if conv is None:
raise TypeError(
_ERROR_CANNOT_SERIALIZE_VALUE_TO_ENTITY.format(
type(value).__name__))
mtype, value = conv(value)
# form the property node
properties[name] = value
if mtype:
properties[name + '@odata.type'] = mtype
# generate the entity_body
return dumps(properties)
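# Illustrative sketch, not part of the original module: an int is serialized
# as an Edm.Int64 string plus an @odata.type annotation, while str, bool, and
# plain float values need no annotation (key order in the JSON may vary).
def _demo_convert_entity():
    source = {'PartitionKey': 'pk', 'RowKey': 'rk', 'NumberOfOrders': 255}
    # -> {"PartitionKey": "pk", "RowKey": "rk", "NumberOfOrders": "255",
    #     "NumberOfOrders@odata.type": "Edm.Int64"}
    return _convert_entity_to_json(source)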
def _convert_table_to_json(table_name):
'''
    Create json to send for a given table name. The JSON format for a table is
    the same as for an entity; the only difference is that a table has the
    single property 'TableName', so we just call _convert_entity_to_json.
table_name:
the name of the table
'''
return _convert_entity_to_json({'TableName': table_name})
def _convert_batch_to_json(batch_requests):
'''
Create json to send for an array of batch requests.
batch_requests:
an array of requests
'''
batch_boundary = b'batch_' + _new_boundary()
changeset_boundary = b'changeset_' + _new_boundary()
body = [b'--' + batch_boundary + b'\n',
b'Content-Type: multipart/mixed; boundary=',
changeset_boundary + b'\n\n']
content_id = 1
# Adds each request body to the POST data.
for _, request in batch_requests:
body.append(b'--' + changeset_boundary + b'\n')
body.append(b'Content-Type: application/http\n')
body.append(b'Content-Transfer-Encoding: binary\n\n')
body.append(request.method.encode('utf-8'))
body.append(b' ')
body.append(request.path.encode('utf-8'))
body.append(b' HTTP/1.1\n')
body.append(b'Content-ID: ')
body.append(str(content_id).encode('utf-8') + b'\n')
content_id += 1
for name, value in request.headers.items():
if name in _SUB_HEADERS:
body.append(name.encode('utf-8') + b': ')
body.append(value.encode('utf-8') + b'\n')
# Add different headers for different request types.
        if request.method != 'DELETE':
body.append(b'Content-Length: ')
body.append(str(len(request.body)).encode('utf-8'))
body.append(b'\n\n')
body.append(request.body + b'\n')
body.append(b'\n')
body.append(b'--' + changeset_boundary + b'--' + b'\n')
body.append(b'--' + batch_boundary + b'--')
return b''.join(body), 'multipart/mixed; boundary=' + batch_boundary.decode('utf-8')
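# Illustrative sketch, not part of the original module: serializing a
# one-operation batch. _StubRequest is a hypothetical stand-in for the
# HTTPRequest objects produced by the request builders.
def _demo_batch_json():
    class _StubRequest(object):
        method = 'POST'
        path = '/mytable'
        headers = {'Content-Type': 'application/json'}
        body = b'{"PartitionKey": "pk", "RowKey": "rk"}'

    body, content_type = _convert_batch_to_json([('rk', _StubRequest())])
    assert content_type.startswith('multipart/mixed; boundary=batch_')
    return body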


@ -0,0 +1,210 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.common import (
AzureException,
AzureHttpError,
)
from azure.cosmosdb.table._error import (
_ERROR_ATTRIBUTE_MISSING,
)
class AzureBatchValidationError(AzureException):
'''
Indicates that a batch operation cannot proceed due to invalid input.
:ivar str message:
A detailed error message indicating the reason for the failure.
'''
class AzureBatchOperationError(AzureHttpError):
'''
Indicates that a batch operation failed.
:ivar str message:
A detailed error message indicating the index of the batch
request which failed and the reason for the failure. For example,
'0:One of the request inputs is out of range.' indicates the 0th batch
request failed as one of its property values was out of range.
:ivar int status_code:
The HTTP status code of the batch request. For example, 400.
:ivar str batch_code:
The batch status code. For example, 'OutOfRangeInput'.
'''
def __init__(self, message, status_code, batch_code):
super(AzureBatchOperationError, self).__init__(message, status_code)
self.code = batch_code
class Entity(dict):
'''
An entity object. Can be accessed as a dict or as an obj. The attributes of
the entity will be created dynamically. For example, the following are both
valid::
entity = Entity()
entity.a = 'b'
entity['x'] = 'y'
'''
def __getattr__(self, name):
try:
return self[name]
except KeyError:
raise AttributeError(_ERROR_ATTRIBUTE_MISSING.format('Entity', name))
__setattr__ = dict.__setitem__
def __delattr__(self, name):
try:
del self[name]
except KeyError:
raise AttributeError(_ERROR_ATTRIBUTE_MISSING.format('Entity', name))
def __dir__(self):
return dir({}) + list(self.keys())
class EntityProperty(object):
'''
An entity property. Used to explicitly set :class:`~EdmType` when necessary.
    Values which require explicit typing are GUID, INT32, and BINARY. Other EdmTypes
    may be explicitly created as EntityProperty objects but need not be. For example,
    both of the lines below create STRING-typed properties on the entity::
entity = Entity()
entity.a = 'b'
entity.x = EntityProperty(EdmType.STRING, 'y')
'''
def __init__(self, type=None, value=None, encrypt=False):
'''
        Constructs an EntityProperty with the given type, value, and encryption flag.
        :param str type: The EdmType of the property.
        :param value: The value of the property.
:param bool encrypt: Indicates whether or not the property should be encrypted.
'''
self.type = type
self.value = value
self.encrypt = encrypt
class Table(object):
'''
Represents an Azure Table. Returned by list_tables.
:ivar str name: The name of the table.
'''
pass
class TablePayloadFormat(object):
'''
Specifies the accepted content type of the response payload. More information
can be found here: https://msdn.microsoft.com/en-us/library/azure/dn535600.aspx
'''
JSON_NO_METADATA = 'application/json;odata=nometadata'
'''Returns no type information for the entity properties.'''
JSON_MINIMAL_METADATA = 'application/json;odata=minimalmetadata'
'''Returns minimal type information for the entity properties.'''
JSON_FULL_METADATA = 'application/json;odata=fullmetadata'
    '''Returns full type information for the entity properties plus some extra odata properties.'''
class EdmType(object):
'''
Used by :class:`~.EntityProperty` to represent the type of the entity property
to be stored by the Table service.
'''
BINARY = 'Edm.Binary'
''' Represents byte data. Must be specified. '''
INT64 = 'Edm.Int64'
    ''' Represents a number between -(2^63) and 2^63 - 1. This is the default type for Python numbers. '''
GUID = 'Edm.Guid'
''' Represents a GUID. Must be specified. '''
DATETIME = 'Edm.DateTime'
''' Represents a date. This type will be inferred for Python datetime objects. '''
STRING = 'Edm.String'
''' Represents a string. This type will be inferred for Python strings. '''
INT32 = 'Edm.Int32'
    ''' Represents a number between -(2^31) and 2^31 - 1. Must be specified or numbers will default to INT64. '''
DOUBLE = 'Edm.Double'
''' Represents a double. This type will be inferred for Python floating point numbers. '''
BOOLEAN = 'Edm.Boolean'
''' Represents a boolean. This type will be inferred for Python bools. '''
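# Illustrative sketch, not part of the original module: BINARY, GUID, and
# INT32 require explicit EntityProperty typing; other types are inferred.
def _demo_typed_entity():
    entity = Entity()
    entity.PartitionKey = 'pk'
    entity.RowKey = 'rk'
    entity.level = EntityProperty(EdmType.INT32, 42)  # stays 32-bit
    entity.blob = EntityProperty(EdmType.BINARY, b'\x00\x01')
    return entity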
class TablePermissions(object):
'''
    TablePermissions class to be used with the :func:`~azure.cosmosdb.table.tableservice.TableService.generate_table_shared_access_signature`
    method and for the AccessPolicies used with :func:`~azure.cosmosdb.table.tableservice.TableService.set_table_acl`.
:ivar TablePermissions TablePermissions.QUERY: Get entities and query entities.
:ivar TablePermissions TablePermissions.ADD: Add entities.
:ivar TablePermissions TablePermissions.UPDATE: Update entities.
:ivar TablePermissions TablePermissions.DELETE: Delete entities.
'''
def __init__(self, query=False, add=False, update=False, delete=False, _str=None):
'''
:param bool query:
Get entities and query entities.
:param bool add:
Add entities. Add and Update permissions are required for upsert operations.
:param bool update:
Update entities. Add and Update permissions are required for upsert operations.
:param bool delete:
Delete entities.
:param str _str:
A string representing the permissions.
'''
if not _str:
_str = ''
self.query = query or ('r' in _str)
self.add = add or ('a' in _str)
self.update = update or ('u' in _str)
self.delete = delete or ('d' in _str)
def __or__(self, other):
return TablePermissions(_str=str(self) + str(other))
def __add__(self, other):
return TablePermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.query else '') +
('a' if self.add else '') +
('u' if self.update else '') +
('d' if self.delete else ''))
TablePermissions.QUERY = TablePermissions(query=True)
TablePermissions.ADD = TablePermissions(add=True)
TablePermissions.UPDATE = TablePermissions(update=True)
TablePermissions.DELETE = TablePermissions(delete=True)
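# Illustrative sketch, not part of the original module: permissions compose
# with '|' or '+' and render as the flag string used in SAS generation.
def _demo_permissions():
    combined = TablePermissions.QUERY | TablePermissions.ADD
    assert str(combined) == 'ra'
    return combined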


@ -0,0 +1,209 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.table._error import (
_ERROR_INCORRECT_PARTITION_KEY_IN_BATCH,
_ERROR_DUPLICATE_ROW_KEY_IN_BATCH,
_ERROR_TOO_MANY_ENTITIES_IN_BATCH,
)
from azure.cosmosdb.table._request import (
_insert_entity,
_update_entity,
_merge_entity,
_delete_entity,
_insert_or_replace_entity,
_insert_or_merge_entity,
)
from azure.cosmosdb.table.models import (
AzureBatchValidationError,
)
class TableBatch(object):
'''
    This class is used to build batch operations for the storage table service.
The Table service supports batch transactions on entities that are in the
same table and belong to the same partition group. Multiple operations are
supported within a single transaction. The batch can include at most 100
entities, and its total payload may be no more than 4 MB in size.
'''
def __init__(self, require_encryption=False, key_encryption_key=None,
encryption_resolver=None):
self._requests = []
self._partition_key = None
self._row_keys = []
self._require_encryption = require_encryption
self._key_encryption_key = key_encryption_key
self._encryption_resolver = encryption_resolver
def insert_entity(self, entity):
'''
Adds an insert entity operation to the batch. See
        :func:`~azure.cosmosdb.table.tableservice.TableService.insert_entity` for more
information on inserts.
The operation will not be executed until the batch is committed.
:param entity:
The entity to insert. Could be a dict or an entity object.
Must contain a PartitionKey and a RowKey.
        :type entity: dict or :class:`~azure.cosmosdb.table.models.Entity`
'''
request = _insert_entity(entity, self._require_encryption, self._key_encryption_key,
self._encryption_resolver)
self._add_to_batch(entity['PartitionKey'], entity['RowKey'], request)
def update_entity(self, entity, if_match='*'):
'''
Adds an update entity operation to the batch. See
        :func:`~azure.cosmosdb.table.tableservice.TableService.update_entity` for more
information on updates.
The operation will not be executed until the batch is committed.
:param entity:
The entity to update. Could be a dict or an entity object.
Must contain a PartitionKey and a RowKey.
        :type entity: dict or :class:`~azure.cosmosdb.table.models.Entity`
:param str if_match:
The client may specify the ETag for the entity on the
request in order to compare to the ETag maintained by the service
for the purpose of optimistic concurrency. The update operation
will be performed only if the ETag sent by the client matches the
value maintained by the server, indicating that the entity has
not been modified since it was retrieved by the client. To force
an unconditional update, set If-Match to the wildcard character (*).
'''
request = _update_entity(entity, if_match, self._require_encryption,
self._key_encryption_key, self._encryption_resolver)
self._add_to_batch(entity['PartitionKey'], entity['RowKey'], request)
def merge_entity(self, entity, if_match='*'):
'''
Adds a merge entity operation to the batch. See
        :func:`~azure.cosmosdb.table.tableservice.TableService.merge_entity` for more
information on merges.
The operation will not be executed until the batch is committed.
:param entity:
The entity to merge. Could be a dict or an entity object.
Must contain a PartitionKey and a RowKey.
        :type entity: dict or :class:`~azure.cosmosdb.table.models.Entity`
:param str if_match:
The client may specify the ETag for the entity on the
request in order to compare to the ETag maintained by the service
for the purpose of optimistic concurrency. The merge operation
will be performed only if the ETag sent by the client matches the
value maintained by the server, indicating that the entity has
not been modified since it was retrieved by the client. To force
an unconditional merge, set If-Match to the wildcard character (*).
'''
request = _merge_entity(entity, if_match, self._require_encryption,
self._key_encryption_key)
self._add_to_batch(entity['PartitionKey'], entity['RowKey'], request)
def delete_entity(self, partition_key, row_key,
if_match='*'):
'''
Adds a delete entity operation to the batch. See
        :func:`~azure.cosmosdb.table.tableservice.TableService.delete_entity` for more
information on deletes.
The operation will not be executed until the batch is committed.
:param str partition_key:
The PartitionKey of the entity.
:param str row_key:
The RowKey of the entity.
:param str if_match:
The client may specify the ETag for the entity on the
request in order to compare to the ETag maintained by the service
for the purpose of optimistic concurrency. The delete operation
will be performed only if the ETag sent by the client matches the
value maintained by the server, indicating that the entity has
not been modified since it was retrieved by the client. To force
an unconditional delete, set If-Match to the wildcard character (*).
'''
request = _delete_entity(partition_key, row_key, if_match)
self._add_to_batch(partition_key, row_key, request)
def insert_or_replace_entity(self, entity):
'''
Adds an insert or replace entity operation to the batch. See
        :func:`~azure.cosmosdb.table.tableservice.TableService.insert_or_replace_entity` for more
information on insert or replace operations.
The operation will not be executed until the batch is committed.
:param entity:
The entity to insert or replace. Could be a dict or an entity object.
Must contain a PartitionKey and a RowKey.
        :type entity: dict or :class:`~azure.cosmosdb.table.models.Entity`
'''
request = _insert_or_replace_entity(entity, self._require_encryption, self._key_encryption_key,
self._encryption_resolver)
self._add_to_batch(entity['PartitionKey'], entity['RowKey'], request)
def insert_or_merge_entity(self, entity):
'''
Adds an insert or merge entity operation to the batch. See
        :func:`~azure.cosmosdb.table.tableservice.TableService.insert_or_merge_entity` for more
information on insert or merge operations.
The operation will not be executed until the batch is committed.
:param entity:
The entity to insert or merge. Could be a dict or an entity object.
Must contain a PartitionKey and a RowKey.
        :type entity: dict or :class:`~azure.cosmosdb.table.models.Entity`
'''
request = _insert_or_merge_entity(entity, self._require_encryption,
self._key_encryption_key)
self._add_to_batch(entity['PartitionKey'], entity['RowKey'], request)
def _add_to_batch(self, partition_key, row_key, request):
'''
Validates batch-specific rules.
:param str partition_key:
PartitionKey of the entity.
:param str row_key:
RowKey of the entity.
:param request:
            The request to insert, update, or delete the entity.
'''
# All same partition keys
if self._partition_key:
if self._partition_key != partition_key:
raise AzureBatchValidationError(_ERROR_INCORRECT_PARTITION_KEY_IN_BATCH)
else:
self._partition_key = partition_key
# All different row keys
if row_key in self._row_keys:
raise AzureBatchValidationError(_ERROR_DUPLICATE_ROW_KEY_IN_BATCH)
else:
self._row_keys.append(row_key)
# 100 entities
if len(self._requests) >= 100:
raise AzureBatchValidationError(_ERROR_TOO_MANY_ENTITIES_IN_BATCH)
# Add the request to the batch
self._requests.append((row_key, request))
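# Illustrative sketch, not part of the original module: composing a batch of
# operations; committing it (e.g. via the table service client) is not shown,
# and the keys below are hypothetical.
def _demo_table_batch():
    batch = TableBatch()
    batch.insert_entity({'PartitionKey': 'pk', 'RowKey': '1', 'Age': 23})
    batch.update_entity({'PartitionKey': 'pk', 'RowKey': '2', 'Age': 24})
    batch.delete_entity('pk', '3')
    return batch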

The diff for this file is not shown because of its large size.


@ -0,0 +1,533 @@
"""
"wheel" copyright (c) 2012-2017 Daniel Holth <dholth@fastmail.fm> and
contributors.
The MIT License
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
Create an Azure wheel (.whl) distribution (a wheel is a built archive format).
This file is a copy of the official bdist_wheel file from wheel 0.30.0a0, extended
at the bottom with some Microsoft extensions for the Azure SDK for Python.
"""
import csv
import hashlib
import os
import subprocess
import warnings
import shutil
import json
import sys
try:
import sysconfig
except ImportError: # pragma nocover
# Python < 2.7
import distutils.sysconfig as sysconfig
import pkg_resources
safe_name = pkg_resources.safe_name
safe_version = pkg_resources.safe_version
from shutil import rmtree
from email.generator import Generator
from distutils.core import Command
from distutils.sysconfig import get_python_version
from distutils import log as logger
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag, get_platform
from wheel.util import native, open_for_csv
from wheel.archive import archive_wheelfile
from wheel.pkginfo import read_pkg_info, write_pkg_info
from wheel.metadata import pkginfo_to_dict
from wheel import pep425tags, metadata
from wheel import __version__ as wheel_version
def safer_name(name):
return safe_name(name).replace('-', '_')
def safer_version(version):
return safe_version(version).replace('-', '_')
class bdist_wheel(Command):
description = 'create a wheel distribution'
user_options = [('bdist-dir=', 'b',
"temporary directory for creating the distribution"),
('plat-name=', 'p',
"platform name to embed in generated filenames "
"(default: %s)" % get_platform()),
('keep-temp', 'k',
"keep the pseudo-installation tree around after " +
"creating the distribution archive"),
('dist-dir=', 'd',
"directory to put final built distributions in"),
('skip-build', None,
"skip rebuilding everything (for testing/debugging)"),
                    ('relative', None,
                     "build the archive using relative paths "
                     "(default: false)"),
('owner=', 'u',
"Owner name used when creating a tar file"
" [default: current user]"),
('group=', 'g',
"Group name used when creating a tar file"
" [default: current group]"),
('universal', None,
"make a universal wheel"
" (default: false)"),
('python-tag=', None,
"Python implementation compatibility tag"
" (default: py%s)" % get_impl_ver()[0]),
]
boolean_options = ['keep-temp', 'skip-build', 'relative', 'universal']
def initialize_options(self):
self.bdist_dir = None
self.data_dir = None
self.plat_name = None
self.plat_tag = None
self.format = 'zip'
self.keep_temp = False
self.dist_dir = None
self.distinfo_dir = None
self.egginfo_dir = None
self.root_is_pure = None
self.skip_build = None
self.relative = False
self.owner = None
self.group = None
self.universal = False
self.python_tag = 'py' + get_impl_ver()[0]
self.plat_name_supplied = False
def finalize_options(self):
if self.bdist_dir is None:
bdist_base = self.get_finalized_command('bdist').bdist_base
self.bdist_dir = os.path.join(bdist_base, 'wheel')
self.data_dir = self.wheel_dist_name + '.data'
self.plat_name_supplied = self.plat_name is not None
need_options = ('dist_dir', 'plat_name', 'skip_build')
self.set_undefined_options('bdist',
*zip(need_options, need_options))
self.root_is_pure = not (self.distribution.has_ext_modules()
or self.distribution.has_c_libraries())
# Support legacy [wheel] section for setting universal
wheel = self.distribution.get_option_dict('wheel')
if 'universal' in wheel:
# please don't define this in your global configs
val = wheel['universal'][1].strip()
if val.lower() in ('1', 'true', 'yes'):
self.universal = True
@property
def wheel_dist_name(self):
"""Return distribution full name with - replaced with _"""
return '-'.join((safer_name(self.distribution.get_name()),
safer_version(self.distribution.get_version())))
def get_tag(self):
# bdist sets self.plat_name if unset, we should only use it for purepy
# wheels if the user supplied it.
if self.plat_name_supplied:
plat_name = self.plat_name
elif self.root_is_pure:
plat_name = 'any'
else:
plat_name = self.plat_name or get_platform()
if plat_name in ('linux-x86_64', 'linux_x86_64') and sys.maxsize == 2147483647:
plat_name = 'linux_i686'
plat_name = plat_name.replace('-', '_').replace('.', '_')
if self.root_is_pure:
if self.universal:
impl = 'py2.py3'
else:
impl = self.python_tag
tag = (impl, 'none', plat_name)
else:
impl_name = get_abbr_impl()
impl_ver = get_impl_ver()
# PEP 3149
abi_tag = str(get_abi_tag()).lower()
tag = (impl_name + impl_ver, abi_tag, plat_name)
supported_tags = pep425tags.get_supported(
supplied_platform=plat_name if self.plat_name_supplied else None)
# XXX switch to this alternate implementation for non-pure:
assert tag == supported_tags[0], "%s != %s" % (tag, supported_tags[0])
return tag
def get_archive_basename(self):
"""Return archive name without extension"""
impl_tag, abi_tag, plat_tag = self.get_tag()
archive_basename = "%s-%s-%s-%s" % (
self.wheel_dist_name,
impl_tag,
abi_tag,
plat_tag)
return archive_basename
def run(self):
build_scripts = self.reinitialize_command('build_scripts')
build_scripts.executable = 'python'
if not self.skip_build:
self.run_command('build')
install = self.reinitialize_command('install',
reinit_subcommands=True)
install.root = self.bdist_dir
install.compile = False
install.skip_build = self.skip_build
install.warn_dir = False
# A wheel without setuptools scripts is more cross-platform.
# Use the (undocumented) `no_ep` option to setuptools'
# install_scripts command to avoid creating entry point scripts.
install_scripts = self.reinitialize_command('install_scripts')
install_scripts.no_ep = True
# Use a custom scheme for the archive, because we have to decide
# at installation time which scheme to use.
for key in ('headers', 'scripts', 'data', 'purelib', 'platlib'):
setattr(install,
'install_' + key,
os.path.join(self.data_dir, key))
basedir_observed = ''
if os.name == 'nt':
# win32 barfs if any of these are ''; could be '.'?
# (distutils.command.install:change_roots bug)
basedir_observed = os.path.normpath(os.path.join(self.data_dir, '..'))
self.install_libbase = self.install_lib = basedir_observed
setattr(install,
'install_purelib' if self.root_is_pure else 'install_platlib',
basedir_observed)
logger.info("installing to %s", self.bdist_dir)
self.run_command('install')
archive_basename = self.get_archive_basename()
pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
if not self.relative:
archive_root = self.bdist_dir
else:
archive_root = os.path.join(
self.bdist_dir,
self._ensure_relative(install.install_base))
self.set_undefined_options(
'install_egg_info', ('target', 'egginfo_dir'))
self.distinfo_dir = os.path.join(self.bdist_dir,
'%s.dist-info' % self.wheel_dist_name)
self.egg2dist(self.egginfo_dir,
self.distinfo_dir)
self.write_wheelfile(self.distinfo_dir)
self.write_record(self.bdist_dir, self.distinfo_dir)
# Make the archive
if not os.path.exists(self.dist_dir):
os.makedirs(self.dist_dir)
wheel_name = archive_wheelfile(pseudoinstall_root, archive_root)
# Sign the archive
if 'WHEEL_TOOL' in os.environ:
subprocess.call([os.environ['WHEEL_TOOL'], 'sign', wheel_name])
# Add to 'Distribution.dist_files' so that the "upload" command works
getattr(self.distribution, 'dist_files', []).append(
('bdist_wheel', get_python_version(), wheel_name))
if not self.keep_temp:
if self.dry_run:
logger.info('removing %s', self.bdist_dir)
else:
rmtree(self.bdist_dir)
def write_wheelfile(self, wheelfile_base, generator='bdist_wheel (' + wheel_version + ')'):
from email.message import Message
msg = Message()
msg['Wheel-Version'] = '1.0' # of the spec
msg['Generator'] = generator
msg['Root-Is-Purelib'] = str(self.root_is_pure).lower()
# Doesn't work for bdist_wininst
impl_tag, abi_tag, plat_tag = self.get_tag()
for impl in impl_tag.split('.'):
for abi in abi_tag.split('.'):
for plat in plat_tag.split('.'):
msg['Tag'] = '-'.join((impl, abi, plat))
wheelfile_path = os.path.join(wheelfile_base, 'WHEEL')
logger.info('creating %s', wheelfile_path)
with open(wheelfile_path, 'w') as f:
Generator(f, maxheaderlen=0).flatten(msg)
def _ensure_relative(self, path):
# copied from dir_util, deleted
drive, path = os.path.splitdrive(path)
if path[0:1] == os.sep:
path = drive + path[1:]
return path
def _pkginfo_to_metadata(self, egg_info_path, pkginfo_path):
return metadata.pkginfo_to_metadata(egg_info_path, pkginfo_path)
def license_file(self):
"""Return license filename from a license-file key in setup.cfg, or None."""
metadata = self.distribution.get_option_dict('metadata')
        if 'license_file' not in metadata:
return None
return metadata['license_file'][1]
def setupcfg_requirements(self):
"""Generate requirements from setup.cfg as
('Requires-Dist', 'requirement; qualifier') tuples. From a metadata
section in setup.cfg:
[metadata]
provides-extra = extra1
extra2
requires-dist = requirement; qualifier
another; qualifier2
unqualified
Yields
('Provides-Extra', 'extra1'),
('Provides-Extra', 'extra2'),
('Requires-Dist', 'requirement; qualifier'),
('Requires-Dist', 'another; qualifier2'),
('Requires-Dist', 'unqualified')
"""
metadata = self.distribution.get_option_dict('metadata')
# our .ini parser folds - to _ in key names:
for key, title in (('provides_extra', 'Provides-Extra'),
('requires_dist', 'Requires-Dist')):
            if key not in metadata:
continue
field = metadata[key]
for line in field[1].splitlines():
line = line.strip()
if not line:
continue
yield (title, line)
def add_requirements(self, metadata_path):
"""Add additional requirements from setup.cfg to file metadata_path"""
additional = list(self.setupcfg_requirements())
if not additional: return
pkg_info = read_pkg_info(metadata_path)
if 'Provides-Extra' in pkg_info or 'Requires-Dist' in pkg_info:
warnings.warn('setup.cfg requirements overwrite values from setup.py')
del pkg_info['Provides-Extra']
del pkg_info['Requires-Dist']
for k, v in additional:
pkg_info[k] = v
write_pkg_info(metadata_path, pkg_info)
def egg2dist(self, egginfo_path, distinfo_path):
"""Convert an .egg-info directory into a .dist-info directory"""
def adios(p):
"""Appropriately delete directory, file or link."""
if os.path.exists(p) and not os.path.islink(p) and os.path.isdir(p):
shutil.rmtree(p)
elif os.path.exists(p):
os.unlink(p)
adios(distinfo_path)
if not os.path.exists(egginfo_path):
# There is no egg-info. This is probably because the egg-info
# file/directory is not named matching the distribution name used
# to name the archive file. Check for this case and report
# accordingly.
import glob
pat = os.path.join(os.path.dirname(egginfo_path), '*.egg-info')
possible = glob.glob(pat)
err = "Egg metadata expected at %s but not found" % (egginfo_path,)
if possible:
alt = os.path.basename(possible[0])
err += " (%s found - possible misnamed archive file?)" % (alt,)
raise ValueError(err)
if os.path.isfile(egginfo_path):
# .egg-info is a single file
pkginfo_path = egginfo_path
pkg_info = self._pkginfo_to_metadata(egginfo_path, egginfo_path)
os.mkdir(distinfo_path)
else:
# .egg-info is a directory
pkginfo_path = os.path.join(egginfo_path, 'PKG-INFO')
pkg_info = self._pkginfo_to_metadata(egginfo_path, pkginfo_path)
# ignore common egg metadata that is useless to wheel
shutil.copytree(egginfo_path, distinfo_path,
ignore=lambda x, y: set(('PKG-INFO',
'requires.txt',
'SOURCES.txt',
'not-zip-safe',)))
# delete dependency_links if it is only whitespace
dependency_links_path = os.path.join(distinfo_path, 'dependency_links.txt')
with open(dependency_links_path, 'r') as dependency_links_file:
dependency_links = dependency_links_file.read().strip()
if not dependency_links:
adios(dependency_links_path)
write_pkg_info(os.path.join(distinfo_path, 'METADATA'), pkg_info)
# XXX deprecated. Still useful for current distribute/setuptools.
metadata_path = os.path.join(distinfo_path, 'METADATA')
self.add_requirements(metadata_path)
# XXX intentionally a different path than the PEP.
metadata_json_path = os.path.join(distinfo_path, 'metadata.json')
pymeta = pkginfo_to_dict(metadata_path,
distribution=self.distribution)
if 'description' in pymeta:
description_filename = 'DESCRIPTION.rst'
description_text = pymeta.pop('description')
description_path = os.path.join(distinfo_path,
description_filename)
with open(description_path, "wb") as description_file:
description_file.write(description_text.encode('utf-8'))
pymeta['extensions']['python.details']['document_names']['description'] = description_filename
# XXX heuristically copy any LICENSE/LICENSE.txt?
license = self.license_file()
if license:
license_filename = 'LICENSE.txt'
shutil.copy(license, os.path.join(self.distinfo_dir, license_filename))
pymeta['extensions']['python.details']['document_names']['license'] = license_filename
with open(metadata_json_path, "w") as metadata_json:
json.dump(pymeta, metadata_json, sort_keys=True)
adios(egginfo_path)
def write_record(self, bdist_dir, distinfo_dir):
from wheel.util import urlsafe_b64encode
record_path = os.path.join(distinfo_dir, 'RECORD')
record_relpath = os.path.relpath(record_path, bdist_dir)
def walk():
for dir, dirs, files in os.walk(bdist_dir):
dirs.sort()
for f in sorted(files):
yield os.path.join(dir, f)
def skip(path):
"""Wheel hashes every possible file."""
return (path == record_relpath)
with open_for_csv(record_path, 'w+') as record_file:
writer = csv.writer(record_file)
for path in walk():
relpath = os.path.relpath(path, bdist_dir)
if skip(relpath):
hash = ''
size = ''
else:
with open(path, 'rb') as f:
data = f.read()
digest = hashlib.sha256(data).digest()
hash = 'sha256=' + native(urlsafe_b64encode(digest))
size = len(data)
record_path = os.path.relpath(
path, bdist_dir).replace(os.path.sep, '/')
writer.writerow((record_path, hash, size))
#-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------
from distutils import log as logger
import os.path
#from wheel.bdist_wheel import bdist_wheel
class azure_bdist_wheel(bdist_wheel):
description = "Create an Azure wheel distribution"
user_options = bdist_wheel.user_options + \
[('azure-namespace-package=', None,
"Name of the deepest nspkg used")]
def initialize_options(self):
bdist_wheel.initialize_options(self)
self.azure_namespace_package = None
def finalize_options(self):
bdist_wheel.finalize_options(self)
if self.azure_namespace_package and not self.azure_namespace_package.endswith("-nspkg"):
raise ValueError("azure_namespace_package must finish by -nspkg")
    def run(self):
        # Only inject the nspkg requirement when a namespace package was
        # given; otherwise the literal string 'None>=2.0.0' would be appended.
        if self.azure_namespace_package:
            if not self.distribution.install_requires:
                self.distribution.install_requires = []
            self.distribution.install_requires.append(
                "{}>=2.0.0".format(self.azure_namespace_package))
        bdist_wheel.run(self)
def write_record(self, bdist_dir, distinfo_dir):
if self.azure_namespace_package:
# Split and remove last part, assuming it's "nspkg"
subparts = self.azure_namespace_package.split('-')[0:-1]
folder_with_init = [os.path.join(*subparts[0:i+1]) for i in range(len(subparts))]
for azure_sub_package in folder_with_init:
init_file = os.path.join(bdist_dir, azure_sub_package, '__init__.py')
if os.path.isfile(init_file):
logger.info("manually remove {} while building the wheel".format(init_file))
os.remove(init_file)
else:
raise ValueError("Unable to find {}. Are you sure of your namespace package?".format(init_file))
bdist_wheel.write_record(self, bdist_dir, distinfo_dir)
cmdclass = {
'bdist_wheel': azure_bdist_wheel,
}
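# Illustrative sketch: a setup.py wiring in this command class. The package
# name and namespace value below are hypothetical.
#
#     from setuptools import setup
#     from azure_bdist_wheel import cmdclass
#
#     setup(name='azure-cosmosdb-table',
#           version='0.1.0',
#           cmdclass=cmdclass)
#
# Build with:
#     python setup.py bdist_wheel --azure-namespace-package azure-cosmosdb-nspkg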


@ -0,0 +1,3 @@
call InstallDocDependencies.bat
sphinx-apidoc -e -o .\ref ..\azure
call make.bat html


@ -0,0 +1 @@
pip install -r requirements.txt


@ -0,0 +1,177 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pydocumentdb.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pydocumentdb.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/pydocumentdb"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pydocumentdb"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."


@@ -0,0 +1,238 @@
# -*- coding: utf-8 -*-
#
# azure-storage-python documentation build configuration file, created by
# sphinx-quickstart on Fri Jun 27 15:42:45 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import pip
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../azure-storage-blob'))
sys.path.insert(0, os.path.abspath('../azure-storage-file'))
sys.path.insert(0, os.path.abspath('../azure-storage-table'))
sys.path.insert(0, os.path.abspath('../azure-storage-queue'))
sys.path.insert(0, os.path.abspath('../azure-storage-common'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.autosummary', 'sphinx.ext.doctest',
'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Azure Storage SDK for Python'
copyright = u'2015, Microsoft'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.36.0'
# The full version, including alpha/beta/rc tags.
release = '0.36.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for extensions ----------------------------------------------------
autoclass_content = 'both'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#html_theme = 'default'
#html_theme_options = {'collapsiblesidebar': True}
# Activate the theme.
#pip.main(['install', 'sphinx_bootstrap_theme'])
#import sphinx_bootstrap_theme
#html_theme = 'bootstrap'
#html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'azure-storage-python-doc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'azure-storage-python.tex', u'Azure SDK for Python Documentation',
u'Microsoft', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True


@@ -0,0 +1,101 @@
Azure Storage SDK for Python.
=============================
Installation:
-------------
Please refer to the GitHub page for the installation instructions: https://github.com/Azure/azure-storage-python
If you are looking for Azure Service Bus or the Azure management libraries,
please visit https://github.com/Azure/azure-sdk-for-python
Documentation:
--------------
* `Table Getting Started Doc <https://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-table-storage/>`__ -- (:doc:`API <ref/azure.storage.table>`)
* :ref:`Reference Documentation - All Services<modindex>`
Features:
---------

- Blob

  - Create/Read/Update/Delete Containers
  - Create/Read/Update/Delete Blobs
  - Advanced Blob Operations

- Queue

  - Create/Delete Queues
  - Insert/Peek Queue Messages
  - Advanced Queue Operations

- Table

  - Create/Read/Update/Delete Tables
  - Create/Read/Update/Delete Entities
  - Batch operations
  - Advanced Table Operations

- Files

  - Create/Update/Delete Shares
  - Create/Update/Delete Directories
  - Create/Read/Update/Delete Files
  - Advanced File Operations
System Requirements:
--------------------
The supported Python versions are 2.7.x, 3.3.x, 3.4.x, and 3.5.x.
To download Python, please visit
https://www.python.org/download/
We recommend Python Tools for Visual Studio as a development environment for developing your applications. Please visit http://aka.ms/python for more information.
Need Help?:
-----------
Be sure to check out the Microsoft Azure `Developer Forums on Stack
Overflow <http://go.microsoft.com/fwlink/?LinkId=234489>`__ if you have
trouble with the provided code.
Contributing:
-------------
Contribute Code or Provide Feedback:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to become an active contributor to this project, please
follow the instructions provided in `Microsoft Azure Projects
Contribution
Guidelines <http://windowsazure.github.com/guidelines.html>`__.
If you encounter any bugs with the library, please file an issue in the
`Issues <https://github.com/Azure/azure-storage-python/issues>`__
section of the project.
Learn More
==========
`Microsoft Azure Python Developer
Center <http://azure.microsoft.com/en-us/develop/python/>`__
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. toctree::
   :hidden:

.. toctree::
   :hidden:
   :glob:

   upgrade
   ref/*


@@ -0,0 +1,242 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\pydocumentdb.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\pydocumentdb.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end


@@ -0,0 +1,7 @@
azure.storage.common.cloudstorageaccount module
===============================================
.. automodule:: azure.storage.common.cloudstorageaccount
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,7 @@
azure.storage.common.models module
==================================
.. automodule:: azure.storage.common.models
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,20 @@
azure.storage.common package
============================
Submodules
----------
.. toctree::

   azure.storage.common.cloudstorageaccount
   azure.storage.common.models
   azure.storage.common.sharedaccesssignature
   azure.storage.common.storageclient

Module contents
---------------

.. automodule:: azure.storage.common
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,7 @@
azure.storage.common.sharedaccesssignature module
=================================================
.. automodule:: azure.storage.common.sharedaccesssignature
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,7 @@
azure.storage.common.storageclient module
=========================================
.. automodule:: azure.storage.common.storageclient
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,7 @@
azure.storage.table.models module
=================================
.. automodule:: azure.storage.table.models
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,19 @@
azure.storage.table package
===========================
Submodules
----------
.. toctree::

   azure.storage.table.models
   azure.storage.table.tablebatch
   azure.storage.table.tableservice

Module contents
---------------

.. automodule:: azure.storage.table
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,7 @@
azure.storage.table.tablebatch module
=====================================
.. automodule:: azure.storage.table.tablebatch
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,7 @@
azure.storage.table.tableservice module
=======================================
.. automodule:: azure.storage.table.tableservice
   :members:
   :undoc-members:
   :show-inheritance:


@@ -0,0 +1,8 @@
azure
=====
.. toctree::
   :maxdepth: 4

   azure.storage.common
   azure.storage.table


@@ -0,0 +1,2 @@
sphinx
azure-common


@@ -0,0 +1,79 @@
Upgrade Guide
===============================
This guide is intended to help upgrade code written for the Azure Storage Python
library before version 0.30.0.
The best way to see how to upgrade a specific API is to take a look at the usage
samples in the `Samples <https://github.com/Azure/azure-storage-python/tree/master/samples>`__
directory on GitHub. A complete list of changes and additions can be found in
the ChangeLog and BreakingChanges documents. Below is a summary of those
documents covering the pieces most relevant to upgrading.
General Changes
===============================
In general, we attempted to use more appropriate Python types for parameter and
return values rather than always taking and receiving strings. Parameter and return
values previously prefixed with x_ms were simplified by removing this prefix, and
parameter names separate words with '_', as is idiomatic.
Listing returns a generator rather than a result segment. This generator automatically
follows continuation tokens as more results are requested.
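For example (a minimal sketch, assuming an existing account; credentials are
placeholders), iterating the returned generator pulls additional result
segments transparently:

.. code-block:: python

    from azure.cosmosdb.table import TableService

    service = TableService(account_name='<account_name>', account_key='<account_key>')

    # The generator fetches result segments lazily and follows continuation
    # tokens internally; no manual marker handling is needed.
    for table in service.list_tables():
        print(table.name)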
SAS methods take several individual parameters rather than a single parameter
object. Similarly, ACL getters and setters take dictionaries mapping id to
AccessPolicy rather than a list of SignedIdentifiers each holding an id and an
AccessPolicy.
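As an illustrative sketch (the table name, identifier id, and expiry are
placeholders), a table SAS is generated from individual parameters, and an ACL
is set from a dictionary mapping id to AccessPolicy:

.. code-block:: python

    from datetime import datetime, timedelta
    from azure.cosmosdb.common import AccessPolicy
    from azure.cosmosdb.table import TableService, TablePermissions

    service = TableService(account_name='<account_name>', account_key='<account_key>')

    # SAS parameters are passed individually rather than via a parameter object.
    token = service.generate_table_shared_access_signature(
        'mytable', TablePermissions.QUERY, datetime.utcnow() + timedelta(hours=1))

    # ACL setters take a dict mapping a signed identifier id to an AccessPolicy.
    service.set_table_acl('mytable', {'id': AccessPolicy(
        permission=TablePermissions.QUERY,
        expiry=datetime.utcnow() + timedelta(hours=1))})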
Blob
===============================
The single BlobService object was divided into three subservices for the different
blob types (BlockBlobService, PageBlobService, AppendBlobService) with common
methods in the abstract BaseBlobService from which they inherit. This was done for two
reasons. First, to reduce confusion when blob-type specific methods were used on
the incorrect blob type. Second, to simplify the BlobService object which had grown
to be quite extensive when append blob was added.
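A minimal sketch of the new layout (assuming the azure-storage blob package is
installed; credentials are placeholders):

.. code-block:: python

    from azure.storage.blob import BlockBlobService, PageBlobService, AppendBlobService

    # One service object per blob type; methods shared by all blob types
    # live on the abstract BaseBlobService these classes inherit from.
    block_service = BlockBlobService(account_name='<account_name>', account_key='<account_key>')
    page_service = PageBlobService(account_name='<account_name>', account_key='<account_key>')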
ContentSettings objects have replaced all content_* and cache_control parameters
and return values for applicable APIs. This is intended to highlight that the
semantics of setting content properties is replace rather than merge, meaning that
setting any one property will cause any unspecified properties to be cleared on
the blob.
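A short sketch of the replace semantics (container, blob, and credentials are
placeholders):

.. code-block:: python

    from azure.storage.blob import BlockBlobService, ContentSettings

    service = BlockBlobService(account_name='<account_name>', account_key='<account_key>')

    # Setting content properties replaces the entire set: any property not
    # specified here (content_language, cache_control, ...) is cleared.
    service.set_blob_properties(
        'mycontainer', 'myblob',
        content_settings=ContentSettings(content_type='application/json'))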
On the performance side, single-threaded blob download APIs now download the
blob without chunking, which improves performance and avoids an initial request
to determine the size of the blob. As a result, the progress_callback may
receive None for its total parameter when parallelism is off, to allow for this
optimization.
Queue
===============================
The largest change to the QueueService class is that queue messages are now both
XML encoded and decoded by default. In past versions, messages were either not
encoded or decoded by default, or only encoded. Encoding and decoding behavior
can be customized using the QueueService encode_function and decode_function
instance variables.
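For instance (a minimal sketch with placeholder credentials), the default text
encoding can be swapped for base64:

.. code-block:: python

    from azure.storage.queue import QueueService
    from azure.storage.queue.models import QueueMessageFormat

    service = QueueService(account_name='<account_name>', account_key='<account_key>')

    # Replace the default XML text encoding/decoding with base64.
    service.encode_function = QueueMessageFormat.text_base64encode
    service.decode_function = QueueMessageFormat.text_base64decode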
Methods operating on queue messages that have return values return QueueMessage
objects. QueueMessage objects contain essentially the same fields as previously,
but the times are returned as dates rather than strings and the dequeue count is
returned as an int.
Table
===============================
Rather than having a boolean switch for turning batching on and off, batches are
an object which can be populated and then committed. Entities can be sent as dictionaries
or as Entity objects, and returned entities are accessible as either objects or
dictionaries. Methods which access and modify entities have been simplified so that
if they take an entity object they extract the partition key and row key from that
object rather than requiring these be sent separately.
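A minimal sketch of the batch object (table name and credentials are
placeholders; all entities in a batch must share a partition key):

.. code-block:: python

    from azure.cosmosdb.table import TableService, TableBatch

    service = TableService(account_name='<account_name>', account_key='<account_key>')

    # Populate the batch, then commit it in a single request.
    batch = TableBatch()
    batch.insert_entity({'PartitionKey': 'pk', 'RowKey': '1', 'value': 1})
    batch.insert_entity({'PartitionKey': 'pk', 'RowKey': '2', 'value': 2})
    service.commit_batch('mytable', batch)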
All table entity integer values are stored on the service with type Edm.Int64
unless the type is explicitly overridden as Edm.Int32. Formerly, the type was
decided based on the size of the number, but this resulted in hard-to-predict
types on the service, so the more consistent option was chosen.
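For example (a sketch using the types shipped with the library):

.. code-block:: python

    from azure.cosmosdb.table import Entity, EntityProperty, EdmType

    entity = Entity()
    entity.PartitionKey = 'pk'
    entity.RowKey = '1'
    entity.big = 2 ** 40                              # stored as Edm.Int64 by default
    entity.small = EntityProperty(EdmType.INT32, 10)  # explicit Edm.Int32 override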
Operations no longer echo content from the service and JSON is used instead of
AtomPub, improving performance.


@@ -0,0 +1 @@



@@ -0,0 +1,16 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from .authentication import AuthenticationSamples
from .client import ClientSamples


@@ -0,0 +1,61 @@
# coding: utf-8

# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.cosmosdb.table import TableService
from azure.cosmosdb.common import CloudStorageAccount


class AuthenticationSamples():

    def __init__(self):
        pass

    def run_all_samples(self):
        self.key_auth()
        self.sas_auth()
        self.emulator()
        self.connection_string()

    def key_auth(self):
        # With account
        account = CloudStorageAccount(account_name='<account_name>', account_key='<account_key>')
        client = account.create_table_service()

        # Directly
        client = TableService(account_name='<account_name>', account_key='<account_key>')

    def sas_auth(self):
        # With account
        account = CloudStorageAccount(account_name='<account_name>', sas_token='<sas_token>')
        client = account.create_table_service()

        # Directly
        client = TableService(account_name='<account_name>', sas_token='<sas_token>')

    def emulator(self):
        # With account
        account = CloudStorageAccount(is_emulated=True)
        client = account.create_table_service()

        # Directly
        client = TableService(is_emulated=True)

        # The emulator does not at the time of writing support append blobs or
        # the file service.

    def connection_string(self):
        # Connection strings may be retrieved from the Portal or constructed manually
        connection_string = 'AccountName=<account_name>;AccountKey=<account_key>;'
        client = TableService(connection_string=connection_string)


@@ -0,0 +1,139 @@
# coding: utf-8
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import requests
from azure.cosmosdb.table import TableService
from azure.cosmosdb.common.models import LocationMode
from azure.cosmosdb.common.retry import (
ExponentialRetry,
LinearRetry,
no_retry,
)
class ClientSamples():

    def __init__(self):
        pass

    def run_all_samples(self):
        self.retries()
        self.read_from_secondary()
        self.custom_endpoint()
        self.protocol()
        self.request_session()
        self.proxy()
        self.callbacks()

    def retries(self):
        # By default, retries are performed with an exponential backoff.
        # Any custom retry logic may be used by simply defining a retry function,
        # but several easy pre-written options are available with modifiable settings.
        client = TableService(account_name='<account_name>', account_key='<account_key>')

        # Use an exponential retry, but modify the backoff settings.
        # Here, we increase the initial back off, increase the number of retry attempts
        # and decrease the base of the exponential backoff.
        client.retry = ExponentialRetry(initial_backoff=30, increment_power=2, max_attempts=5).retry

        # Use a default linear retry policy instead
        client.retry = LinearRetry().retry

        # Turn off retries
        client.retry = no_retry

    def read_from_secondary(self):
        # If you are using RA-GRS accounts, you may want to enable reading from the
        # secondary endpoint. Note that your application will have to handle this
        # data potentially being out of date as the secondary may be behind the
        # primary.
        client = TableService(account_name='<account_name>', account_key='<account_key>')

        # The location mode is set to primary by default meaning that all requests
        # are sent to the primary endpoint. If you'd like to instead read from the
        # secondary endpoint by default, set location mode to secondary. Note that
        # writes will continue to go to primary as they are not allowed on secondary.
        client.location_mode = LocationMode.SECONDARY

        # You may also decide you want to retry to secondary. This is useful if
        # you'd like to automatically handle the primary being temporarily down.
        # Again, your application will have to handle data being potentially out
        # of date. Retry to secondary logic may be built into a custom retry policy,
        # but our retry policies have a flag to enable it. Here we use the same
        # exponential retry as by default, but allow it to retry to secondary if
        # the initial request to primary fails.
        client.location_mode = LocationMode.PRIMARY  # Reset the location_mode to start with primary
        client.retry = ExponentialRetry(retry_to_secondary=True).retry

    def custom_endpoint(self):
        # Custom endpoints are necessary for certain regions.
        # The most common usage is to connect to the China cloud.
        client = TableService(account_name='<account_name>', account_key='<account_key>',
                              endpoint_suffix='core.chinacloudapi.cn')

    def protocol(self):
        # https is the default protocol and is strongly recommended for security.
        # However, http may be used if desired.
        client = TableService(account_name='<account_name>', account_key='<account_key>',
                              protocol='http')

        # Set later
        client = TableService(account_name='<account_name>', account_key='<account_key>')
        client.protocol = 'http'

    def request_session(self):
        # A custom request session may be used to set special network options
        session = requests.Session()
        client = TableService(account_name='<account_name>', account_key='<account_key>',
                              request_session=session)

        # Set later
        client = TableService(account_name='<account_name>', account_key='<account_key>')
        client.request_session = session

    def proxy(self):
        # Unauthenticated
        client = TableService(account_name='<account_name>', account_key='<account_key>')
        client.set_proxy('127.0.0.1', '8888')

        # Authenticated
        client = TableService(account_name='<account_name>', account_key='<account_key>')
        proxy_user = '1'
        proxy_password = '1'
        client.set_proxy('127.0.0.1', '8888', user=proxy_user, password=proxy_password)

    def callbacks(self):
        # Callbacks may be used to read or modify the request and response.
        # The request_callback is called when the request is complete except for
        # adding the authentication and date headers.
        # The response_callback is called when the HTTP response is received before
        # any parsing is done.

        # Custom client request id
        client = TableService(account_name='<account_name>', account_key='<account_key>')

        def request_callback(request):
            request.headers['x-ms-client-request-id'] = '<my custom id>'

        client.request_callback = request_callback

        # View data from the response
        def response_callback(response):
            status = response.status
            headers = response.headers

        # Force an exists call to succeed by resetting the status
        client.response_callback = response_callback


@@ -0,0 +1,19 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
STORAGE_ACCOUNT_NAME = ''
STORAGE_ACCOUNT_KEY = ''
SAS = ''
IS_EMULATED = False


@@ -0,0 +1,17 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from .encryption_usage import TableEncryptionSamples
from .sas_usage import TableSasSamples
from .table_usage import TableSamples


@@ -0,0 +1,430 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import uuid
from os import urandom
from azure.common import AzureException
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric.padding import (
OAEP,
MGF1,
)
from cryptography.hazmat.primitives.asymmetric.rsa import generate_private_key
from cryptography.hazmat.primitives.hashes import SHA1
from cryptography.hazmat.primitives.keywrap import (
aes_key_wrap,
aes_key_unwrap,
)
from azure.cosmosdb.table import (
Entity,
TableBatch,
EdmType,
EntityProperty,
TablePayloadFormat,
)
# Sample implementations of the encryption-related interfaces.

# Error message used by the wrappers below; defined here so the sample is
# self-contained.
_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM = 'Unknown key wrap algorithm.'


class KeyWrapper:

    def __init__(self, kid):
        self.kek = urandom(32)
        self.backend = default_backend()
        self.kid = 'local:' + kid

    def wrap_key(self, key, algorithm='A256KW'):
        if algorithm == 'A256KW':
            return aes_key_wrap(self.kek, key, self.backend)
        else:
            raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)

    def unwrap_key(self, key, algorithm):
        if algorithm == 'A256KW':
            return aes_key_unwrap(self.kek, key, self.backend)
        else:
            raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)

    def get_key_wrap_algorithm(self):
        return 'A256KW'

    def get_kid(self):
        return self.kid


class KeyResolver:

    def __init__(self):
        self.keys = {}

    def put_key(self, key):
        self.keys[key.get_kid()] = key

    def resolve_key(self, kid):
        return self.keys[kid]


class RSAKeyWrapper:

    def __init__(self, kid):
        self.private_key = generate_private_key(public_exponent=65537,
                                                key_size=2048,
                                                backend=default_backend())
        self.public_key = self.private_key.public_key()
        self.kid = 'local:' + kid

    def wrap_key(self, key, algorithm='RSA'):
        if algorithm == 'RSA':
            return self.public_key.encrypt(key,
                                           OAEP(
                                               mgf=MGF1(algorithm=SHA1()),
                                               algorithm=SHA1(),
                                               label=None)
                                           )
        else:
            raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)

    def unwrap_key(self, key, algorithm):
        if algorithm == 'RSA':
            return self.private_key.decrypt(key,
                                            OAEP(
                                                mgf=MGF1(algorithm=SHA1()),
                                                algorithm=SHA1(),
                                                label=None)
                                            )
        else:
            raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)

    def get_key_wrap_algorithm(self):
        return 'RSA'

    def get_kid(self):
        return self.kid


class TableEncryptionSamples():

    def __init__(self, account):
        self.account = account

    def run_all_samples(self):
        self.service = self.account.create_table_service()

        self.put_encrypted_entity_properties()
        self.put_encrypted_entity_encryption_resolver()
        self.get_encrypted_entity()
        self.get_encrypted_entity_key_encryption_key()
        self.replace_encrypted_entity()
        self.query_encrypted_entities()
        self.batch_encrypted_entities()
        self.require_encryption()
        self.alternate_key_encryption_algorithms()
        self.merge_not_supported()

    def _get_table_reference(self, prefix='table'):
        table_name = '{}{}'.format(prefix, str(uuid.uuid4()).replace('-', ''))
        return table_name

    def _create_table(self, prefix='table'):
        table_name = self._get_table_reference(prefix)
        self.service.create_table(table_name)
        return table_name

    def _create_base_entity_dict(self):
        entity = {}

        # Partition key and row key must be strings and are required
        entity['PartitionKey'] = 'pk{}'.format(str(uuid.uuid4()).replace('-', ''))
        entity['RowKey'] = 'rk{}'.format(str(uuid.uuid4()).replace('-', ''))
        return entity

    def _create_base_entity_class(self):
        # Partition key and row key must be strings and are required
        entity = Entity()
        entity['PartitionKey'] = 'pk{}'.format(str(uuid.uuid4()).replace('-', ''))
        entity['RowKey'] = 'rk{}'.format(str(uuid.uuid4()).replace('-', ''))
        return entity

    def _create_entity_for_encryption(self):
        entity = self._create_base_entity_class()
        entity['foo'] = EntityProperty(EdmType.STRING, 'bar', True)
        return entity

    def _create_query_table_encrypted(self, entity_count):
        '''
        Creates a table with the specified name and adds entities with the
        default set of values. PartitionKey is set to 'MyPartition' and RowKey
        is set to a unique counter value starting at 1 (as a string). The
        'foo' attribute is set to be encrypted.
        '''
        table_name = self._create_table(prefix='querytable')
        self.service.require_encryption = True
        entity = self._create_entity_for_encryption()
        with self.service.batch(table_name) as batch:
            for i in range(1, entity_count + 1):
                entity['RowKey'] = entity['RowKey'] + str(i)
                batch.insert_entity(entity)
        return table_name

    # A sample encryption resolver. This resolver is a simple case that will mark
    # any property named 'foo' for encryption, regardless of the partition or row
    # it is in.
    def encryption_resolver(self, pk, rk, property):
        return property == 'foo'

    def put_encrypted_entity_properties(self):
        table_name = self._create_table()

        # Can use a dict or the Entity class to encrypt entities.
        # The EntityProperty object takes an optional parameter, 'encrypt',
        # that marks the property for encryption when set to true.
        entity1 = self._create_base_entity_dict()
        entity1['foo'] = EntityProperty(EdmType.STRING, 'bar', True)

        entity2 = self._create_base_entity_class()
        entity2.foo = EntityProperty(EdmType.STRING, 'bar', True)

        entity3 = self._create_base_entity_class()
        entity3['badValue'] = EntityProperty(EdmType.INT64, 12, True)

        entity4 = self._create_base_entity_class()

        # KeyWrapper implements the key encryption key interface outlined
        # in the insert/get entity documentation.
        # Setting this property will tell these APIs to encrypt the entity.
        self.service.key_encryption_key = KeyWrapper('key1')

        self.service.insert_entity(table_name, entity1)
        self.service.insert_entity(table_name, entity2)

        # Note: The internal encryption process requires two properties, so there
        # are only 250 custom properties available when encrypting.
        # Note: str is the only type valid for encryption. Trying to encrypt other
        # properties will throw.

        self.service.delete_table(table_name)

    def put_encrypted_entity_encryption_resolver(self):
        table_name = self._create_table()
        entity = self._create_base_entity_class()
        entity['foo'] = 'bar'

        self.service.key_encryption_key = KeyWrapper('key1')

        # An encryption resolver is a function that takes in the Partition Key,
        # Row Key, and property name and returns true if the property should be
        # encrypted and false otherwise. This can be used in place of explicitly
        # setting each property to be encrypted through the EntityProperty class.
        self.service.encryption_resolver_function = self.encryption_resolver

        self.service.insert_entity(table_name, entity)

        self.service.delete_table(table_name)

    def get_encrypted_entity(self):
        table_name = self._create_table()
        entity = self._create_entity_for_encryption()

        self.service.key_encryption_key = KeyWrapper('key1')
        self.service.insert_entity(table_name, entity)

        # Entities can be decrypted by setting a key_resolver function on the service
        # without directly setting the key_encryption_key property itself. The function takes
        # in the key_id (retrieved from the encrypted entity metadata) and returns the
        # corresponding key_encryption_key.
        key_resolver = KeyResolver()
        key_resolver.put_key(self.service.key_encryption_key)
        self.service.key_resolver_function = key_resolver.resolve_key
        self.service.key_encryption_key = None

        # Decrypted entities are stored in their raw string form, regardless of whether
        # they were stored in an EntityProperty when encrypted.
        # Retrieving and decrypting an encrypted entity works regardless of the accepted
        # payload format.
        entity_full = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'],
                                              accept=TablePayloadFormat.JSON_FULL_METADATA)
        entity_none = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'],
                                              accept=TablePayloadFormat.JSON_NO_METADATA)

        # Note: Properties that are encrypted on upload but not decrypted on download due to lack
        # of an encryption policy are stored in an EntityProperty with an Edm.Binary type.
        # Note: The encryption metadata headers are preserved on the entity if
        # it is not decrypted when downloaded.
        # Note: Decrypted entities are stored in their raw string form, regardless of whether
        # they were stored in an EntityProperty when encrypted.

        self.service.key_resolver_function = None

        self.service.delete_table(table_name)

    def query_encrypted_entities(self):
        self.service.key_encryption_key = KeyWrapper('key1')
        key_resolver = KeyResolver()
        key_resolver.put_key(self.service.key_encryption_key)
        self.service.key_resolver_function = key_resolver.resolve_key
        table_name = self._create_query_table_encrypted(5)

        # Querying for entire entities will transparently decrypt retrieved entities.
        response = self.service.query_entities(table_name, num_results=5)

        # Performing a projection on a subset of properties will also implicitly
        # retrieve the encryption metadata properties when an encryption policy is set.
        response = self.service.query_entities(table_name, num_results=5,
                                               select='PartitionKey,RowKey,foo')

        self.service.delete_table(table_name)

    def batch_encrypted_entities(self):
        table_name = self._create_table()
        entity1 = self._create_entity_for_encryption()
        entity2 = self._create_entity_for_encryption()

        entity2['PartitionKey'] = entity1['PartitionKey']

        # Batches will encrypt the entities at the time of inserting into the batch, not
        # committing the batch to the service, so the encryption policy must be
        # passed in at the time of batch creation.
        kek = KeyWrapper('key1')
        batch = TableBatch(require_encryption=True, key_encryption_key=kek)
        batch.insert_entity(entity1)
        batch.insert_entity(entity2)
        self.service.commit_batch(table_name, batch)

        # When using the batch as a context manager, the tableservice object will
        # automatically apply its encryption policy to the batch.
        entity3 = self._create_entity_for_encryption()
        entity4 = self._create_entity_for_encryption()

        entity4['PartitionKey'] = entity3['PartitionKey']

        self.service.key_encryption_key = KeyWrapper('key1')
        with self.service.batch(table_name) as batch:
            batch.insert_entity(entity3)
            batch.insert_entity(entity4)

        # Note that batches follow all the same client-side-encryption behavior as
        # the corresponding individual table operations.

        self.service.delete_table(table_name)

    def require_encryption(self):
        self.service.key_encryption_key = None
        self.service.key_resolver_function = None
        self.service.require_encryption = False

        table_name = self._create_table()
        entity_unencrypted = self._create_base_entity_class()
        entity_unencrypted['foo'] = 'bar'
        self.service.insert_entity(table_name, entity_unencrypted)

        # If the require_encryption flag is set, the service object will throw if there
        # is no encryption policy set on upload.
        self.service.key_encryption_key = None
        self.service.require_encryption = True
        try:
            self.service.insert_entity(table_name, entity_unencrypted)
            raise Exception
        except ValueError:
            pass

        # If the require_encryption flag is set, the service object will throw if there
        # is no encryption policy set on download.
        kek = KeyWrapper('key1')
        self.service.key_encryption_key = kek
        key_resolver = KeyResolver()
        key_resolver.put_key(self.service.key_encryption_key)
        self.service.key_resolver_function = key_resolver.resolve_key

        entity_encrypted = self._create_entity_for_encryption()
        self.service.insert_entity(table_name, entity_encrypted)

        self.service.key_encryption_key = None
        self.service.key_resolver_function = None
        try:
            self.service.get_entity(table_name, entity_encrypted['PartitionKey'],
                                    entity_encrypted['RowKey'])
            raise Exception
        except ValueError:
            pass

        # If the require_encryption flag is set, but the retrieved object is not encrypted,
        # the service object will throw.
        self.service.key_resolver_function = key_resolver.resolve_key
        try:
            self.service.get_entity(table_name, entity_unencrypted['PartitionKey'],
                                    entity_unencrypted['RowKey'])
            raise Exception
        except AzureException:
            pass

        self.service.delete_table(table_name)

    def alternate_key_encryption_algorithms(self):
        table_name = self._create_table()
        entity = self._create_entity_for_encryption()

        # The key wrapping algorithm used by the key_encryption_key is entirely
        # up to the choice of the user. For instance, RSA may be used.
        self.service.key_encryption_key = RSAKeyWrapper('key2')
        self.service.insert_entity(table_name, entity)

        key_resolver = KeyResolver()
        key_resolver.put_key(self.service.key_encryption_key)
        self.service.key_resolver_function = key_resolver.resolve_key
        entity = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'])

        self.service.delete_table(table_name)

    def merge_not_supported(self):
        table_name = self._create_table()
        entity = self._create_entity_for_encryption()

        self.service.key_encryption_key = KeyWrapper('key1')
        self.service.insert_entity(table_name, entity)

        # Merging encrypted entities is not supported. Calling merge with
        # an encryption policy set will cause merge entities to fail.
        # If the require_encryption flag is set, merge_entities will fail.
        # Note that insert_or_merge exhibits the same encryption behavior.
        self.service.require_encryption = True
        try:
            self.service.merge_entity(table_name, entity)
            raise Exception
        except ValueError:
            pass

        self.service.require_encryption = False
        try:
            self.service.merge_entity(table_name, entity)
            raise Exception
        except ValueError:
            pass

        self.service.delete_table(table_name)

    def get_encrypted_entity_key_encryption_key(self):
        table_name = self._create_table()
        entity = self._create_entity_for_encryption()

        kek = KeyWrapper('key1')
        self.service.key_encryption_key = kek
        self.service.insert_entity(table_name, entity)

        # If the key_encryption_key property is set, the tableservice object will
        # try to decrypt entities using that key. If both the key_resolver and key_encryption_key
        # properties are set, the result of the key_resolver will take precedence and the decryption
        # will fail if that key is not successful.
        entity = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'])

        self.service.delete_table(table_name)

    def replace_encrypted_entity(self):
        table_name = self._create_table()
        entity = self._create_entity_for_encryption()

        self.service.key_encryption_key = KeyWrapper('key1')
        self.service.insert_entity(table_name, entity)

        # An entity, encrypted or decrypted, may be replaced by an encrypted entity.
        entity['foo'].value = 'updated'
        self.service.update_entity(table_name, entity)

        self.service.delete_table(table_name)


@@ -0,0 +1,173 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import time
import uuid
from datetime import datetime, timedelta
from azure.cosmosdb.common import (
AccessPolicy,
ResourceTypes,
AccountPermissions,
)
from azure.cosmosdb.table import (
TableService,
TablePermissions,
)
class TableSasSamples():

    def __init__(self, account):
        self.account = account

    def run_all_samples(self):
        self.service = self.account.create_table_service()

        self.table_sas()
        self.account_sas()
        self.table_acl()
        self.sas_with_signed_identifiers()

    def _create_table(self, prefix='table'):
        table_name = '{}{}'.format(prefix, str(uuid.uuid4()).replace('-', ''))
        self.service.create_table(table_name)
        return table_name

    def table_sas(self):
        table_name = self._create_table()
        entity = {
            'PartitionKey': 'test',
            'RowKey': 'test1',
            'text': 'hello world',
        }
        self.service.insert_entity(table_name, entity)

        # Access only to the entities in the given table
        # Query permissions to access entities
        # Expires in an hour
        token = self.service.generate_table_shared_access_signature(
            table_name,
            TablePermissions.QUERY,
            datetime.utcnow() + timedelta(hours=1),
        )

        # Create a service and use the SAS
        sas_service = TableService(
            account_name=self.account.account_name,
            sas_token=token,
        )

        entities = sas_service.query_entities(table_name)
        for entity in entities:
            print(entity.text)  # hello world

        self.service.delete_table(table_name)

    def account_sas(self):
        table_name = self._create_table()
        entity = {
            'PartitionKey': 'test',
            'RowKey': 'test1',
            'text': 'hello world',
        }
        self.service.insert_entity(table_name, entity)

        # Access to all entities in all the tables
        # Expires in an hour
        token = self.service.generate_account_shared_access_signature(
            ResourceTypes.OBJECT,
            AccountPermissions.READ,
            datetime.utcnow() + timedelta(hours=1),
        )

        # Create a service and use the SAS
        sas_service = TableService(
            account_name=self.account.account_name,
            sas_token=token,
        )

        entities = list(sas_service.query_entities(table_name))
        for entity in entities:
            print(entity.text)  # hello world

        self.service.delete_table(table_name)

    def table_acl(self):
        table_name = self._create_table()

        # Basic
        access_policy = AccessPolicy(permission=TablePermissions.QUERY,
                                     expiry=datetime.utcnow() + timedelta(hours=1))
        identifiers = {'id': access_policy}
        self.service.set_table_acl(table_name, identifiers)

        # Wait 30 seconds for acl to propagate
        time.sleep(30)
        acl = self.service.get_table_acl(table_name)  # {id: AccessPolicy()}

        # Replaces values, does not merge
        access_policy = AccessPolicy(permission=TablePermissions.QUERY,
                                     expiry=datetime.utcnow() + timedelta(hours=1))
        identifiers = {'id2': access_policy}
        self.service.set_table_acl(table_name, identifiers)

        # Wait 30 seconds for acl to propagate
        time.sleep(30)
        acl = self.service.get_table_acl(table_name)  # {id2: AccessPolicy()}

        # Clear
        self.service.set_table_acl(table_name)

        # Wait 30 seconds for acl to propagate
        time.sleep(30)
        acl = self.service.get_table_acl(table_name)  # {}

        self.service.delete_table(table_name)

    def sas_with_signed_identifiers(self):
        table_name = self._create_table()
        entity = {
            'PartitionKey': 'test',
            'RowKey': 'test1',
            'text': 'hello world',
        }
        self.service.insert_entity(table_name, entity)

        # Set access policy on table
        access_policy = AccessPolicy(permission=TablePermissions.QUERY,
                                     expiry=datetime.utcnow() + timedelta(hours=1))
        identifiers = {'id': access_policy}
        acl = self.service.set_table_acl(table_name, identifiers)

        # Wait 30 seconds for acl to propagate
        time.sleep(30)

        # Indicates to use the access policy set on the table
        token = self.service.generate_table_shared_access_signature(
            table_name,
            id='id'
        )

        # Create a service and use the SAS
        sas_service = TableService(
            account_name=self.account.account_name,
            sas_token=token,
        )

        entities = list(sas_service.query_entities(table_name))
        for entity in entities:
            print(entity.text)  # hello world

        self.service.delete_table(table_name)


@@ -0,0 +1,484 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import time
import uuid
from datetime import datetime
from azure.common import (
AzureHttpError,
AzureConflictHttpError,
AzureMissingResourceHttpError,
)
from azure.cosmosdb.common import (
Logging,
Metrics,
CorsRule,
)
from azure.cosmosdb.table import (
Entity,
TableBatch,
EdmType,
EntityProperty,
TablePayloadFormat,
)
class TableSamples():
def __init__(self, account):
self.account = account
def run_all_samples(self):
self.service = self.account.create_table_service()
self.create_table()
self.delete_table()
self.exists()
self.query_entities()
self.batch()
self.create_entity_class()
self.create_entity_dict()
self.insert_entity()
self.get_entity()
self.update_entity()
self.merge_entity()
self.insert_or_merge_entity()
self.insert_or_replace_entity()
self.delete_entity()
self.list_tables()
# This method contains sleeps, so don't run by default
# self.service_properties()
def _get_table_reference(self, prefix='table'):
table_name = '{}{}'.format(prefix, str(uuid.uuid4()).replace('-', ''))
return table_name
def _create_table(self, prefix='table'):
table_name = self._get_table_reference(prefix)
self.service.create_table(table_name)
return table_name
def create_table(self):
# Basic
table_name1 = self._get_table_reference()
created = self.service.create_table(table_name1) # True
# Fail on exist
table_name2 = self._get_table_reference()
created = self.service.create_table(table_name2) # True
created = self.service.create_table(table_name2) # False
try:
self.service.create_table(table_name2, fail_on_exist=True)
except AzureConflictHttpError:
pass
self.service.delete_table(table_name1)
self.service.delete_table(table_name2)
def delete_table(self):
# Basic
table_name = self._create_table()
deleted = self.service.delete_table(table_name) # True
# Fail not exist
table_name = self._get_table_reference()
deleted = self.service.delete_table(table_name) # False
try:
self.service.delete_table(table_name, fail_not_exist=True)
except AzureMissingResourceHttpError:
pass
def exists(self):
table_name = self._get_table_reference()
# Does not exist
exists = self.service.exists(table_name) # False
# Exists
self.service.create_table(table_name)
exists = self.service.exists(table_name) # True
self.service.delete_table(table_name)
def query_entities(self):
table_name = self._create_table()
entities = []
for i in range(1, 5):
entity = {'PartitionKey': 'John',
'RowKey': 'Doe the {}'.format(i),
'deceased': False,
'birthday': datetime(1991, 10, i)}
self.service.insert_entity(table_name, entity)
entities.append(entity)
# Basic
# Can access properties as dict or like an object
queried_entities = list(self.service.query_entities(table_name))
for entity in queried_entities:
print(entity.RowKey) # All 4 John Doe characters
# Num results
queried_entities = list(self.service.query_entities(table_name, num_results=2))
for entity in queried_entities:
print(entity.RowKey) # Doe the 1, Doe the 2
# Filter
filter = "RowKey eq '{}'".format(entities[1]['RowKey'])
queried_entities = list(self.service.query_entities(table_name, filter=filter))
for entity in queried_entities:
print(entity.RowKey) # Doe the 2
# Select
# Get only the column(s) specified
queried_entities = list(self.service.query_entities(table_name, select='birthday'))
for entity in queried_entities:
print(entity.birthday) # All 4 John Doe character's birthdays
queried_entities[0].get('RowKey') # None
# Accept
# The default format includes all necessary type info. JSON_NO_METADATA returns no type info, though some types can be inferred client side.
# If type cannot be inferred, the value is simply returned as a string.
queried_entities = list(self.service.query_entities(table_name,
accept=TablePayloadFormat.JSON_NO_METADATA)) # entities w/ all properties, missing type
queried_entities[0].birthday # (string)
queried_entities[0].deceased # (boolean)
# Accept w/ Resolver
# A resolver can be specified to give type info client side if JSON_NO_METADATA is used.
def resolver(pk, rk, name, value, type):
if name == 'birthday':
return EdmType.DATETIME
queried_entities = list(self.service.query_entities(table_name,
accept=TablePayloadFormat.JSON_NO_METADATA,
property_resolver=resolver)) # entities w/ all properties, missing types resolved client side
queried_entities[0].birthday # (datetime)
queried_entities[0].deceased # (boolean)
self.service.delete_table(table_name)
def batch(self):
table_name = self._create_table()
entity = Entity()
entity.PartitionKey = 'batch'
entity.test = True
# All operations in the same batch must have the same partition key but different row keys
# Batches can hold from 1 to 100 entities
# Batches are atomic. All operations complete simultaneously. If one operation fails, they all fail.
# Insert, update, merge, insert or merge, insert or replace, and delete entity operations are supported
# Context manager style
with self.service.batch(table_name) as batch:
for i in range(0, 5):
entity.RowKey = 'context_{}'.format(i)
batch.insert_entity(entity)
# Commit style
batch = TableBatch()
for i in range(0, 5):
entity.RowKey = 'commit_{}'.format(i)
batch.insert_entity(entity)
self.service.commit_batch(table_name, batch)
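# The supported operations can also be mixed in a single batch; a sketch with
# hypothetical entities sharing the 'batch' partition key:
#   batch = TableBatch()
#   batch.insert_entity(new_entity)
#   batch.merge_entity(changed_entity)
#   batch.delete_entity('batch', 'commit_0')
#   self.service.commit_batch(table_name, batch)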
self.service.delete_table(table_name)
def create_entity_class(self):
'''
Creates a class-based entity with fixed values, using all of the supported data types.
'''
entity = Entity()
# Partition key and row key must be strings and are required
entity.PartitionKey = 'pk{}'.format(str(uuid.uuid4()).replace('-', ''))
entity.RowKey = 'rk{}'.format(str(uuid.uuid4()).replace('-', ''))
# Some basic types are inferred
entity.age = 39 # EdmType.INT64
entity.large = 933311100 # EdmType.INT64
entity.sex = 'male' # EdmType.STRING
entity.married = True # EdmType.BOOLEAN
entity.ratio = 3.1 # EdmType.DOUBLE
entity.birthday = datetime(1970, 10, 4) # EdmType.DATETIME
# Binary, Int32 and GUID must be explicitly typed
entity.binary = EntityProperty(EdmType.BINARY, b'xyz')
entity.other = EntityProperty(EdmType.INT32, 20)
entity.clsid = EntityProperty(EdmType.GUID, 'c9da6455-213d-42c9-9a79-3e9149a57833')
return entity
def create_entity_dict(self):
'''
Creates a dict-based entity with fixed values, using all of the supported data types.
'''
entity = {}
# Partition key and row key must be strings and are required
entity['PartitionKey'] = 'pk{}'.format(str(uuid.uuid4()).replace('-', ''))
entity['RowKey'] = 'rk{}'.format(str(uuid.uuid4()).replace('-', ''))
# Some basic types are inferred
entity['age'] = 39 # EdmType.INT64
entity['large'] = 933311100 # EdmType.INT64
entity['sex'] = 'male' # EdmType.STRING
entity['married'] = True # EdmType.BOOLEAN
entity['ratio'] = 3.1 # EdmType.DOUBLE
entity['birthday'] = datetime(1970, 10, 4) # EdmType.DATETIME
# Binary, Int32 and GUID must be explicitly typed
entity['binary'] = EntityProperty(EdmType.BINARY, b'xyz')
entity['other'] = EntityProperty(EdmType.INT32, 20)
entity['clsid'] = EntityProperty(EdmType.GUID, 'c9da6455-213d-42c9-9a79-3e9149a57833')
return entity
def insert_entity(self):
table_name = self._create_table()
# Basic w/ dict
entity = self.create_entity_dict()
etag = self.service.insert_entity(table_name, entity)
# Basic w/ class
entity = self.create_entity_class()
etag = self.service.insert_entity(table_name, entity)
self.service.delete_table(table_name)
def get_entity(self):
table_name = self._create_table()
insert_entity = self.create_entity_class()
etag = self.service.insert_entity(table_name, insert_entity)
# Basic
# Can access properties as dict or like an object
entity = self.service.get_entity(table_name, insert_entity.PartitionKey,
insert_entity.RowKey) # entity w/ all properties
entity.age # 39 (number)
entity['age'] # 39 (number)
entity.clsid.value # 'c9da6455-213d-42c9-9a79-3e9149a57833' (string)
entity.clsid.type # Edm.Guid
# Select
entity = self.service.get_entity(table_name, insert_entity.PartitionKey, insert_entity.RowKey,
select='age') # entity w/ just 'age'
entity['age'] # 39 (number)
entity.get('clsid') # None
# Accept
# The default format includes all necessary type info. JSON_NO_METADATA returns no type info, though some types can be inferred client side.
# If type cannot be inferred, the value is simply returned as a string.
entity = self.service.get_entity(table_name, insert_entity.PartitionKey, insert_entity.RowKey,
accept=TablePayloadFormat.JSON_NO_METADATA) # entity w/ all properties, missing type
entity.age # '39' (string)
entity.clsid # 'c9da6455-213d-42c9-9a79-3e9149a57833' (string)
entity.married # True (boolean)
# Accept w/ Resolver
# A resolver can be specified to give type info client side if JSON_NO_METADATA is used.
def resolver(pk, rk, name, value, type):
if name == 'large' or name == 'age':
return EdmType.INT64
if name == 'birthday':
return EdmType.DATETIME
if name == 'clsid':
return EdmType.GUID
entity = self.service.get_entity(table_name, insert_entity.PartitionKey, insert_entity.RowKey,
accept=TablePayloadFormat.JSON_NO_METADATA,
property_resolver=resolver) # entity w/ all properties, missing types resolved client side
entity.age # 39 (number)
entity.clsid.value # 'c9da6455-213d-42c9-9a79-3e9149a57833' (string)
entity.clsid.type # Edm.Guid
entity.married # True (boolean)
self.service.delete_table(table_name)
def update_entity(self):
table_name = self._create_table()
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'deceased': False,
'birthday': datetime(1991, 10, 4)}
etag = self.service.insert_entity(table_name, entity)
# Basic
# Replaces entity entirely
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'deceased': True}
etag = self.service.update_entity(table_name, entity)
received_entity = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'])
received_entity.get('deceased') # True
received_entity.get('birthday') # None
# If match
# Replaces entity entirely if etag matches
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'id': 'abc12345'}
self.service.update_entity(table_name, entity, if_match=etag) # Succeeds
try:
self.service.update_entity(table_name, entity, if_match=etag) # Throws as previous update changes etag
except AzureHttpError:
pass
self.service.delete_table(table_name)
def merge_entity(self):
table_name = self._create_table()
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'deceased': False,
'birthday': datetime(1991, 10, 4)}
etag = self.service.insert_entity(table_name, entity)
# Basic
# Merges the new properties into the existing entity; other properties are kept
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'deceased': True}
etag = self.service.merge_entity(table_name, entity)
received_entity = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'])
received_entity.get('deceased') # True
received_entity.get('birthday') # datetime(1991, 10, 4)
# If match
# Merges entity if etag matches
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'id': 'abc12345'}
self.service.merge_entity(table_name, entity, if_match=etag) # Succeeds
try:
self.service.merge_entity(table_name, entity, if_match=etag) # Throws as previous update changes etag
except AzureHttpError:
pass
self.service.delete_table(table_name)
def insert_or_merge_entity(self):
table_name = self._create_table()
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'deceased': False,
'birthday': datetime(1991, 10, 4)}
# Basic
# Inserts if entity does not already exist
etag = self.service.insert_or_merge_entity(table_name, entity)
# Merges if entity already exists
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'id': 'abc12345'}
etag = self.service.insert_or_merge_entity(table_name, entity)
received_entity = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'])
received_entity.get('id') # 'abc12345'
received_entity.get('deceased') # False
self.service.delete_table(table_name)
def insert_or_replace_entity(self):
table_name = self._create_table()
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'deceased': False,
'birthday': datetime(1991, 10, 4)}
# Basic
# Inserts if entity does not already exist
etag = self.service.insert_or_replace_entity(table_name, entity)
# Replaces if entity already exists
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'id': 'abc12345'}
etag = self.service.insert_or_replace_entity(table_name, entity)
received_entity = self.service.get_entity(table_name, entity['PartitionKey'], entity['RowKey'])
received_entity.get('id') # 'abc12345'
received_entity.get('deceased') # None
self.service.delete_table(table_name)
def delete_entity(self):
table_name = self._create_table()
entity = {'PartitionKey': 'John',
'RowKey': 'Doe'}
etag = self.service.insert_entity(table_name, entity)
# Basic
# Deletes entity
self.service.delete_entity(table_name, entity['PartitionKey'], entity['RowKey'])
# If match
# Deletes entity only if etag matches
entity = {'PartitionKey': 'John',
'RowKey': 'Doe',
'id': 'abc12345'}
etag = self.service.insert_entity(table_name, entity)
self.service.update_entity(table_name, entity, if_match=etag) # Succeeds
try:
self.service.delete_entity(table_name, entity['PartitionKey'], entity['RowKey'],
if_match=etag) # Throws as update changes etag
except AzureHttpError:
pass
self.service.delete_table(table_name)
def list_tables(self):
table_name1 = self._create_table('table1')
table_name2 = self._create_table('secondtable')
# Basic
# Commented out as this will list every table in your account
# tables = list(self.service.list_tables())
# for table in tables:
# print(table.name) # secondtable, table1, all other tables created in the self.service
# Num results
# Will return in alphabetical order.
tables = list(self.service.list_tables(num_results=2))
for table in tables:
print(table.name) # secondtable, table1, or whichever 2 tables are alphabetically first in your account
self.service.delete_table(table_name1)
self.service.delete_table(table_name2)
def service_properties(self):
# Basic
self.service.set_table_service_properties(logging=Logging(delete=True),
hour_metrics=Metrics(enabled=True, include_apis=True),
minute_metrics=Metrics(enabled=True, include_apis=False),
cors=[CorsRule(allowed_origins=['*'], allowed_methods=['GET'])])
# Wait 30 seconds for settings to propagate
time.sleep(30)
props = self.service.get_table_service_properties() # props = ServiceProperties() w/ all properties specified above
# Omitted properties will not overwrite what's already on the self.service
# Empty properties will clear
self.service.set_table_service_properties(cors=[])
# Wait 30 seconds for settings to propagate
time.sleep(30)
props = self.service.get_table_service_properties() # props = ServiceProperties() w/ CORS rules cleared


@ -0,0 +1,72 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import unittest
from azure.cosmosdb.common import CloudStorageAccount
from samples.advanced import (
AuthenticationSamples,
ClientSamples,
)
from samples.table import (
TableSasSamples,
TableEncryptionSamples,
TableSamples,
)
@unittest.skip('Skip sample tests.')
class SampleTest(unittest.TestCase):
def setUp(self):
super(SampleTest, self).setUp()
try:
from samples.config import config
except ImportError:
raise ValueError('Please specify configuration settings in config.py.')
if config.IS_EMULATED:
self.account = CloudStorageAccount(is_emulated=True)
else:
# Note that account key and sas should not both be included
account_name = config.STORAGE_ACCOUNT_NAME
account_key = config.STORAGE_ACCOUNT_KEY
sas = config.SAS
self.account = CloudStorageAccount(account_name=account_name,
account_key=account_key,
sas_token=sas)
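# A minimal samples/config.py providing the attributes read above; the values are
# placeholders, not real credentials:
#   class config:
#       IS_EMULATED = False
#       STORAGE_ACCOUNT_NAME = '<storage account name>'
#       STORAGE_ACCOUNT_KEY = '<storage account key>'
#       SAS = None  # account key and sas should not both be included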
def test_table_samples(self):
table = TableSamples(self.account)
table.run_all_samples()
def test_table_sas_samples(self):
sas = TableSasSamples(self.account)
sas.run_all_samples()
def test_authentication_samples(self):
auth = AuthenticationSamples()
auth.run_all_samples()
def test_client_samples(self):
client = ClientSamples()
client.run_all_samples()
def test_table_encryption_samples(self):
encryption = TableEncryptionSamples(self.account)
encryption.run_all_samples()
# ------------------------------------------------------------------------------
if __name__ == '__main__':
unittest.main()
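# To actually exercise the samples, remove the @unittest.skip decorator above and run
# this module, e.g. (discovery path assumed):
#   python -m unittest discover tests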


@ -0,0 +1,3 @@
[bdist_wheel]
universal=1
azure-namespace-package=azure-cosmosdb-nspkg
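# azure-namespace-package is consumed by the custom azure_bdist_wheel cmdclass wired up
# in setup.py; as we understand the convention, it ties the built wheel to the
# azure-cosmosdb namespace package.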

Просмотреть файл

@ -0,0 +1,76 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from setuptools import setup, find_packages
try:
from azure_bdist_wheel import cmdclass
except ImportError:
from distutils import log as logger
logger.warn("Wheel is not available, disabling bdist_wheel hook")
cmdclass = {}
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
import azure
try:
ver = azure.__version__
raise Exception(
'This package is incompatible with azure=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure".'
)
except AttributeError:
pass
except ImportError:
pass
setup(
name='azure-cosmosdb-table',
version='0.36.0',
description='Microsoft Azure CosmosDB Table Client Library for Python',
long_description=open('README.rst', 'r').read(),
license='Apache License 2.0',
author='Microsoft Corporation',
author_email='ascl@microsoft.com',
url='https://github.com/Azure/azure-cosmosdb-python',
classifiers=[
'Development Status :: 4 - Beta',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: Apache Software License',
],
zip_safe=False,
packages=find_packages(),
install_requires=[
'azure-common>=1.1.5',
'cryptography',
'python-dateutil',
'requests',
] + (['futures'] if sys.version_info < (3, 0) else []),
cmdclass=cmdclass
)


@ -0,0 +1 @@



@ -0,0 +1,99 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric.padding import (
OAEP,
MGF1,
)
from cryptography.hazmat.primitives.asymmetric.rsa import generate_private_key
from cryptography.hazmat.primitives.hashes import SHA1
from cryptography.hazmat.primitives.keywrap import (
aes_key_wrap,
aes_key_unwrap,
)
# Defined inline so the helpers below are self-contained; the original references this
# message constant without importing or defining it.
_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM = 'Unknown key wrap algorithm.'
class KeyWrapper:
def __init__(self, kid='local:key1'):
# Must have constant key value for recorded tests, otherwise we could use a random generator.
self.kek = b'\xbe\xa4\x11K\x9eJ\x07\xdafF\x83\xad+\xadvA C\xe8\xbc\x90\xa4\x11}G\xc3\x0f\xd4\xb4\x19m\x11'
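# For live (non-recorded) runs, a random 256-bit KEK would work just as well, e.g.:
#   import os
#   self.kek = os.urandom(32)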
self.backend = default_backend()
self.kid = kid
def wrap_key(self, key, algorithm='A256KW'):
if algorithm == 'A256KW':
return aes_key_wrap(self.kek, key, self.backend)
else:
raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)
def unwrap_key(self, key, algorithm):
if algorithm == 'A256KW':
return aes_key_unwrap(self.kek, key, self.backend)
else:
raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)
def get_key_wrap_algorithm(self):
return 'A256KW'
def get_kid(self):
return self.kid
class KeyResolver:
def __init__(self):
self.keys = {}
def put_key(self, key):
self.keys[key.get_kid()] = key
def resolve_key(self, kid):
return self.keys[kid]
class RSAKeyWrapper:
def __init__(self, kid='local:key2'):
self.private_key = generate_private_key(public_exponent=65537,
key_size=2048,
backend=default_backend())
self.public_key = self.private_key.public_key()
self.kid = kid
def wrap_key(self, key, algorithm='RSA'):
if algorithm == 'RSA':
return self.public_key.encrypt(key,
OAEP(
mgf=MGF1(algorithm=SHA1()),
algorithm=SHA1(),
label=None)
)
else:
raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)
def unwrap_key(self, key, algorithm):
if algorithm == 'RSA':
return self.private_key.decrypt(key,
OAEP(
mgf=MGF1(algorithm=SHA1()),
algorithm=SHA1(),
label=None)
)
else:
raise ValueError(_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM)
def get_key_wrap_algorithm(self):
return 'RSA'
def get_kid(self):
return self.kid
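# A minimal wiring sketch for the helpers above (assumed usage):
#   kek = KeyWrapper('local:key1')
#   resolver = KeyResolver()
#   resolver.put_key(kek)
#   wrapped = kek.wrap_key(b'0' * 32)
#   assert resolver.resolve_key('local:key1').unwrap_key(wrapped, 'A256KW') == b'0' * 32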


@ -0,0 +1,57 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tablea4ed0b50"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ebe37ae1-a294-11e7-a735-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:15 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tablea4ed0b50'')']
date: ['Tue, 26 Sep 2017 08:30:14 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tablea4ed0b50'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [d49a4587-0002-0031-01a1-367328000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ec2d57bd-a294-11e7-8d3e-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:15 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('tablea4ed0b50')
response:
body: {string: !!python/unicode '{"TableName":"tablea4ed0b50"}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:14 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [d49a458f-0002-0031-07a1-367328000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1


@ -0,0 +1,57 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "table6d7c1113"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ec800dc2-a294-11e7-a141-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:16 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''table6d7c1113'')']
date: ['Tue, 26 Sep 2017 08:30:15 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''table6d7c1113'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [fc04a134-0002-0036-62a1-361f4b000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ecc14685-a294-11e7-9915-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:16 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('table6d7c1113')
response:
body: {string: !!python/unicode '{"TableName":"table6d7c1113"}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:15 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [fc04a13c-0002-0036-68a1-361f4b000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1


@ -0,0 +1,87 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "table898d16dd"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ed147f66-a294-11e7-9cf2-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:17 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''table898d16dd'')']
date: ['Tue, 26 Sep 2017 08:30:16 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''table898d16dd'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [98eb2554-0002-0026-4fa1-36da23000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '{"TableName": "table898d16dd"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ed5715f8-a294-11e7-9c0f-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:17 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:98eb255c-0002-0026-55a1-36da23000000\nTime:2017-09-26T08:30:17.3956232Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:16 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [98eb255c-0002-0026-55a1-36da23000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ed67572e-a294-11e7-ad5f-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:17 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('table898d16dd')
response:
body: {string: !!python/unicode '{"TableName":"table898d16dd"}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:17 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [98eb2561-0002-0026-5aa1-36da23000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1


@ -0,0 +1,62 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tablef3d21ca0"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [edb999d9-a294-11e7-a103-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:18 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tablef3d21ca0'')']
date: ['Tue, 26 Sep 2017 08:30:17 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tablef3d21ca0'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [ca1b404d-0002-0017-0ba1-363b30000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '{"TableName": "tablef3d21ca0"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [edfae4a8-a294-11e7-8e56-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:18 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:ca1b4053-0002-0017-0fa1-363b30000000\nTime:2017-09-26T08:30:18.0560235Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:17 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [ca1b4053-0002-0017-0fa1-363b30000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
version: 1


@ -0,0 +1,83 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tableded1139b"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ee4e34fa-a294-11e7-b52c-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:19 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tableded1139b'')']
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tableded1139b'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [065706ac-0002-0007-71a1-36fe58000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['0']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ee8f9166-a294-11e7-8092-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:19 GMT']
x-ms-version: ['2017-04-17']
method: DELETE
uri: https://storagename.table.core.windows.net/Tables('tableded1139b')
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [065706b2-0002-0007-75a1-36fe58000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [eea06fa3-a294-11e7-94a4-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:20 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('tableded1139b')
response:
body: {string: !!python/unicode '{"odata.error":{"code":"ResourceNotFound","message":{"lang":"en-US","value":"The
specified resource does not exist.\nRequestId:065706b8-0002-0007-7ba1-36fe58000000\nTime:2017-09-26T08:30:21.1046089Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [065706b8-0002-0007-7ba1-36fe58000000]
x-ms-version: ['2017-04-17']
status: {code: 404, message: Not Found}
version: 1


@ -0,0 +1,83 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "table358619d2"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [eefd7b7a-a294-11e7-8345-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:20 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''table358619d2'')']
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''table358619d2'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [76ac8bbc-0002-003d-70a1-36e420000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['0']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ef3fd4de-a294-11e7-b079-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:21 GMT']
x-ms-version: ['2017-04-17']
method: DELETE
uri: https://storagename.table.core.windows.net/Tables('table358619d2')
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [76ac8bc1-0002-003d-73a1-36e420000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [ef5153e6-a294-11e7-9eb1-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:21 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('table358619d2')
response:
body: {string: !!python/unicode '{"odata.error":{"code":"ResourceNotFound","message":{"lang":"en-US","value":"The
specified resource does not exist.\nRequestId:76ac8bc9-0002-003d-7ba1-36e420000000\nTime:2017-09-26T08:30:21.3297298Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [76ac8bc9-0002-003d-7ba1-36e420000000]
x-ms-version: ['2017-04-17']
status: {code: 404, message: Not Found}
version: 1


@ -0,0 +1,29 @@
interactions:
- request:
body: null
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['0']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [efb06002-a294-11e7-a240-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:21 GMT']
x-ms-version: ['2017-04-17']
method: DELETE
uri: https://storagename.table.core.windows.net/Tables('table31901545')
response:
body: {string: !!python/unicode '{"odata.error":{"code":"ResourceNotFound","message":{"lang":"en-US","value":"The
specified resource does not exist.\nRequestId:d83a73a3-0002-0042-11a1-362bbb000000\nTime:2017-09-26T08:30:21.3359503Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:20 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [d83a73a3-0002-0042-11a1-362bbb000000]
x-ms-version: ['2017-04-17']
status: {code: 404, message: Not Found}
version: 1


@ -0,0 +1,29 @@
interactions:
- request:
body: null
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['0']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f04437c7-a294-11e7-b374-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:22 GMT']
x-ms-version: ['2017-04-17']
method: DELETE
uri: https://storagename.table.core.windows.net/Tables('tablea12c1b7c')
response:
body: {string: !!python/unicode '{"odata.error":{"code":"ResourceNotFound","message":{"lang":"en-US","value":"The
specified resource does not exist.\nRequestId:23f1a303-0002-0046-54a1-36a6bc000000\nTime:2017-09-26T08:30:21.4075520Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:21 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [23f1a303-0002-0046-54a1-36a6bc000000]
x-ms-version: ['2017-04-17']
status: {code: 404, message: Not Found}
version: 1


@ -0,0 +1,55 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tableb07a0bab"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f106c5fd-a294-11e7-944a-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:24 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tableb07a0bab'')']
date: ['Tue, 26 Sep 2017 08:30:22 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tableb07a0bab'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [2cf570f4-0002-0022-41a1-365724000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f149c138-a294-11e7-9f48-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:24 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/tableb07a0bab?comp=acl
response:
body: {string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><SignedIdentifiers\
\ />"}
headers:
content-type: [application/xml]
date: ['Tue, 26 Sep 2017 08:30:22 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-ms-request-id: [2cf570fc-0002-0022-47a1-365724000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1

File diff hidden because one or more lines are too long


@ -0,0 +1,174 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "listtable3f561007"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['34']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f2dd0b19-a294-11e7-8a72-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:27 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''listtable3f561007'')']
date: ['Tue, 26 Sep 2017 08:30:26 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''listtable3f561007'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [a59fa173-0002-002b-45a1-3612f7000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '{"TableName": "listtable3f561007"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['34']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f31f25c0-a294-11e7-88d5-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:27 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:a59fa17c-0002-002b-4ca1-3612f7000000\nTime:2017-09-26T08:30:27.0647836Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:26 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [a59fa17c-0002-002b-4ca1-3612f7000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: !!python/unicode '{"TableName": "listtable3f561007"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['34']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f332a419-a294-11e7-8fbc-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:27 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:a59fa183-0002-002b-52a1-3612f7000000\nTime:2017-09-26T08:30:27.1817909Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:26 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [a59fa183-0002-002b-52a1-3612f7000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: !!python/unicode '{"TableName": "listtable3f561007"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['34']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f3445570-a294-11e7-a089-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:27 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:a59fa187-0002-002b-56a1-3612f7000000\nTime:2017-09-26T08:30:27.2877980Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:26 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [a59fa187-0002-002b-56a1-3612f7000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f354ace1-a294-11e7-b34c-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:27 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables?%24top=2
response:
body: {string: !!python/unicode '{"value":[{"TableName":"CAPStable00d907f790c94492bed923b86adc98a6"},{"TableName":"CAPStable05d06c40bb8c41a4bffc8e2eddfb1419"}]}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:26 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-continuation-nexttablename: [1!80!Y2Fwc3RhYmxlMGVlZmE4NDg1ZGJlNDhkYjk4OTYxMTAyZWJhZDhlYjQBMDFkMjRmOTI5ZDljZjY2Yg--]
x-ms-request-id: [a59fa18b-0002-002b-5aa1-3612f7000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f365393d-a294-11e7-99b2-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:28 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables?%24top=2&NextTableName=1%2180%21Y2Fwc3RhYmxlMGVlZmE4NDg1ZGJlNDhkYjk4OTYxMTAyZWJhZDhlYjQBMDFkMjRmOTI5ZDljZjY2Yg--
response:
body: {string: !!python/unicode '{"value":[{"TableName":"CAPStable0eefa8485dbe48db98961102ebad8eb4"},{"TableName":"CAPStable13aeaf83f86243a78d5fb63b91d5ebf0"}]}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:26 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-continuation-nexttablename: [1!80!Y2Fwc3RhYmxlMWEwY2E4NjY4NzcwNGNiMGEyYzg3MzczMzk0YjRkM2IBMDFkMjRkMWQ5YjBjMjU3Zg--]
x-ms-request-id: [a59fa191-0002-002b-60a1-3612f7000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1
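# The two requests above show server-side paging: the first response returns
# x-ms-continuation-nexttablename, which the client echoes back via the NextTableName
# query parameter to fetch the next page. A sketch of continuing from a page marker
# (parameter names assumed from the 0.36-era API):
#   page = table_service.list_tables(num_results=2)
#   next_page = table_service.list_tables(num_results=2, marker=page.next_marker)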


@ -0,0 +1,148 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "table967e1246"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f3ef548c-a294-11e7-97bc-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:28 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''table967e1246'')']
date: ['Tue, 26 Sep 2017 08:30:28 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''table967e1246'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [cbe8ab89-0002-0039-57a1-366927000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '{"TableName": "table967e1246"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f43168ba-a294-11e7-b930-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:29 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:cbe8ab94-0002-0039-5fa1-366927000000\nTime:2017-09-26T08:30:28.7933090Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:28 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [cbe8ab94-0002-0039-5fa1-366927000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: !!python/unicode '{"TableName": "table967e1246"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f441bf57-a294-11e7-a0c8-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:29 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:cbe8ab98-0002-0039-63a1-366927000000\nTime:2017-09-26T08:30:28.9023157Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:28 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [cbe8ab98-0002-0039-63a1-366927000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: !!python/unicode '{"TableName": "table967e1246"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f4524bc0-a294-11e7-96f9-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:29 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"TableAlreadyExists","message":{"lang":"en-US","value":"The
table specified already exists.\nRequestId:cbe8ab9c-0002-0039-67a1-366927000000\nTime:2017-09-26T08:30:29.0083242Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:28 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [cbe8ab9c-0002-0039-67a1-366927000000]
x-ms-version: ['2017-04-17']
status: {code: 409, message: Conflict}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f4627be3-a294-11e7-92b8-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:29 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables?%24top=3
response:
body: {string: !!python/unicode '{"value":[{"TableName":"CAPStable00d907f790c94492bed923b86adc98a6"},{"TableName":"CAPStable05d06c40bb8c41a4bffc8e2eddfb1419"},{"TableName":"CAPStable0eefa8485dbe48db98961102ebad8eb4"}]}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:29 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-continuation-nexttablename: [1!80!Y2Fwc3RhYmxlMTNhZWFmODNmODYyNDNhNzhkNWZiNjNiOTFkNWViZjABMDFkMmQ0YzllODk1YTlhMw--]
x-ms-request-id: [cbe8aba1-0002-0039-6ca1-366927000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1

File diff hidden because one or more lines are too long


@ -0,0 +1,77 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tableb1160bb7"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f578356e-a294-11e7-a82c-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:31 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tableb1160bb7'')']
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tableb1160bb7'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [8739aa5d-0002-003b-57a1-36d79f000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Connection: [keep-alive]
Content-Length: ['0']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f5b9c94a-a294-11e7-a5a4-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:31 GMT']
x-ms-version: ['2017-04-17']
method: PUT
uri: https://storagename.table.core.windows.net/tableb1160bb7?comp=acl
response:
body: {string: !!python/unicode ''}
headers:
content-length: ['0']
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-ms-request-id: [8739aa6e-0002-003b-66a1-36d79f000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f5ca2978-a294-11e7-80e9-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:32 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/tableb1160bb7?comp=acl
response:
body: {string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><SignedIdentifiers\
\ />"}
headers:
content-type: [application/xml]
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-ms-request-id: [8739aa75-0002-003b-6da1-36d79f000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1
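
A hedged sketch of the ACL round-trip recorded above (create the table, clear its ACL, read it back), under the same namespace and placeholder-key assumptions:

from azure.cosmosdb.table.tableservice import TableService

ts = TableService(account_name='storagename', account_key='<placeholder-key>')
ts.create_table('tableb1160bb7')

# Calling set_table_acl with no signed identifiers issues the recorded
# empty-body PUT ?comp=acl, clearing any stored access policies.
ts.set_table_acl('tableb1160bb7')

# The follow-up GET ?comp=acl returns the empty <SignedIdentifiers /> document,
# which the client surfaces as an empty dict.
print(ts.get_table_acl('tableb1160bb7'))  # {}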

View File

@@ -0,0 +1,32 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "table6f17111b"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f61cb59c-a294-11e7-9d34-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:32 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''table6f17111b'')']
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''table6f17111b'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [1e0623eb-0002-000d-63a1-365aef000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
version: 1
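
The single interaction above is the plain create-table success path; a minimal sketch under the same namespace assumption:

from azure.cosmosdb.table.tableservice import TableService

ts = TableService(account_name='storagename', account_key='<placeholder-key>')

# The recorded 204 No Content (with Prefer: return-no-content) is the success
# path; create_table returns True.
print(ts.create_table('table6f17111b'))  # True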

View File

@@ -0,0 +1,78 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tableb9bd17bb"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f6a101e3-a294-11e7-82e0-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:33 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tableb9bd17bb'')']
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tableb9bd17bb'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [9a90073a-0002-0030-57a1-362cf4000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '<?xml version=''1.0'' encoding=''utf-8''?>
<SignedIdentifiers><SignedIdentifier><Id>empty</Id><AccessPolicy /></SignedIdentifier></SignedIdentifiers>'
headers:
Connection: [keep-alive]
Content-Length: ['145']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f6e1af51-a294-11e7-a899-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:33 GMT']
x-ms-version: ['2017-04-17']
method: PUT
uri: https://storagename.table.core.windows.net/tableb9bd17bb?comp=acl
response:
body: {string: !!python/unicode ''}
headers:
content-length: ['0']
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-ms-request-id: [9a900745-0002-0030-60a1-362cf4000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f6f1edd4-a294-11e7-b1ea-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:33 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/tableb9bd17bb?comp=acl
response:
body: {string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><SignedIdentifiers><SignedIdentifier><Id>empty</Id></SignedIdentifier></SignedIdentifiers>"}
headers:
content-type: [application/xml]
date: ['Tue, 26 Sep 2017 08:30:32 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-ms-request-id: [9a90074f-0002-0030-6aa1-362cf4000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1
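
A sketch of the recording above, which stores a signed identifier whose access policy is empty; the AccessPolicy import path is an assumption (it ships with the storage common package this library builds on):

from azure.cosmosdb.table.tableservice import TableService
from azure.storage.common.models import AccessPolicy  # import path assumed

ts = TableService(account_name='storagename', account_key='<placeholder-key>')
ts.create_table('tableb9bd17bb')

# An AccessPolicy with no start/expiry/permission serializes to the recorded
# <SignedIdentifier><Id>empty</Id><AccessPolicy /></SignedIdentifier> body.
ts.set_table_acl('tableb9bd17bb', {'empty': AccessPolicy()})

acl = ts.get_table_acl('tableb9bd17bb')
print(list(acl.keys()))  # ['empty']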

View File

@@ -0,0 +1,79 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tabled1eb182e"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f7441c8c-a294-11e7-bd31-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:34 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tabled1eb182e'')']
date: ['Tue, 26 Sep 2017 08:30:33 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tabled1eb182e'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [1d3f39fb-0002-002a-21a1-364d2b000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '<?xml version=''1.0'' encoding=''utf-8''?>
<SignedIdentifiers />'
headers:
Connection: [keep-alive]
Content-Length: ['60']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f7869f57-a294-11e7-99f3-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:34 GMT']
x-ms-version: ['2017-04-17']
method: PUT
uri: https://storagename.table.core.windows.net/tabled1eb182e?comp=acl
response:
body: {string: !!python/unicode ''}
headers:
content-length: ['0']
date: ['Tue, 26 Sep 2017 08:30:33 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-ms-request-id: [1d3f3a04-0002-002a-28a1-364d2b000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f798fe80-a294-11e7-a5c4-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:35 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/tabled1eb182e?comp=acl
response:
body: {string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><SignedIdentifiers\
\ />"}
headers:
content-type: [application/xml]
date: ['Tue, 26 Sep 2017 08:30:33 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-ms-request-id: [1d3f3a0c-0002-002a-30a1-364d2b000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1
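
This recording differs from the earlier ACL cassette in that the PUT carries an explicit, bare <SignedIdentifiers /> body rather than no body at all; a sketch of the distinction, same assumptions as above:

from azure.cosmosdb.table.tableservice import TableService

ts = TableService(account_name='storagename', account_key='<placeholder-key>')
ts.create_table('tabled1eb182e')

# Passing an empty dict (rather than None) still sends a request body: the
# bare <SignedIdentifiers /> document recorded above.
ts.set_table_acl('tabled1eb182e', {})
print(ts.get_table_acl('tabled1eb182e'))  # {}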

View File

@@ -0,0 +1,78 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "table45dd15a0"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f7ec050a-a294-11e7-86ec-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:35 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''table45dd15a0'')']
date: ['Tue, 26 Sep 2017 08:30:35 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''table45dd15a0'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [6c556c9f-0002-0010-6ca1-365753000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: !!python/unicode '<?xml version=''1.0'' encoding=''utf-8''?>
<SignedIdentifiers><SignedIdentifier><Id>testid</Id><AccessPolicy><Start>2011-10-11</Start><Expiry>2011-10-12</Expiry><Permission>r</Permission></AccessPolicy></SignedIdentifier></SignedIdentifiers>'
headers:
Connection: [keep-alive]
Content-Length: ['237']
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f82e5523-a294-11e7-9dbd-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:36 GMT']
x-ms-version: ['2017-04-17']
method: PUT
uri: https://storagename.table.core.windows.net/table45dd15a0?comp=acl
response:
body: {string: !!python/unicode ''}
headers:
content-length: ['0']
date: ['Tue, 26 Sep 2017 08:30:35 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-ms-request-id: [6c556ca6-0002-0010-71a1-365753000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f83f5b7d-a294-11e7-8d47-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:36 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/table45dd15a0?comp=acl
response:
body: {string: "\uFEFF<?xml version=\"1.0\" encoding=\"utf-8\"?><SignedIdentifiers><SignedIdentifier><Id>testid</Id><AccessPolicy><Start>2011-10-11T00:00:00.0000000Z</Start><Expiry>2011-10-12T00:00:00.0000000Z</Expiry><Permission>r</Permission></AccessPolicy></SignedIdentifier></SignedIdentifiers>"}
headers:
content-type: [application/xml]
date: ['Tue, 26 Sep 2017 08:30:36 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-ms-request-id: [6c556cac-0002-0010-77a1-365753000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1
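
A sketch of the recorded policy round-trip; note how the service echoes the date-only Start/Expiry values back as full UTC timestamps. Import paths are assumptions:

from azure.cosmosdb.table.tableservice import TableService
from azure.storage.common.models import AccessPolicy  # import path assumed

ts = TableService(account_name='storagename', account_key='<placeholder-key>')
ts.create_table('table45dd15a0')

# 'r' is the query permission; date-only strings are accepted on write, and
# the GET above returns them expanded to 2011-10-11T00:00:00.0000000Z form.
policy = AccessPolicy(permission='r', start='2011-10-11', expiry='2011-10-12')
ts.set_table_acl('table45dd15a0', {'testid': policy})

acl = ts.get_table_acl('table45dd15a0')
print(acl['testid'].permission)  # 'r'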

View File

@@ -0,0 +1,57 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "tablea54a0b7c"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['30']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f89331fd-a294-11e7-a6a0-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:36 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode ''}
headers:
cache-control: [no-cache]
content-length: ['0']
dataserviceid: ['https://storagename.table.core.windows.net/Tables(''tablea54a0b7c'')']
date: ['Tue, 26 Sep 2017 08:30:35 GMT']
location: ['https://storagename.table.core.windows.net/Tables(''tablea54a0b7c'')']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
x-content-type-options: [nosniff]
x-ms-request-id: [6376f4a9-0002-0047-6ea1-36f960000000]
x-ms-version: ['2017-04-17']
status: {code: 204, message: No Content}
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f8d5d61c-a294-11e7-ae5c-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:37 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('tablea54a0b7c')
response:
body: {string: !!python/unicode '{"TableName":"tablea54a0b7c"}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:35 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [6376f4b3-0002-0047-76a1-36f960000000]
x-ms-version: ['2017-04-17']
status: {code: 200, message: OK}
version: 1
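
A minimal sketch of the positive existence check recorded above, same namespace assumption:

from azure.cosmosdb.table.tableservice import TableService

ts = TableService(account_name='storagename', account_key='<placeholder-key>')
ts.create_table('tablea54a0b7c')

# exists() issues the recorded GET Tables('tablea54a0b7c') and maps the
# 200 OK response to True.
print(ts.exists('tablea54a0b7c'))  # True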

View File

@@ -0,0 +1,28 @@
interactions:
- request:
body: null
headers:
Accept: [application/json;odata=nometadata]
Connection: [keep-alive]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f928a8cc-a294-11e7-a83e-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:37 GMT']
x-ms-version: ['2017-04-17']
method: GET
uri: https://storagename.table.core.windows.net/Tables('tabled7260d2c')
response:
body: {string: !!python/unicode '{"odata.error":{"code":"ResourceNotFound","message":{"lang":"en-US","value":"The
specified resource does not exist.\nRequestId:e140a7fb-0002-0037-70a1-364097000000\nTime:2017-09-26T08:30:38.0006793Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=nometadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:37 GMT']
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [e140a7fb-0002-0037-70a1-364097000000]
x-ms-version: ['2017-04-17']
status: {code: 404, message: Not Found}
version: 1
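
The lone 404 above is the negative existence check; sketched under the same assumptions:

from azure.cosmosdb.table.tableservice import TableService

ts = TableService(account_name='storagename', account_key='<placeholder-key>')

# The recorded 404 ResourceNotFound is caught inside the client, so exists()
# returns False rather than raising.
print(ts.exists('tabled7260d2c'))  # False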

View File

@@ -0,0 +1,32 @@
interactions:
- request:
body: !!python/unicode '{"TableName": "\u554a\u9f44\u4e02\u72db\u72dc"}'
headers:
Accept: [application/json;odata=minimalmetadata]
Connection: [keep-alive]
Content-Length: ['47']
Content-Type: [application/json]
DataServiceVersion: [3.0;NetFx]
MaxDataServiceVersion: ['3.0']
Prefer: [return-no-content]
User-Agent: [Azure-Storage/0.36.0 (Python CPython 2.7.10; Darwin 16.7.0)]
x-ms-client-request-id: [f9b18a21-a294-11e7-940a-b8e8564491f6]
x-ms-date: ['Tue, 26 Sep 2017 08:30:38 GMT']
x-ms-version: ['2017-04-17']
method: POST
uri: https://storagename.table.core.windows.net/Tables
response:
body: {string: !!python/unicode '{"odata.error":{"code":"InvalidResourceName","message":{"lang":"en-US","value":"The
specifed resource name contains invalid characters.\nRequestId:8eabec64-0002-0043-4da1-367467000000\nTime:2017-09-26T08:30:39.5237842Z"}}}'}
headers:
cache-control: [no-cache]
content-type: [application/json;odata=minimalmetadata;streaming=true;charset=utf-8]
date: ['Tue, 26 Sep 2017 08:30:38 GMT']
preference-applied: [return-no-content]
server: [Windows-Azure-Table/1.0 Microsoft-HTTPAPI/2.0]
transfer-encoding: [chunked]
x-content-type-options: [nosniff]
x-ms-request-id: [8eabec64-0002-0043-4da1-367467000000]
x-ms-version: ['2017-04-17']
status: {code: 400, message: Bad Request}
version: 1
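
A sketch of the rejected create; the name pattern in the comment is the documented Table service naming rule, and the import paths are assumptions:

from azure.common import AzureHttpError
from azure.cosmosdb.table.tableservice import TableService

ts = TableService(account_name='storagename', account_key='<placeholder-key>')

# Table names must match ^[A-Za-z][A-Za-z0-9]{2,62}$, so the non-ASCII name is
# rejected server-side with 400 InvalidResourceName, which create_table raises.
try:
    ts.create_table(u'\u554a\u9f44\u4e02\u72db\u72dc')
except AzureHttpError as error:
    print(error.status_code)  # 400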

Some files were not shown because too many files changed in this diff