This commit is contained in:
zezha-msft 2017-07-24 15:28:38 -07:00
Parent dbdec015b5
Commit 4a1c143f12
180 changed files: 6827 additions and 3153 deletions

View file

@ -2,6 +2,20 @@
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.
**Note: This changelog is deprecated starting with version XX.XX.XX; please refer to the ChangeLog.md in each package for future change logs.**
## Version XX.XX.XX:
### All:
- The library has been split into 4 different packages:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
- The package `azure-storage` is now deprecated.
- The classes that were directly under azure.storage, not under azure.storage.* (blob, file, queue, table), are now under azure.storage.common.
- Example: azure.storage.retry becomes azure.storage.common.retry, as in the import sketch below
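A sketch of the corresponding import change; `ExponentialRetry` is used here only as an illustrative class from the retry module:

```python
# before: monolithic azure-storage (version <= 0.36.0)
from azure.storage.retry import ExponentialRetry

# after: shared modules move to the azure-storage-common package
from azure.storage.common.retry import ExponentialRetry
```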
## Version 0.34.0:
### All:

View file

@ -2,11 +2,21 @@
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.
## Version XX.XX.XX:
**Note: This changelog is deprecated starting with version XX.XX.XX; please refer to the ChangeLog.md in each package for future change logs.**
## Version XX.XX.XX:
### All:
- Added logging to the library; the logger name is 'azure.storage'. Users must add handlers to the logger to output logs (a minimal sketch follows this list).
- Secondary endpoints may now be specified in connection strings, provided the corresponding primary endpoint is specified. See the connection string documentation for details.
- The library has been split into 4 different packages:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
- The package `azure-storage` is now deprecated.
- The classes that were directly under azure.storage, not under azure.storage.* (blob, file, queue, table), are now under azure.storage.common.
- Example: azure.storage.retry becomes azure.storage.common.retry
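A minimal sketch of enabling the new logging, using only the standard library `logging` module (the handler choice here is illustrative):

```python
import logging

# the library logs under the 'azure.storage' logger; attach a handler to see output
logger = logging.getLogger('azure.storage')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())
```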
## Version 0.36.0:

View file

@ -35,6 +35,20 @@ usage samples in the samples directory, and the ChangeLog and BreakingChanges.
If you are encountering `problems`_ installing azure-storage on Azure Web Apps,
`upgrading pip`_ might help.
**IMPORTANT**: If you have an earlier version of the azure-storage package
(version <= 0.36.0), you should uninstall it before installing the new packages.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure-storage==0.36.0 (or any earlier version), uninstall it first:
.. code:: shell
pip uninstall azure-storage
Features
========
@ -71,13 +85,25 @@ Getting Started
Download
--------
The Azure Storage SDK for Python is composed of 4 packages, each corresponding to a different service:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
Note: up to and including version 0.36.0, there was a single package (azure-storage) containing all services.
It is no longer supported; users should instead install the 4 packages listed above individually, as needed.
Option 1: Via PyPI
~~~~~~~~~~~~~~~~~~
To install via the Python Package Index (PyPI), type:
::
pip install azure-storage
pip install azure-storage-blob
pip install azure-storage-file
pip install azure-storage-queue
pip install azure-storage-table
Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~
@ -87,9 +113,12 @@ To get the source code of the SDK via git just type:
::
git clone git://github.com/Azure/azure-storage-python.git
cd ./azure-storage-python
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~
@ -97,13 +126,16 @@ Download a zip of the code via GitHub or PyPi. Then, type:
::
cd ./azure-storage-python
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, or 3.5.
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage

View file

@ -0,0 +1,3 @@
# Breaking Changes azure-storage-blob
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.

View file

@ -0,0 +1,3 @@
# Change Log azure-storage-blob
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.

View file

@ -1,3 +1,2 @@
include *.rst
exclude azure/__init__.py
include azure_bdist_wheel.py

View file

@ -0,0 +1,183 @@
Microsoft Azure Storage SDK for Python
======================================
This project provides a client library in Python that makes it easy to
consume Microsoft Azure Storage services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ Page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.
Compatibility
=============
**IMPORTANT**: If you have an earlier version of the azure package
(version < 1.0), you should uninstall it before installing this package.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure==0.11.0 (or any version below 1.0), uninstall it first and then reinstall it:
.. code:: shell
pip uninstall azure
pip install azure
If you are upgrading from a version older than 0.30.0, see the upgrade doc, the
usage samples in the samples directory, and the ChangeLog and BreakingChanges.
**IMPORTANT**: If you have an earlier version of the azure-storage package
(version <= 0.36.0), you should uninstall it before installing the new packages.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure-storage==0.36.0 (or any earlier version), uninstall it first:
.. code:: shell
pip uninstall azure-storage
Features
========
- Blob
- Create/Read/Update/Delete Containers
- Create/Read/Update/Delete Blobs
- Advanced Blob Operations
- Queue
- Create/Delete Queues
- Insert/Peek Queue Messages
- Advanced Queue Operations
- Table
- Create/Read/Update/Delete Tables
- Create/Read/Update/Delete Entities
- Batch operations
- Advanced Table Operations
- Files
- Create/Update/Delete Shares
- Create/Update/Delete Directories
- Create/Read/Update/Delete Files
- Advanced File Operations
Getting Started
===============
Download
--------
The Azure Storage SDK for Python is composed of 4 packages, each corresponding to a different service:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
Note: up to and including version 0.36.0, there was a single package (azure-storage) containing all services. It is no longer supported; install the 4 packages above individually, as needed.
Option 1: Via PyPI
~~~~~~~~~~~~~~~~~~
To install via the Python Package Index (PyPI), type:
::
pip install azure-storage-blob
pip install azure-storage-file
pip install azure-storage-queue
pip install azure-storage-table
Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~
To get the source code of the SDK via git just type:
::
git clone git://github.com/Azure/azure-storage-python.git
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~
Download a zip of the code via GitHub or PyPI. Then, type:
::
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage
-----
To use this SDK to call Microsoft Azure storage services, you need to
first `create an account`_.
Code Sample
-----------
See the samples directory for blob, queue, table, and file usage samples.
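As a quick illustration, a minimal sketch of uploading a text blob; the account name, key, and container below are placeholders:

.. code:: python

    from azure.storage.blob import BlockBlobService

    service = BlockBlobService(account_name='myaccount', account_key='mykey')
    service.create_container('mycontainer')
    service.create_blob_from_text('mycontainer', 'myblob.txt', 'Hello, Azure Storage!')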
Need Help?
==========
Be sure to check out the Microsoft Azure `Developer Forums on MSDN`_ or
the `Developer Forums on Stack Overflow`_ if you have trouble with the
provided code.
Contribute Code or Provide Feedback
===================================
If you would like to become an active contributor to this project, please
follow the instructions provided in `Azure Projects Contribution
Guidelines`_. You can find more details for contributing in the `CONTRIBUTING.md doc`_.
If you encounter any bugs with the library, please file an issue in the
`Issues`_ section of the project.
Learn More
==========
- `Python Developer Center`_
- `Azure Storage Service`_
- `Azure Storage Team Blog`_
- `API Reference`_
.. _Python Developer Center: http://azure.microsoft.com/en-us/develop/python/
.. _API Reference: https://azure-storage.readthedocs.io/en/latest/
.. _here: https://github.com/Azure/azure-storage-python/archive/master.zip
.. _create an account: https://account.windowsazure.com/signup
.. _Developer Forums on MSDN: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=windowsazuredata
.. _Developer Forums on Stack Overflow: http://stackoverflow.com/questions/tagged/azure+windows-azure-storage
.. _Azure Projects Contribution Guidelines: http://azure.github.io/guidelines.html
.. _Issues: https://github.com/Azure/azure-storage-python/issues
.. _Azure Storage Service: http://azure.microsoft.com/en-us/documentation/services/storage/
.. _Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
.. _CONTRIBUTING.md doc: CONTRIBUTING.md

View file

View file

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)
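# note: declare_namespace marks this package as a pkg_resources namespace
# package, letting the separately installed azure-storage-* distributions
# share one import root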

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from .appendblobservice import AppendBlobService
from .blockblobservice import BlockBlobService
from .models import (
Container,
ContainerProperties,
@ -35,7 +37,4 @@ from .models import (
BlobPrefix,
DeleteSnapshot,
)
from .blockblobservice import BlockBlobService
from .pageblobservice import PageBlobService
from .appendblobservice import AppendBlobService

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,18 +11,20 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from azure.common import AzureException
from dateutil import parser
from azure.storage._error import AzureException
try:
from xml.etree import cElementTree as ETree
except ImportError:
from xml.etree import ElementTree as ETree
from .._common_conversion import (
from azure.storage.common._common_conversion import (
_decode_base64_to_text,
_to_str,
_get_content_md5
)
from .._deserialization import (
from azure.storage.common._deserialization import (
_parse_properties,
_int_to_str,
_parse_metadata,
@ -44,27 +46,28 @@ from .models import (
BlobPrefix,
)
from ._encryption import _decrypt_blob
from ..models import _list
from .._error import(
from azure.storage.common.models import _list
from azure.storage.common._error import (
_validate_content_match,
_ERROR_DECRYPTION_FAILURE,
)
from .._common_conversion import _get_content_md5
def _parse_base_properties(response):
'''
Extracts basic response headers.
'''
'''
resource_properties = ResourceProperties()
resource_properties.last_modified = parser.parse(response.headers.get('last-modified'))
resource_properties.etag = response.headers.get('etag')
return resource_properties
def _parse_page_properties(response):
'''
Extracts page response headers.
'''
'''
put_page = PageBlobProperties()
put_page.last_modified = parser.parse(response.headers.get('last-modified'))
put_page.etag = response.headers.get('etag')
@ -72,10 +75,11 @@ def _parse_page_properties(response):
return put_page
def _parse_append_block(response):
'''
Extracts append block response headers.
'''
'''
append_block = AppendBlockProperties()
append_block.last_modified = parser.parse(response.headers.get('last-modified'))
append_block.etag = response.headers.get('etag')
@ -84,14 +88,16 @@ def _parse_append_block(response):
return append_block
def _parse_snapshot_blob(response, name):
'''
Extracts snapshot return header.
'''
'''
snapshot = response.headers.get('x-ms-snapshot')
return _parse_blob(response, name, snapshot)
def _parse_lease(response):
'''
Extracts lease time and ID return headers.
@ -105,6 +111,7 @@ def _parse_lease(response):
return lease
def _parse_blob(response, name, snapshot, validate_content=False, require_encryption=False,
key_encryption_key=None, key_resolver_function=None, start_offset=None, end_offset=None):
if response is None:
@ -126,14 +133,15 @@ def _parse_blob(response, name, snapshot, validate_content=False, require_encryp
_validate_content_match(response.headers['content-md5'], computed_md5)
if key_encryption_key is not None or key_resolver_function is not None:
try:
response.body = _decrypt_blob(require_encryption, key_encryption_key, key_resolver_function,
response, start_offset, end_offset)
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
try:
response.body = _decrypt_blob(require_encryption, key_encryption_key, key_resolver_function,
response, start_offset, end_offset)
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
return Blob(name, snapshot, response.body, props, metadata)
def _parse_container(response, name):
if response is None:
return None
@ -142,12 +150,14 @@ def _parse_container(response, name):
props = _parse_properties(response, ContainerProperties)
return Container(name, props, metadata)
def _convert_xml_to_signed_identifiers_and_access(response):
acl = _convert_xml_to_signed_identifiers(response)
acl.public_access = response.headers.get('x-ms-blob-public-access')
return acl
def _convert_xml_to_containers(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -179,7 +189,7 @@ def _convert_xml_to_containers(response):
containers = _list()
list_element = ETree.fromstring(response.body)
# Set next marker
setattr(containers, 'next_marker', list_element.findtext('NextMarker'))
@ -205,12 +215,13 @@ def _convert_xml_to_containers(response):
container.properties.lease_state = properties_element.findtext('LeaseState')
container.properties.lease_duration = properties_element.findtext('LeaseDuration')
container.properties.public_access = properties_element.findtext('PublicAccess')
# Add container to list
containers.append(container)
return containers
LIST_BLOBS_ATTRIBUTE_MAP = {
'Last-Modified': (None, 'last_modified', parser.parse),
'Etag': (None, 'etag', _to_str),
@ -237,6 +248,7 @@ LIST_BLOBS_ATTRIBUTE_MAP = {
'ArchiveStatus': (None, 'rehydration_status', _to_str)
}
def _convert_xml_to_blob_list(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -285,7 +297,7 @@ def _convert_xml_to_blob_list(response):
if response is None or response.body is None:
return None
blob_list = _list()
blob_list = _list()
list_element = ETree.fromstring(response.body)
setattr(blob_list, 'next_marker', list_element.findtext('NextMarker'))
@ -309,7 +321,7 @@ def _convert_xml_to_blob_list(response):
for property_element in properties_element:
info = LIST_BLOBS_ATTRIBUTE_MAP.get(property_element.tag)
if info is None:
setattr(blob.properties, property_element.tag, _to_str(property_element.text))
setattr(blob.properties, property_element.tag, _to_str(property_element.text))
elif info[0] is None:
setattr(blob.properties, info[1], info[2](property_element.text))
else:
@ -322,12 +334,13 @@ def _convert_xml_to_blob_list(response):
blob.metadata = dict()
for metadata_element in metadata_root_element:
blob.metadata[metadata_element.tag] = metadata_element.text
# Add blob to list
blob_list.append(blob)
return blob_list
def _convert_xml_to_block_list(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -375,6 +388,7 @@ def _convert_xml_to_block_list(response):
return block_list
def _convert_xml_to_page_ranges(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -406,7 +420,7 @@ def _convert_xml_to_page_ranges(response):
elif page_range_element.tag == 'ClearRange':
is_cleared = True
else:
pass # ignore any unrecognized Page Range types
pass # ignore any unrecognized Page Range types
page_list.append(
PageRange(
@ -416,4 +430,4 @@ def _convert_xml_to_page_ranges(response):
)
)
return page_list
return page_list

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,19 +11,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import threading
from time import sleep
from azure.common import (
AzureHttpError,
)
from .._error import _ERROR_NO_SINGLE_THREAD_CHUNKING
from azure.storage.common._error import _ERROR_NO_SINGLE_THREAD_CHUNKING
def _download_blob_chunks(blob_service, container_name, blob_name, snapshot,
download_size, block_size, progress, start_range, end_range,
stream, max_connections, progress_callback, validate_content,
lease_id, if_modified_since, if_unmodified_since, if_match,
download_size, block_size, progress, start_range, end_range,
stream, max_connections, progress_callback, validate_content,
lease_id, if_modified_since, if_unmodified_since, if_match,
if_none_match, timeout, operation_context):
if max_connections <= 1:
raise ValueError(_ERROR_NO_SINGLE_THREAD_CHUNKING.format('blob'))
@ -37,7 +34,7 @@ def _download_blob_chunks(blob_service, container_name, blob_name, snapshot,
block_size,
progress,
start_range,
end_range,
end_range,
stream,
progress_callback,
validate_content,
@ -54,10 +51,11 @@ def _download_blob_chunks(blob_service, container_name, blob_name, snapshot,
executor = concurrent.futures.ThreadPoolExecutor(max_connections)
result = list(executor.map(downloader.process_chunk, downloader.get_chunk_offsets()))
class _BlobChunkDownloader(object):
def __init__(self, blob_service, container_name, blob_name, snapshot, download_size,
chunk_size, progress, start_range, end_range, stream,
progress_callback, validate_content, lease_id, if_modified_since,
chunk_size, progress, start_range, end_range, stream,
progress_callback, validate_content, lease_id, if_modified_since,
if_unmodified_since, if_match, if_none_match, timeout, operation_context):
self.blob_service = blob_service
self.container_name = container_name
@ -66,7 +64,7 @@ class _BlobChunkDownloader(object):
self.chunk_size = chunk_size
self.download_size = download_size
self.start_index = start_range
self.start_index = start_range
self.blob_end = end_range
self.stream = stream
@ -80,10 +78,10 @@ class _BlobChunkDownloader(object):
self.validate_content = validate_content
self.lease_id = lease_id
self.if_modified_since=if_modified_since
self.if_unmodified_since=if_unmodified_since
self.if_match=if_match
self.if_none_match=if_none_match
self.if_modified_since = if_modified_since
self.if_unmodified_since = if_unmodified_since
self.if_match = if_match
self.if_none_match = if_none_match
def get_chunk_offsets(self):
index = self.start_index
@ -135,4 +133,4 @@ class _BlobChunkDownloader(object):
# This makes sure that if_match is set so that we can validate
# that subsequent downloads are to an unmodified blob
self.if_match = response.properties.etag
return response
return response
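# For context, a standalone sketch of the bounded parallel range-download
# pattern used above; fetch_range is a hypothetical callable that reads one
# byte range, not part of this library.
import concurrent.futures

def download_in_chunks(fetch_range, total_size, chunk_size, max_connections):
    # one ranged read per chunk offset, spread across a bounded thread pool
    offsets = range(0, total_size, chunk_size)
    with concurrent.futures.ThreadPoolExecutor(max_connections) as executor:
        return list(executor.map(fetch_range, offsets))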

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,26 +11,29 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from os import urandom
from json import(
from json import (
dumps,
loads,
)
from .._error import(
_validate_not_none,
_validate_key_encryption_key_wrap,
_ERROR_DATA_NOT_ENCRYPTED,
)
from .._encryption import (
from os import urandom
from cryptography.hazmat.primitives.padding import PKCS7
from azure.storage.common._encryption import (
_generate_encryption_data_dict,
_generate_AES_CBC_cipher,
_dict_to_encryption_data,
_validate_and_unwrap_cek,
_EncryptionAlgorithm,
)
from cryptography.hazmat.primitives.padding import PKCS7
from azure.storage.common._error import (
_validate_not_none,
_validate_key_encryption_key_wrap,
_ERROR_DATA_NOT_ENCRYPTED,
)
def _encrypt_blob(blob, key_encryption_key):
'''
@ -69,11 +72,12 @@ def _encrypt_blob(blob, key_encryption_key):
encryptor = cipher.encryptor()
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
encryption_data = _generate_encryption_data_dict(key_encryption_key, content_encryption_key,
initialization_vector)
initialization_vector)
encryption_data['EncryptionMode'] = 'FullBlob'
return dumps(encryption_data), encrypted_data
def _generate_blob_encryption_data(key_encryption_key):
'''
Generates the encryption_metadata for the blob.
@ -92,14 +96,15 @@ def _generate_blob_encryption_data(key_encryption_key):
content_encryption_key = urandom(32)
initialization_vector = urandom(16)
encryption_data = _generate_encryption_data_dict(key_encryption_key,
content_encryption_key,
initialization_vector)
content_encryption_key,
initialization_vector)
encryption_data['EncryptionMode'] = 'FullBlob'
encryption_data = dumps(encryption_data)
return (content_encryption_key, initialization_vector, encryption_data)
def _decrypt_blob(require_encryption, key_encryption_key, key_resolver,
def _decrypt_blob(require_encryption, key_encryption_key, key_resolver,
response, start_offset, end_offset):
'''
Decrypts the given blob contents and returns only the requested range.
@ -120,7 +125,7 @@ def _decrypt_blob(require_encryption, key_encryption_key, key_resolver,
_validate_not_none('response', response)
content = response.body
_validate_not_none('content', content)
try:
encryption_data = _dict_to_encryption_data(loads(response.headers['x-ms-meta-encryptiondata']))
except:
@ -129,7 +134,7 @@ def _decrypt_blob(require_encryption, key_encryption_key, key_resolver,
else:
return content
if not(encryption_data.encryption_agent.encryption_algorithm == _EncryptionAlgorithm.AES_CBC_256):
if not (encryption_data.encryption_agent.encryption_algorithm == _EncryptionAlgorithm.AES_CBC_256):
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM)
blob_type = response.headers['x-ms-blob-type']
@ -157,7 +162,7 @@ def _decrypt_blob(require_encryption, key_encryption_key, key_resolver,
else:
iv = encryption_data.content_encryption_IV
if end_range == blob_size-1:
if end_range == blob_size - 1:
unpad = True
else:
unpad = True
@ -169,21 +174,22 @@ def _decrypt_blob(require_encryption, key_encryption_key, key_resolver,
content_encryption_key = _validate_and_unwrap_cek(encryption_data, key_encryption_key, key_resolver)
cipher = _generate_AES_CBC_cipher(content_encryption_key, iv)
decryptor = cipher.decryptor()
content = decryptor.update(content) + decryptor.finalize()
if unpad:
unpadder = PKCS7(128).unpadder()
content = unpadder.update(content) + unpadder.finalize()
return content[start_offset : len(content) - end_offset]
return content[start_offset: len(content) - end_offset]
def _get_blob_encryptor_and_padder(cek, iv, should_pad):
encryptor = None
padder = None
if cek is not None and iv is not None:
cipher = _generate_AES_CBC_cipher(cek, iv)
encryptor = cipher.encryptor()
padder = PKCS7(128).padder() if should_pad else None
return encryptor, padder
return encryptor, padder
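# For reference, a self-contained sketch of the AES-256-CBC + PKCS7 scheme the
# helpers above implement; the key, IV, and data are illustrative placeholders.
from os import urandom
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.padding import PKCS7

key = urandom(32)  # 256-bit content encryption key
iv = urandom(16)   # AES block-sized initialization vector
padder = PKCS7(128).padder()
padded = padder.update(b'blob content') + padder.finalize()
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv), default_backend()).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()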

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
_ERROR_PAGE_BLOB_SIZE_ALIGNMENT = \
'Invalid page blob size: {0}. ' + \
@ -35,4 +35,4 @@ _ERROR_INVALID_LEASE_BREAK_PERIOD = \
_ERROR_NO_SINGLE_THREAD_CHUNKING = \
'To use blob chunk downloader more than 1 thread must be ' + \
'used since get_blob_to_bytes should be called for single threaded ' + \
'blob downloads.'
'blob downloads.'

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,17 +11,18 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from xml.sax.saxutils import escape as xml_escape
try:
from xml.etree import cElementTree as ETree
except ImportError:
from xml.etree import ElementTree as ETree
from .._common_conversion import (
from azure.storage.common._common_conversion import (
_encode_base64,
_str,
)
from .._error import (
from azure.storage.common._error import (
_validate_not_none,
_ERROR_START_END_NEEDED_FOR_MD5,
_ERROR_RANGE_TOO_LARGE_FOR_MD5,
@ -33,6 +34,7 @@ from ._error import (
)
from io import BytesIO
def _get_path(container_name=None, blob_name=None):
'''
Creates the path to access a blob resource.
@ -83,6 +85,7 @@ def _validate_and_format_range_headers(request, start_range, end_range, start_ra
request.headers['x-ms-range-get-content-md5'] = 'true'
def _convert_block_list_to_xml(block_id_list):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -102,7 +105,7 @@ def _convert_block_list_to_xml(block_id_list):
return ''
block_list_element = ETree.Element('BlockList');
# Enabled
for block in block_id_list:
if block.id is None:
@ -119,6 +122,6 @@ def _convert_block_list_to_xml(block_id_list):
finally:
output = stream.getvalue()
stream.close()
# return xml value
return output
return output

View file

@ -12,34 +12,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from io import (BytesIO, IOBase, SEEK_CUR, SEEK_END, SEEK_SET, UnsupportedOperation)
from math import ceil
from threading import Lock
from time import sleep
from cryptography.hazmat.primitives.padding import PKCS7
from .._common_conversion import _encode_base64
from .._serialization import (
from azure.storage.common._common_conversion import _encode_base64
from azure.storage.common._error import _ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM
from azure.storage.common._serialization import (
url_quote,
_get_data_bytes_only,
_len_plus
)
from ._encryption import(
from ._encryption import (
_get_blob_encryptor_and_padder,
)
from azure.common import (
AzureHttpError,
)
from io import (BytesIO, IOBase, SEEK_CUR, SEEK_END, SEEK_SET, UnsupportedOperation)
from .models import BlobBlock
from math import ceil
from .._error import _ERROR_VALUE_SHOULD_BE_SEEKABLE_STREAM
def _upload_blob_chunks(blob_service, container_name, blob_name,
blob_size, block_size, stream, max_connections,
progress_callback, validate_content, lease_id, uploader_class,
progress_callback, validate_content, lease_id, uploader_class,
maxsize_condition=None, if_match=None, timeout=None,
content_encryption_key=None, initialization_vector=None, resource_properties=None):
encryptor, padder = _get_blob_encryptor_and_padder(content_encryption_key, initialization_vector,
uploader_class is not _PageBlobChunkUploader)
@ -111,11 +105,11 @@ def _upload_blob_chunks(blob_service, container_name, blob_name,
return range_ids
def _upload_blob_substream_blocks(blob_service, container_name, blob_name,
blob_size, block_size, stream, max_connections,
progress_callback, validate_content, lease_id, uploader_class,
maxsize_condition=None, if_match=None, timeout=None):
uploader = uploader_class(
blob_service,
container_name,
@ -150,6 +144,7 @@ def _upload_blob_substream_blocks(blob_service, container_name, blob_name,
return range_ids
class _BlobChunkUploader(object):
def __init__(self, blob_service, container_name, blob_name, blob_size,
chunk_size, stream, parallel, progress_callback,
@ -183,7 +178,7 @@ class _BlobChunkUploader(object):
# Buffer until we either reach the end of the stream or get a whole chunk.
while True:
if self.blob_size:
read_size = min(self.chunk_size-len(data), self.blob_size - (index + len(data)))
read_size = min(self.chunk_size - len(data), self.blob_size - (index + len(data)))
temp = self.stream.read(read_size)
temp = _get_data_bytes_only('temp', temp)
data += temp
@ -226,7 +221,7 @@ class _BlobChunkUploader(object):
self.progress_callback(total, self.blob_size)
def _upload_chunk_with_progress(self, chunk_offset, chunk_data):
range_id = self._upload_chunk(chunk_offset, chunk_data)
range_id = self._upload_chunk(chunk_offset, chunk_data)
self._update_progress(len(chunk_data))
return range_id
@ -260,6 +255,7 @@ class _BlobChunkUploader(object):
self.etag = resp.etag
self.last_modified = resp.last_modified
class _BlockBlobChunkUploader(_BlobChunkUploader):
def _upload_chunk(self, chunk_offset, chunk_data):
block_id = url_quote(_encode_base64('{0:032d}'.format(chunk_offset)))
@ -310,6 +306,7 @@ class _PageBlobChunkUploader(_BlobChunkUploader):
self.set_response_properties(resp)
class _AppendBlobChunkUploader(_BlobChunkUploader):
def _upload_chunk(self, chunk_offset, chunk_data):
if not hasattr(self, 'current_length'):
@ -338,6 +335,7 @@ class _AppendBlobChunkUploader(_BlobChunkUploader):
self.set_response_properties(resp)
class _SubStream(IOBase):
def __init__(self, wrapped_stream, stream_begin_index, length, lockObj):
# Python 2.7: file-like objects created with open() typically support seek(), but are not
@ -455,4 +453,4 @@ class _SubStream(IOBase):
raise UnsupportedOperation
def writeable(self):
return False
return False

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,46 +11,48 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
from .._error import (
_validate_not_none,
_validate_type_bytes,
_validate_encryption_unsupported,
_ERROR_VALUE_NEGATIVE,
)
from .._common_conversion import (
# --------------------------------------------------------------------------
import sys
from os import path
from azure.storage.common._common_conversion import (
_to_str,
_int_to_str,
_datetime_to_utc_string,
_get_content_md5,
)
from .._serialization import (
_get_data_bytes_only,
_add_metadata_headers,
)
from .._http import HTTPRequest
from ._upload_chunking import (
_AppendBlobChunkUploader,
_upload_blob_chunks,
)
from .models import (
_BlobTypes,
ResourceProperties
)
from .._constants import (
from azure.storage.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
)
from ._serialization import (
_get_path,
from azure.storage.common._error import (
_validate_not_none,
_validate_type_bytes,
_validate_encryption_unsupported,
_ERROR_VALUE_NEGATIVE,
)
from azure.storage.common._http import HTTPRequest
from azure.storage.common._serialization import (
_get_data_bytes_only,
_add_metadata_headers,
)
from ._deserialization import (
_parse_append_block,
_parse_base_properties,
)
from ._serialization import (
_get_path,
)
from ._upload_chunking import (
_AppendBlobChunkUploader,
_upload_blob_chunks,
)
from .baseblobservice import BaseBlobService
from os import path
import sys
from .models import (
_BlobTypes,
ResourceProperties
)
if sys.version_info >= (3,):
from io import BytesIO
else:
@ -75,7 +77,7 @@ class AppendBlobService(BaseBlobService):
'''
MAX_BLOCK_SIZE = 4 * 1024 * 1024
def __init__(self, account_name=None, account_key=None, sas_token=None,
def __init__(self, account_name=None, account_key=None, sas_token=None,
is_emulated=False, protocol=DEFAULT_PROTOCOL, endpoint_suffix=SERVICE_HOST_BASE,
custom_domain=None, request_session=None, connection_string=None, socket_timeout=None):
'''
@ -119,7 +121,7 @@ class AppendBlobService(BaseBlobService):
'''
self.blob_type = _BlobTypes.AppendBlob
super(AppendBlobService, self).__init__(
account_name, account_key, sas_token, is_emulated, protocol, endpoint_suffix,
account_name, account_key, sas_token, is_emulated, protocol, endpoint_suffix,
custom_domain, request_session, connection_string, socket_timeout)
def create_blob(self, container_name, blob_name, content_settings=None,
@ -271,7 +273,7 @@ class AppendBlobService(BaseBlobService):
request.query = {
'comp': 'appendblock',
'timeout': _int_to_str(timeout),
}
}
request.headers = {
'x-ms-blob-condition-maxsize': _to_str(maxsize_condition),
'x-ms-blob-condition-appendpos': _to_str(appendpos_condition),
@ -289,11 +291,11 @@ class AppendBlobService(BaseBlobService):
return self._perform_request(request, _parse_append_block)
#----Convenience APIs----------------------------------------------
# ----Convenience APIs----------------------------------------------
def append_blob_from_path(
self, container_name, blob_name, file_path, validate_content=False,
maxsize_condition=None, progress_callback=None, lease_id=None, timeout=None):
self, container_name, blob_name, file_path, validate_content=False,
maxsize_condition=None, progress_callback=None, lease_id=None, timeout=None):
'''
Appends to the content of an existing blob from a file path, with automatic
chunking and progress notifications.
@ -339,20 +341,20 @@ class AppendBlobService(BaseBlobService):
count = path.getsize(file_path)
with open(file_path, 'rb') as stream:
return self.append_blob_from_stream(
container_name,
blob_name,
stream,
count=count,
validate_content=validate_content,
maxsize_condition=maxsize_condition,
progress_callback=progress_callback,
lease_id=lease_id,
timeout=timeout)
container_name,
blob_name,
stream,
count=count,
validate_content=validate_content,
maxsize_condition=maxsize_condition,
progress_callback=progress_callback,
lease_id=lease_id,
timeout=timeout)
def append_blob_from_bytes(
self, container_name, blob_name, blob, index=0, count=None,
validate_content=False, maxsize_condition=None, progress_callback=None,
lease_id=None, timeout=None):
self, container_name, blob_name, blob, index=0, count=None,
validate_content=False, maxsize_condition=None, progress_callback=None,
lease_id=None, timeout=None):
'''
Appends to the content of an existing blob from an array of bytes, with
automatic chunking and progress notifications.
@ -412,20 +414,20 @@ class AppendBlobService(BaseBlobService):
stream.seek(index)
return self.append_blob_from_stream(
container_name,
blob_name,
stream,
count=count,
validate_content=validate_content,
maxsize_condition=maxsize_condition,
lease_id=lease_id,
progress_callback=progress_callback,
timeout=timeout)
container_name,
blob_name,
stream,
count=count,
validate_content=validate_content,
maxsize_condition=maxsize_condition,
lease_id=lease_id,
progress_callback=progress_callback,
timeout=timeout)
def append_blob_from_text(
self, container_name, blob_name, text, encoding='utf-8',
validate_content=False, maxsize_condition=None, progress_callback=None,
lease_id=None, timeout=None):
self, container_name, blob_name, text, encoding='utf-8',
validate_content=False, maxsize_condition=None, progress_callback=None,
lease_id=None, timeout=None):
'''
Appends to the content of an existing blob from str/unicode, with
automatic chunking and progress notifications.
@ -475,21 +477,21 @@ class AppendBlobService(BaseBlobService):
text = text.encode(encoding)
return self.append_blob_from_bytes(
container_name,
blob_name,
text,
index=0,
count=len(text),
validate_content=validate_content,
maxsize_condition=maxsize_condition,
lease_id=lease_id,
progress_callback=progress_callback,
timeout=timeout)
container_name,
blob_name,
text,
index=0,
count=len(text),
validate_content=validate_content,
maxsize_condition=maxsize_condition,
lease_id=lease_id,
progress_callback=progress_callback,
timeout=timeout)
def append_blob_from_stream(
self, container_name, blob_name, stream, count=None,
validate_content=False, maxsize_condition=None, progress_callback=None,
lease_id=None, timeout=None):
self, container_name, blob_name, stream, count=None,
validate_content=False, maxsize_condition=None, progress_callback=None,
lease_id=None, timeout=None):
'''
Appends to the content of an existing blob from a file/stream, with
automatic chunking and progress notifications.
@ -546,7 +548,7 @@ class AppendBlobService(BaseBlobService):
blob_size=count,
block_size=self.MAX_BLOCK_SIZE,
stream=stream,
max_connections=1, # upload not easily parallelizable
max_connections=1, # upload not easily parallelizable
progress_callback=progress_callback,
validate_content=validate_content,
lease_id=lease_id,
@ -556,4 +558,4 @@ class AppendBlobService(BaseBlobService):
resource_properties=resource_properties
)
return resource_properties
return resource_properties

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,74 +11,58 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
from azure.common import AzureHttpError
from azure.storage._error import AzureException
from .._error import (
_dont_fail_not_exist,
_dont_fail_on_exist,
_validate_not_none,
_validate_decryption_required,
_validate_access_policies,
_validate_content_match,
_ERROR_PARALLEL_NOT_SEEKABLE,
_ERROR_DECRYPTION_FAILURE,
)
from ._error import (
_ERROR_INVALID_LEASE_DURATION,
_ERROR_INVALID_LEASE_BREAK_PERIOD,
)
from .._common_conversion import (
_int_to_str,
_to_str,
_datetime_to_utc_string,
_get_content_md5,
)
# --------------------------------------------------------------------------
import sys
from abc import ABCMeta
from .._serialization import (
_get_request_body,
_convert_signed_identifiers_to_xml,
_convert_service_properties_to_xml,
_add_metadata_headers,
)
from .._http import HTTPRequest
from ._download_chunking import _download_blob_chunks
from ..models import (
Services,
ListGenerator,
_OperationContext,
)
from .models import (
Blob,
BlobProperties,
_LeaseActions,
ContainerPermissions,
BlobPermissions,
Container,
ContainerProperties,
)
from .._auth import (
from azure.common import AzureHttpError
from azure.storage.common._auth import (
_StorageSASAuthentication,
_StorageSharedKeyAuthentication,
_StorageNoAuthentication,
)
from .._connection import _ServiceParameters
from .._constants import (
from azure.storage.common._common_conversion import (
_int_to_str,
_to_str,
_datetime_to_utc_string,
)
from azure.storage.common._connection import _ServiceParameters
from azure.storage.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
)
from .._deserialization import (
from azure.storage.common._deserialization import (
_convert_xml_to_service_properties,
_get_download_size,
_parse_metadata,
_parse_properties,
_convert_xml_to_service_stats,
_parse_length_from_content_range,
)
from ._serialization import (
_get_path,
_validate_and_format_range_headers,
from azure.storage.common._error import (
_dont_fail_not_exist,
_dont_fail_on_exist,
_validate_not_none,
_validate_decryption_required,
_validate_access_policies,
_ERROR_PARALLEL_NOT_SEEKABLE,
)
from azure.storage.common._http import HTTPRequest
from azure.storage.common._serialization import (
_get_request_body,
_convert_signed_identifiers_to_xml,
_convert_service_properties_to_xml,
_add_metadata_headers,
)
from azure.storage.common.models import (
Services,
ListGenerator,
_OperationContext,
)
from azure.storage.common.sharedaccesssignature import (
SharedAccessSignature,
)
from azure.storage.common.storageclient import StorageClient
from ._deserialization import (
_convert_xml_to_containers,
_parse_blob,
@ -89,18 +73,29 @@ from ._deserialization import (
_convert_xml_to_signed_identifiers_and_access,
_parse_base_properties,
)
from ..sharedaccesssignature import (
SharedAccessSignature,
from ._download_chunking import _download_blob_chunks
from ._error import (
_ERROR_INVALID_LEASE_DURATION,
_ERROR_INVALID_LEASE_BREAK_PERIOD,
)
from ..storageclient import StorageClient
import sys
from ._serialization import (
_get_path,
_validate_and_format_range_headers,
)
from .models import (
BlobProperties,
_LeaseActions,
ContainerPermissions,
BlobPermissions,
)
if sys.version_info >= (3,):
from io import BytesIO
else:
from cStringIO import StringIO as BytesIO
class BaseBlobService(StorageClient):
class BaseBlobService(StorageClient):
'''
This is the main class managing Blob resources.
@ -148,7 +143,7 @@ class BaseBlobService(StorageClient):
MAX_SINGLE_GET_SIZE = 32 * 1024 * 1024
MAX_CHUNK_GET_SIZE = 4 * 1024 * 1024
def __init__(self, account_name=None, account_key=None, sas_token=None,
def __init__(self, account_name=None, account_key=None, sas_token=None,
is_emulated=False, protocol=DEFAULT_PROTOCOL, endpoint_suffix=SERVICE_HOST_BASE,
custom_domain=None, request_session=None, connection_string=None, socket_timeout=None):
'''
@ -192,11 +187,11 @@ class BaseBlobService(StorageClient):
'''
service_params = _ServiceParameters.get_service_parameters(
'blob',
account_name=account_name,
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
custom_domain=custom_domain,
request_session=request_session,
@ -256,8 +251,8 @@ class BaseBlobService(StorageClient):
return url
def generate_account_shared_access_signature(self, resource_types, permission,
expiry, start=None, ip=None, protocol=None):
def generate_account_shared_access_signature(self, resource_types, permission,
expiry, start=None, ip=None, protocol=None):
'''
Generates a shared access signature for the blob service.
Use the returned signature with the sas_token parameter of any BlobService.
@ -301,15 +296,15 @@ class BaseBlobService(StorageClient):
_validate_not_none('self.account_key', self.account_key)
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_account(Services.BLOB, resource_types, permission,
return sas.generate_account(Services.BLOB, resource_types, permission,
expiry, start=start, ip=ip, protocol=protocol)
def generate_container_shared_access_signature(self, container_name,
permission=None, expiry=None,
start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
def generate_container_shared_access_signature(self, container_name,
permission=None, expiry=None,
start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
'''
Generates a shared access signature for the container.
Use the returned signature with the sas_token parameter of any BlobService.
@ -376,9 +371,9 @@ class BaseBlobService(StorageClient):
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_container(
container_name,
permission,
permission,
expiry,
start=start,
start=start,
id=id,
ip=ip,
protocol=protocol,
@ -390,11 +385,11 @@ class BaseBlobService(StorageClient):
)
def generate_blob_shared_access_signature(
self, container_name, blob_name, permission=None,
expiry=None, start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
self, container_name, blob_name, permission=None,
expiry=None, start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
'''
Generates a shared access signature for the blob.
Use the returned signature with the sas_token parameter of any BlobService.
@ -464,9 +459,9 @@ class BaseBlobService(StorageClient):
return sas.generate_blob(
container_name,
blob_name,
permission,
permission,
expiry,
start=start,
start=start,
id=id,
ip=ip,
protocol=protocol,
@ -477,7 +472,7 @@ class BaseBlobService(StorageClient):
content_type=content_type,
)
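# A usage sketch of the SAS helper above; the account credentials, container,
# and blob names are placeholders.
from datetime import datetime, timedelta
from azure.storage.blob import BlockBlobService
from azure.storage.blob.models import BlobPermissions

service = BlockBlobService(account_name='myaccount', account_key='mykey')
sas_token = service.generate_blob_shared_access_signature(
    'mycontainer', 'myblob.txt',
    permission=BlobPermissions.READ,
    expiry=datetime.utcnow() + timedelta(hours=1))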
def list_containers(self, prefix=None, num_results=None, include_metadata=False,
def list_containers(self, prefix=None, num_results=None, include_metadata=False,
marker=None, timeout=None):
'''
Returns a generator to list the containers under the specified account.
@ -509,14 +504,13 @@ class BaseBlobService(StorageClient):
'''
include = 'metadata' if include_metadata else None
operation_context = _OperationContext(location_lock=True)
kwargs = {'prefix': prefix, 'marker': marker, 'max_results': num_results,
'include': include, 'timeout': timeout, '_context': operation_context}
kwargs = {'prefix': prefix, 'marker': marker, 'max_results': num_results,
'include': include, 'timeout': timeout, '_context': operation_context}
resp = self._list_containers(**kwargs)
return ListGenerator(resp, self._list_containers, (), kwargs)
def _list_containers(self, prefix=None, marker=None, max_results=None,
def _list_containers(self, prefix=None, marker=None, max_results=None,
include=None, timeout=None, _context=None):
'''
Returns a list of the containers under the specified account.
@ -545,7 +539,7 @@ class BaseBlobService(StorageClient):
request = HTTPRequest()
request.method = 'GET'
request.host_locations = self._get_host_locations(secondary=True)
request.path = _get_path()
request.path = _get_path()
request.query = {
'comp': 'list',
'prefix': _to_str(prefix),
@ -840,7 +834,7 @@ class BaseBlobService(StorageClient):
request.headers = {
'x-ms-lease-id': _to_str(lease_id),
'If-Modified-Since': _datetime_to_utc_string(if_modified_since),
'If-Unmodified-Since': _datetime_to_utc_string(if_unmodified_since),
'If-Unmodified-Since': _datetime_to_utc_string(if_unmodified_since),
}
if not fail_not_exist:
@ -855,9 +849,9 @@ class BaseBlobService(StorageClient):
return True
def _lease_container_impl(
self, container_name, lease_action, lease_id, lease_duration,
lease_break_period, proposed_lease_id, if_modified_since,
if_unmodified_since, timeout):
self, container_name, lease_action, lease_id, lease_duration,
lease_break_period, proposed_lease_id, if_modified_since,
if_unmodified_since, timeout):
'''
Establishes and manages a lease on a container.
The Lease Container operation can be called in one of five modes
@ -937,8 +931,8 @@ class BaseBlobService(StorageClient):
return self._perform_request(request, _parse_lease)
def acquire_container_lease(
self, container_name, lease_duration=-1, proposed_lease_id=None,
if_modified_since=None, if_unmodified_since=None, timeout=None):
self, container_name, lease_duration=-1, proposed_lease_id=None,
if_modified_since=None, if_unmodified_since=None, timeout=None):
'''
Requests a new lease. If the container does not have an active lease,
the Blob service creates a lease on the container and returns a new
@ -972,15 +966,15 @@ class BaseBlobService(StorageClient):
:return: str
'''
_validate_not_none('lease_duration', lease_duration)
if lease_duration is not -1 and\
(lease_duration < 15 or lease_duration > 60):
if lease_duration is not -1 and \
(lease_duration < 15 or lease_duration > 60):
raise ValueError(_ERROR_INVALID_LEASE_DURATION)
lease = self._lease_container_impl(container_name,
lease = self._lease_container_impl(container_name,
_LeaseActions.Acquire,
None, # lease_id
None, # lease_id
lease_duration,
None, # lease_break_period
None, # lease_break_period
proposed_lease_id,
if_modified_since,
if_unmodified_since,
@ -988,8 +982,8 @@ class BaseBlobService(StorageClient):
return lease['id']
def renew_container_lease(
self, container_name, lease_id, if_modified_since=None,
if_unmodified_since=None, timeout=None):
self, container_name, lease_id, if_modified_since=None,
if_unmodified_since=None, timeout=None):
'''
Renews the lease. The lease can be renewed if the lease ID specified
matches that associated with the container. Note that
@ -1020,20 +1014,20 @@ class BaseBlobService(StorageClient):
'''
_validate_not_none('lease_id', lease_id)
lease = self._lease_container_impl(container_name,
lease = self._lease_container_impl(container_name,
_LeaseActions.Renew,
lease_id,
None, # lease_duration
None, # lease_break_period
None, # proposed_lease_id
None, # lease_duration
None, # lease_break_period
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
timeout)
return lease['id']
def release_container_lease(
self, container_name, lease_id, if_modified_since=None,
if_unmodified_since=None, timeout=None):
self, container_name, lease_id, if_modified_since=None,
if_unmodified_since=None, timeout=None):
'''
Release the lease. The lease may be released if the lease_id specified matches
that associated with the container. Releasing the lease allows another client
@ -1060,19 +1054,19 @@ class BaseBlobService(StorageClient):
'''
_validate_not_none('lease_id', lease_id)
self._lease_container_impl(container_name,
_LeaseActions.Release,
lease_id,
None, # lease_duration
None, # lease_break_period
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
timeout)
self._lease_container_impl(container_name,
_LeaseActions.Release,
lease_id,
None, # lease_duration
None, # lease_break_period
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
timeout)
def break_container_lease(
self, container_name, lease_break_period=None,
if_modified_since=None, if_unmodified_since=None, timeout=None):
self, container_name, lease_break_period=None,
if_modified_since=None, if_unmodified_since=None, timeout=None):
'''
Break the lease, if the container has an active lease. Once a lease is
broken, it cannot be renewed. Any authorized request can break the lease;
@ -1113,21 +1107,21 @@ class BaseBlobService(StorageClient):
'''
if (lease_break_period is not None) and (lease_break_period < 0 or lease_break_period > 60):
raise ValueError(_ERROR_INVALID_LEASE_BREAK_PERIOD)
lease = self._lease_container_impl(container_name,
lease = self._lease_container_impl(container_name,
_LeaseActions.Break,
None, # lease_id
None, # lease_duration
None, # lease_id
None, # lease_duration
lease_break_period,
None, # proposed_lease_id
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
timeout)
return lease['time']
def change_container_lease(
self, container_name, lease_id, proposed_lease_id,
if_modified_since=None, if_unmodified_since=None, timeout=None):
self, container_name, lease_id, proposed_lease_id,
if_modified_since=None, if_unmodified_since=None, timeout=None):
'''
Change the lease ID of an active lease. A change must include the current
lease ID and a new lease ID.
@ -1156,17 +1150,17 @@ class BaseBlobService(StorageClient):
'''
_validate_not_none('lease_id', lease_id)
self._lease_container_impl(container_name,
_LeaseActions.Change,
lease_id,
None, # lease_duration
None, # lease_break_period
proposed_lease_id,
if_modified_since,
if_unmodified_since,
timeout)
self._lease_container_impl(container_name,
_LeaseActions.Change,
lease_id,
None, # lease_duration
None, # lease_break_period
proposed_lease_id,
if_modified_since,
if_unmodified_since,
timeout)
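# A usage sketch of the container-lease methods above; credentials and the
# container name are placeholders. Durations must be -1 (infinite) or 15-60s.
from azure.storage.blob import BlockBlobService

service = BlockBlobService(account_name='myaccount', account_key='mykey')
lease_id = service.acquire_container_lease('mycontainer', lease_duration=15)
service.renew_container_lease('mycontainer', lease_id)
service.release_container_lease('mycontainer', lease_id)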
def list_blobs(self, container_name, prefix=None, num_results=None, include=None,
def list_blobs(self, container_name, prefix=None, num_results=None, include=None,
delimiter=None, marker=None, timeout=None):
'''
Returns a generator to list the blobs under the specified container.
@ -1208,16 +1202,16 @@ class BaseBlobService(StorageClient):
'''
operation_context = _OperationContext(location_lock=True)
args = (container_name,)
kwargs = {'prefix': prefix, 'marker': marker, 'max_results': num_results,
'include': include, 'delimiter': delimiter, 'timeout': timeout,
'_context': operation_context}
kwargs = {'prefix': prefix, 'marker': marker, 'max_results': num_results,
'include': include, 'delimiter': delimiter, 'timeout': timeout,
'_context': operation_context}
resp = self._list_blobs(*args, **kwargs)
return ListGenerator(resp, self._list_blobs, args, kwargs)
def _list_blobs(self, container_name, prefix=None, marker=None,
max_results=None, include=None, delimiter=None, timeout=None,
_context=None):
max_results=None, include=None, delimiter=None, timeout=None,
_context=None):
'''
Returns the list of blobs under the specified container.
@ -1321,8 +1315,8 @@ class BaseBlobService(StorageClient):
return self._perform_request(request, _convert_xml_to_service_stats)
def set_blob_service_properties(
self, logging=None, hour_metrics=None, minute_metrics=None,
cors=None, target_version=None, timeout=None):
self, logging=None, hour_metrics=None, minute_metrics=None,
cors=None, target_version=None, timeout=None):
'''
Sets the properties of a storage account's Blob service, including
Azure Storage Analytics. If an element (ex Logging) is left as None, the
@ -1350,7 +1344,7 @@ class BaseBlobService(StorageClient):
request = HTTPRequest()
request.method = 'PUT'
request.host_locations = self._get_host_locations()
request.path = _get_path()
request.path = _get_path()
request.query = {
'restype': 'service',
'comp': 'properties',
@ -1376,7 +1370,7 @@ class BaseBlobService(StorageClient):
request = HTTPRequest()
request.method = 'GET'
request.host_locations = self._get_host_locations(secondary=True)
request.path = _get_path()
request.path = _get_path()
request.query = {
'restype': 'service',
'comp': 'properties',
@ -1386,9 +1380,9 @@ class BaseBlobService(StorageClient):
return self._perform_request(request, _convert_xml_to_service_properties)
def get_blob_properties(
self, container_name, blob_name, snapshot=None, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
self, container_name, blob_name, snapshot=None, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Returns all user-defined metadata, standard HTTP properties, and
system properties for the blob. It does not return the content of the blob.
@ -1450,9 +1444,9 @@ class BaseBlobService(StorageClient):
return self._perform_request(request, _parse_blob, [blob_name, snapshot])
def set_blob_properties(
self, container_name, blob_name, content_settings=None, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
self, container_name, blob_name, content_settings=None, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Sets system properties on the blob. If one property is set for the
content_settings, all properties will be overridden.
@ -1542,10 +1536,10 @@ class BaseBlobService(StorageClient):
return False
def _get_blob(
self, container_name, blob_name, snapshot=None, start_range=None,
end_range=None, validate_content=False, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None,
_context=None):
'''
Downloads a blob's content, metadata, and properties. You can also
call this API to read a snapshot. You can specify a range if you don't
@ -1610,7 +1604,7 @@ class BaseBlobService(StorageClient):
self.key_encryption_key,
self.key_resolver_function)
start_offset, end_offset = 0, 0
if self.key_encryption_key is not None or self.key_resolver_function is not None:
if start_range is not None:
# Align the start of the range along a 16 byte block
@ -1652,19 +1646,19 @@ class BaseBlobService(StorageClient):
end_range_required=False,
check_content_md5=validate_content)
return self._perform_request(request, _parse_blob,
[blob_name, snapshot, validate_content, self.require_encryption,
self.key_encryption_key, self.key_resolver_function,
start_offset, end_offset],
operation_context=_context)
def get_blob_to_path(
self, container_name, blob_name, file_path, open_mode='wb',
snapshot=None, start_range=None, end_range=None,
validate_content=False, progress_callback=None,
max_connections=2, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None,
timeout=None):
'''
Downloads a blob to a file path, with automatic chunking and progress
notifications. Returns an instance of :class:`Blob` with
@ -1783,11 +1777,11 @@ class BaseBlobService(StorageClient):
return blob
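Illustrative sketch, not part of this commit (reusing the placeholder
service client from the listing example above): download to a local file
with parallel chunking.

::

    # max_connections > 1 fetches large blobs in parallel chunks;
    # progress_callback receives (bytes done so far, total bytes).
    blob = service.get_blob_to_path(
        'mycontainer', 'report.csv', '/tmp/report.csv',
        max_connections=4,
        progress_callback=lambda current, total: print(current, total))
    print(blob.properties.etag)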
def get_blob_to_stream(
self, container_name, blob_name, stream, snapshot=None,
start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Downloads a blob to a stream, with automatic chunking and progress
@ -1898,7 +1892,7 @@ class BaseBlobService(StorageClient):
# If max_connections is greater than 1, do the first get to establish the
# size of the blob and get the first segment of data
else:
if sys.version_info >= (3,) and not stream.seekable():
raise ValueError(_ERROR_PARALLEL_NOT_SEEKABLE)
@ -1976,15 +1970,15 @@ class BaseBlobService(StorageClient):
# If the blob is small or single shot download was used, the download is
# complete at this point. If blob size is large, use parallel download.
if blob.properties.content_length != download_size:
# Lock on the etag. This can be overridden by the user by specifying '*'
if_match = if_match if if_match is not None else blob.properties.etag
end_blob = blob_size
if end_range:
# Use the end_range unless it is over the end of the blob
end_blob = min(blob_size, end_range + 1)
_download_blob_chunks(
self,
container_name,
@ -1993,7 +1987,7 @@ class BaseBlobService(StorageClient):
download_size,
self.MAX_CHUNK_GET_SIZE,
first_get_size,
initial_request_end + 1, # start where the first download ended
end_blob,
stream,
max_connections,
@ -2021,13 +2015,13 @@ class BaseBlobService(StorageClient):
blob.properties.content_md5 = None
return blob
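Illustrative sketch, not part of this commit: as the check above enforces,
the target stream must be seekable when max_connections > 1, which a
BytesIO buffer is.

::

    from io import BytesIO

    stream = BytesIO()
    service.get_blob_to_stream('mycontainer', 'big.bin', stream,
                               max_connections=2)
    stream.seek(0)  # rewind before consuming the downloaded bytes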
def get_blob_to_bytes(
self, container_name, blob_name, snapshot=None,
start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Downloads a blob as an array of bytes, with automatic chunking and
progress notifications. Returns an instance of :class:`Blob` with
@ -2136,11 +2130,11 @@ class BaseBlobService(StorageClient):
return blob
def get_blob_to_text(
self, container_name, blob_name, encoding='utf-8', snapshot=None,
start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Downloads a blob as unicode text, with automatic chunking and progress
notifications. Returns an instance of :class:`Blob` with
@ -2231,26 +2225,26 @@ class BaseBlobService(StorageClient):
_validate_not_none('encoding', encoding)
blob = self.get_blob_to_bytes(container_name,
blob_name,
snapshot,
start_range,
end_range,
validate_content,
progress_callback,
max_connections,
lease_id,
if_modified_since,
if_unmodified_since,
if_match,
if_none_match,
timeout)
blob.content = blob.content.decode(encoding)
return blob
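Illustrative sketch, not part of this commit: get_blob_to_text is simply
get_blob_to_bytes plus the decode step above, so blob.content is already
text.

::

    blob = service.get_blob_to_text('mycontainer', 'notes.txt',
                                    encoding='utf-8')
    print(blob.content)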
def get_blob_metadata(
self, container_name, blob_name, snapshot=None, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Returns all user-defined metadata for the specified blob or snapshot.
@ -2307,7 +2301,7 @@ class BaseBlobService(StorageClient):
'If-Unmodified-Since': _datetime_to_utc_string(if_unmodified_since),
'If-Match': _to_str(if_match),
'If-None-Match': _to_str(if_none_match),
}
return self._perform_request(request, _parse_metadata)
@ -2524,15 +2518,15 @@ class BaseBlobService(StorageClient):
'''
_validate_not_none('lease_duration', lease_duration)
if lease_duration is not -1 and \
(lease_duration < 15 or lease_duration > 60):
raise ValueError(_ERROR_INVALID_LEASE_DURATION)
lease = self._lease_blob_impl(container_name,
blob_name,
_LeaseActions.Acquire,
None, # lease_id
lease_duration,
None, # lease_break_period
proposed_lease_id,
if_modified_since,
if_unmodified_since,
@ -2590,9 +2584,9 @@ class BaseBlobService(StorageClient):
blob_name,
_LeaseActions.Renew,
lease_id,
None, # lease_duration
None, # lease_break_period
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
if_match,
@ -2642,17 +2636,17 @@ class BaseBlobService(StorageClient):
_validate_not_none('lease_id', lease_id)
self._lease_blob_impl(container_name,
blob_name,
_LeaseActions.Release,
lease_id,
None, # lease_duration
None, # lease_break_period
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
if_match,
if_none_match,
timeout)
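Illustrative lease lifecycle, not part of this commit (error handling
elided; names are placeholders): acquire a fixed-duration lease, pass its
ID to guarded operations, and release it when done.

::

    # lease_duration must be 15-60 seconds, or -1 for an infinite lease.
    lease_id = service.acquire_blob_lease('mycontainer', 'data.bin',
                                          lease_duration=15)
    try:
        pass  # operations that pass lease_id=lease_id go here
    finally:
        service.release_blob_lease('mycontainer', 'data.bin', lease_id)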
def break_blob_lease(self, container_name, blob_name,
lease_break_period=None,
@ -2717,10 +2711,10 @@ class BaseBlobService(StorageClient):
lease = self._lease_blob_impl(container_name,
blob_name,
_LeaseActions.Break,
None, # lease_id
None, # lease_duration
lease_break_period,
None, # proposed_lease_id
if_modified_since,
if_unmodified_since,
if_match,
@ -2729,12 +2723,12 @@ class BaseBlobService(StorageClient):
return lease['time']
def change_blob_lease(self, container_name, blob_name,
lease_id,
proposed_lease_id,
if_modified_since=None,
if_unmodified_since=None,
if_match=None,
if_none_match=None, timeout=None):
'''
Changes the lease ID of an active lease. A change must include the current
lease ID and a new lease ID.
@ -2773,17 +2767,17 @@ class BaseBlobService(StorageClient):
The timeout parameter is expressed in seconds.
'''
self._lease_blob_impl(container_name,
blob_name,
_LeaseActions.Change,
lease_id,
None, # lease_duration
None, # lease_break_period
proposed_lease_id,
if_modified_since,
if_unmodified_since,
if_match,
if_none_match,
timeout)
def snapshot_blob(self, container_name, blob_name,
metadata=None, if_modified_since=None,
@ -2980,31 +2974,31 @@ class BaseBlobService(StorageClient):
:rtype: :class:`~azure.storage.blob.models.CopyProperties`
'''
return self._copy_blob(container_name, blob_name, copy_source,
metadata,
None,
source_if_modified_since, source_if_unmodified_since,
source_if_match, source_if_none_match,
destination_if_modified_since,
destination_if_unmodified_since,
destination_if_match,
destination_if_none_match,
destination_lease_id,
source_lease_id, timeout,
False)
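Illustrative sketch, not part of this commit: the copy runs asynchronously
on the service side, so inspect the returned CopyProperties and poll
get_blob_properties until the status leaves 'pending'.

::

    source_url = service.make_blob_url('mycontainer', 'src.bin')
    copy = service.copy_blob('mycontainer', 'dst.bin', source_url)
    print(copy.id, copy.status)  # e.g. 'pending' or 'success'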
def _copy_blob(self, container_name, blob_name, copy_source,
metadata=None,
premium_page_blob_tier=None,
source_if_modified_since=None,
source_if_unmodified_since=None,
source_if_match=None, source_if_none_match=None,
destination_if_modified_since=None,
destination_if_unmodified_since=None,
destination_if_match=None,
destination_if_none_match=None,
destination_lease_id=None,
source_lease_id=None, timeout=None,
incremental_copy=False):
'''
See copy_blob for more details. This helper method
allows for standard copies as well as incremental copies which are only supported for page blobs.
@ -3026,10 +3020,10 @@ class BaseBlobService(StorageClient):
# /accountName/blobName
# - Snapshot in root container:
# /accountName/blobName?snapshot=<DateTime>
account, _, source = \
copy_source.partition('/')[2].partition('/')
copy_source = self.protocol + '://' + \
self.primary_endpoint + '/' + source
request = HTTPRequest()
request.method = 'PUT'

Просмотреть файл

@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,8 +11,26 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from io import (
BytesIO
)
from os import (
path,
)
from azure.storage.common._common_conversion import (
_encode_base64,
_to_str,
_int_to_str,
_datetime_to_utc_string,
_get_content_md5,
)
from azure.storage.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
)
from azure.storage.common._error import (
_validate_not_none,
_validate_type_bytes,
_validate_encryption_required,
@ -20,57 +38,39 @@ from .._error import (
_ERROR_VALUE_NEGATIVE,
_ERROR_VALUE_SHOULD_BE_STREAM
)
from azure.storage.common._http import HTTPRequest
from azure.storage.common._serialization import (
_get_request_body,
_get_data_bytes_only,
_get_data_bytes_or_stream_only,
_add_metadata_headers,
)
from azure.storage.common._serialization import (
_len_plus
)
from ._deserialization import (
_convert_xml_to_block_list,
_parse_base_properties,
)
from ._encryption import (
_encrypt_blob,
_generate_blob_encryption_data,
)
from ._serialization import (
_convert_block_list_to_xml,
_get_path,
)
from ._upload_chunking import (
_BlockBlobChunkUploader,
_upload_blob_chunks,
_upload_blob_substream_blocks,
)
from .baseblobservice import BaseBlobService
from .models import (
_BlobTypes,
)
class BlockBlobService(BaseBlobService):
'''
Block blobs let you upload large blobs efficiently. Block blobs are comprised
@ -192,10 +192,10 @@ class BlockBlobService(BaseBlobService):
)
def put_block_list(
self, container_name, blob_name, block_list, content_settings=None,
metadata=None, validate_content=False, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None,
timeout=None):
'''
Writes a blob by specifying the list of block IDs that make up the blob.
In order to be written as part of a blob, a block must have been
@ -259,19 +259,19 @@ class BlockBlobService(BaseBlobService):
_validate_encryption_unsupported(self.require_encryption, self.key_encryption_key)
return self._put_block_list(
container_name,
blob_name,
block_list,
content_settings=content_settings,
metadata=metadata,
validate_content=validate_content,
lease_id=lease_id,
if_modified_since=if_modified_since,
if_unmodified_since=if_unmodified_since,
if_match=if_match,
if_none_match=if_none_match,
timeout=timeout
)
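Illustrative two-phase upload, not part of this commit (placeholder names):
stage blocks with put_block, then commit them in order with put_block_list.

::

    from azure.storage.blob.models import BlobBlock

    service.put_block('mycontainer', 'staged.bin', b'first half ', '000')
    service.put_block('mycontainer', 'staged.bin', b'second half', '001')
    # Only the blocks named in the committed list become part of the blob.
    service.put_block_list('mycontainer', 'staged.bin',
                           [BlobBlock(id='000'), BlobBlock(id='001')])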
def get_block_list(self, container_name, blob_name, snapshot=None,
block_list_type=None, lease_id=None, timeout=None):
@ -320,13 +320,13 @@ class BlockBlobService(BaseBlobService):
return self._perform_request(request, _convert_xml_to_block_list)
# ----Convenience APIs-----------------------------------------------------
def create_blob_from_path(
self, container_name, blob_name, file_path, content_settings=None,
metadata=None, validate_content=False, progress_callback=None,
max_connections=2, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None):
'''
Creates a new blob from a file path, or updates the content of an
existing blob, with automatic chunking and progress notifications.
@ -394,28 +394,28 @@ class BlockBlobService(BaseBlobService):
count = path.getsize(file_path)
with open(file_path, 'rb') as stream:
return self.create_blob_from_stream(
container_name=container_name,
blob_name=blob_name,
stream=stream,
count=count,
content_settings=content_settings,
metadata=metadata,
validate_content=validate_content,
lease_id=lease_id,
progress_callback=progress_callback,
max_connections=max_connections,
if_modified_since=if_modified_since,
if_unmodified_since=if_unmodified_since,
if_match=if_match,
if_none_match=if_none_match,
timeout=timeout)
def create_blob_from_stream(
self, container_name, blob_name, stream, count=None,
content_settings=None, metadata=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None, use_byte_buffer=False):
'''
Creates a new blob from a file/stream, or updates the content of
an existing blob, with automatic chunking and progress
@ -591,11 +591,11 @@ class BlockBlobService(BaseBlobService):
)
def create_blob_from_bytes(
self, container_name, blob_name, blob, index=0, count=None,
content_settings=None, metadata=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Creates a new blob from an array of bytes, or updates the content
of an existing blob, with automatic chunking and progress
@ -697,11 +697,11 @@ class BlockBlobService(BaseBlobService):
)
def create_blob_from_text(
self, container_name, blob_name, text, encoding='utf-8',
content_settings=None, metadata=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None):
'''
Creates a new blob from str/unicode, or updates the content of an
existing blob, with automatic chunking and progress notifications.
@ -773,22 +773,22 @@ class BlockBlobService(BaseBlobService):
text = text.encode(encoding)
return self.create_blob_from_bytes(
container_name=container_name,
blob_name=blob_name,
blob=text,
index=0,
count=len(text),
content_settings=content_settings,
metadata=metadata,
validate_content=validate_content,
lease_id=lease_id,
progress_callback=progress_callback,
max_connections=max_connections,
if_modified_since=if_modified_since,
if_unmodified_since=if_unmodified_since,
if_match=if_match,
if_none_match=if_none_match,
timeout=timeout)
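Illustrative sketch, not part of this commit: as the code above shows,
create_blob_from_text just encodes the string and delegates to
create_blob_from_bytes.

::

    service.create_blob_from_text('mycontainer', 'hello.txt',
                                  u'Hello, world!', encoding='utf-8')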
def set_standard_blob_tier(
self, container_name, blob_name, standard_blob_tier, timeout=None):
@ -825,10 +825,10 @@ class BlockBlobService(BaseBlobService):
self._perform_request(request)
# -----Helper methods------------------------------------
def _put_blob(self, container_name, blob_name, blob, content_settings=None,
metadata=None, validate_content=False, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None,
timeout=None):
'''
Creates a blob or updates an existing blob.
@ -916,7 +916,7 @@ class BlockBlobService(BaseBlobService):
return self._perform_request(request, _parse_base_properties)
def _put_block(self, container_name, blob_name, block, block_id,
validate_content=False, lease_id=None, timeout=None):
'''
See put_block for more details. This helper method
allows for encryption or other such special behavior because
@ -924,7 +924,6 @@ class BlockBlobService(BaseBlobService):
prohibited in the public version of this function.
'''
_validate_not_none('container_name', container_name)
_validate_not_none('blob_name', blob_name)
_validate_not_none('block', block)
@ -959,10 +958,10 @@ class BlockBlobService(BaseBlobService):
self._perform_request(request)
def _put_block_list(
self, container_name, blob_name, block_list, content_settings=None,
metadata=None, validate_content=False, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None,
timeout=None, encryption_data=None):
'''
See put_block_list for more details. This helper method
allows for encryption or other such special behavior because
@ -1005,4 +1004,4 @@ class BlockBlobService(BaseBlobService):
if encryption_data is not None:
request.headers['x-ms-meta-encryptiondata'] = encryption_data
return self._perform_request(request, _parse_base_properties)

Просмотреть файл

@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,10 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.storage.common._common_conversion import _to_str
class Container(object):
'''
Blob container class.
@ -37,7 +38,6 @@ class Container(object):
class ContainerProperties(object):
'''
Blob container's properties class.
@ -58,7 +58,6 @@ class ContainerProperties(object):
class Blob(object):
'''
Blob class.
@ -76,6 +75,7 @@ class Blob(object):
:ivar metadata:
Name-value pairs associated with the blob as metadata.
'''
def __init__(self, name=None, snapshot=None, content=None, props=None, metadata=None):
self.name = name
self.snapshot = snapshot
@ -85,7 +85,6 @@ class Blob(object):
class BlobProperties(object):
'''
Blob Properties
@ -133,7 +132,6 @@ class BlobProperties(object):
class ContentSettings(object):
'''
Used to store the content settings of a blob.
@ -161,10 +159,9 @@ class ContentSettings(object):
'''
def __init__(
self, content_type=None, content_encoding=None,
content_language=None, content_disposition=None,
cache_control=None, content_md5=None):
self.content_type = content_type
self.content_encoding = content_encoding
self.content_language = content_language
@ -233,7 +230,6 @@ class CopyProperties(object):
class LeaseProperties(object):
'''
Blob Lease Properties.
@ -280,7 +276,6 @@ class BlobBlockState(object):
class BlobBlock(object):
'''
BlockBlob Block class.
@ -302,7 +297,6 @@ class BlobBlock(object):
class BlobBlockList(object):
'''
Blob Block List class.
@ -318,8 +312,8 @@ class BlobBlockList(object):
self.committed_blocks = list()
self.uncommitted_blocks = list()
class PageRange(object):
'''
Page Range for page blob.
@ -337,8 +331,8 @@ class PageRange(object):
self.end = end
self.is_cleared = is_cleared
class ResourceProperties(object):
'''
Base response for a resource request.
@ -353,8 +347,8 @@ class ResourceProperties(object):
self.last_modified = None
self.etag = None
class AppendBlockProperties(ResourceProperties):
'''
Response for an append block request.
@ -371,7 +365,6 @@ class AppendBlockProperties(ResourceProperties):
class PageBlobProperties(ResourceProperties):
'''
Response for a page request.
@ -409,6 +402,7 @@ class PublicAccess(object):
within the storage account.
'''
class DeleteSnapshot(object):
'''
Required if the blob has associated snapshots. Specifies how to handle the snapshots.
@ -424,6 +418,7 @@ class DeleteSnapshot(object):
Delete only the blob's snapshots and not the blob itself.
'''
class BlockListType(object):
'''
Specifies whether to return the list of committed blocks, the list of uncommitted
@ -477,6 +472,7 @@ class _LeaseActions(object):
Renew = 'renew'
'''Renew the lease.'''
class _BlobTypes(object):
'''Blob type options.'''
@ -489,8 +485,8 @@ class _BlobTypes(object):
PageBlob = 'PageBlob'
'''Page blob type.'''
class Include(object):
'''
Specifies the datasets to include in the blob list response.
@ -506,7 +502,7 @@ class Include(object):
been committed using Put Block List, be included in the response.
'''
def __init__(self, snapshots=False, metadata=False, uncommitted_blobs=False,
copy=False, _str=None):
'''
:param bool snapshots:
@ -529,20 +525,21 @@ class Include(object):
self.metadata = metadata or ('metadata' in components)
self.uncommitted_blobs = uncommitted_blobs or ('uncommittedblobs' in components)
self.copy = copy or ('copy' in components)
def __or__(self, other):
return Include(_str=str(self) + str(other))
def __add__(self, other):
return Include(_str=str(self) + str(other))
def __str__(self):
include = (('snapshots,' if self.snapshots else '') +
('metadata,' if self.metadata else '') +
('uncommittedblobs,' if self.uncommitted_blobs else '') +
('copy,' if self.copy else ''))
return include.rstrip(',')
Include.COPY = Include(copy=True)
Include.METADATA = Include(metadata=True)
Include.SNAPSHOTS = Include(snapshots=True)
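Illustrative sketch, not part of this commit (reusing the placeholder
service client from earlier examples): the flags select the extra datasets
returned by list_blobs.

::

    from azure.storage.blob.models import Include

    # Ask for per-blob metadata and for snapshots in the listing.
    include = Include(snapshots=True, metadata=True)
    for blob in service.list_blobs('mycontainer', include=include):
        print(blob.name, blob.snapshot, blob.metadata)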
@ -550,7 +547,6 @@ Include.UNCOMMITTED_BLOBS = Include(uncommitted_blobs=True)
class BlobPermissions(object):
'''
BlobPermissions class to be used with
:func:`~azure.storage.blob.baseblobservice.BaseBlobService.generate_blob_shared_access_signature` API.
@ -569,7 +565,7 @@ class BlobPermissions(object):
copy operation within the same account.
'''
def __init__(self, read=False, add=False, create=False, write=False,
delete=False, _str=None):
'''
:param bool read:
@ -595,13 +591,13 @@ class BlobPermissions(object):
self.create = create or ('c' in _str)
self.write = write or ('w' in _str)
self.delete = delete or ('d' in _str)
def __or__(self, other):
return BlobPermissions(_str=str(self) + str(other))
def __add__(self, other):
return BlobPermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('a' if self.add else '') +
@ -609,6 +605,7 @@ class BlobPermissions(object):
('w' if self.write else '') +
('d' if self.delete else ''))
BlobPermissions.ADD = BlobPermissions(add=True)
BlobPermissions.CREATE = BlobPermissions(create=True)
BlobPermissions.DELETE = BlobPermissions(delete=True)
@ -617,7 +614,6 @@ BlobPermissions.WRITE = BlobPermissions(write=True)
class ContainerPermissions(object):
'''
ContainerPermissions class to be used with :func:`~azure.storage.blob.baseblobservice.BaseBlobService.generate_container_shared_access_signature`
API and for the AccessPolicies used with :func:`~azure.storage.blob.baseblobservice.BaseBlobService.set_container_acl`.
@ -639,7 +635,7 @@ class ContainerPermissions(object):
a container SAS. Use an account SAS instead.
'''
def __init__(self, read=False, write=False, delete=False, list=False,
_str=None):
'''
:param bool read:
@ -666,24 +662,26 @@ class ContainerPermissions(object):
self.write = write or ('w' in _str)
self.delete = delete or ('d' in _str)
self.list = list or ('l' in _str)
def __or__(self, other):
return ContainerPermissions(_str=str(self) + str(other))
def __add__(self, other):
return ContainerPermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('w' if self.write else '') +
('d' if self.delete else '') +
('l' if self.list else ''))
ContainerPermissions.DELETE = ContainerPermissions(delete=True)
ContainerPermissions.LIST = ContainerPermissions(list=True)
ContainerPermissions.READ = ContainerPermissions(read=True)
ContainerPermissions.WRITE = ContainerPermissions(write=True)
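Illustrative sketch, not part of this commit: the constants compose with
the | operator and are typically paired with an expiry when generating a
container-level SAS token.

::

    from datetime import datetime, timedelta
    from azure.storage.blob.models import ContainerPermissions

    sas_token = service.generate_container_shared_access_signature(
        'mycontainer',
        permission=ContainerPermissions.READ | ContainerPermissions.LIST,
        expiry=datetime.utcnow() + timedelta(hours=1))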
class PremiumPageBlobTier(object):
'''
Specifies the page blob tier to set the blob to. This is only applicable to page
@ -716,6 +714,7 @@ class PremiumPageBlobTier(object):
P60 = 'P60'
''' P60 Tier '''
class StandardBlobTier(object):
'''
Specifies the blob tier to set the blob to. This is only applicable for block blobs on standard storage accounts.
@ -728,4 +727,4 @@ class StandardBlobTier(object):
''' Cool '''
Hot = 'Hot'
''' Hot '''

Просмотреть файл

@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,52 +11,54 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from os import path
from azure.storage.common._common_conversion import (
_int_to_str,
_to_str,
_datetime_to_utc_string,
_get_content_md5,
)
from azure.storage.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
)
from azure.storage.common._error import (
_validate_not_none,
_validate_type_bytes,
_validate_encryption_required,
_validate_encryption_unsupported,
_ERROR_VALUE_NEGATIVE,
)
from azure.storage.common._http import HTTPRequest
from azure.storage.common._serialization import (
_get_data_bytes_only,
_add_metadata_headers,
)
from ._deserialization import (
_convert_xml_to_page_ranges,
_parse_page_properties,
_parse_base_properties,
)
from ._encryption import _generate_blob_encryption_data
from ._error import (
_ERROR_PAGE_BLOB_SIZE_ALIGNMENT,
)
from ._serialization import (
_get_path,
_validate_and_format_range_headers,
)
from ._upload_chunking import (
_PageBlobChunkUploader,
_upload_blob_chunks,
)
from .baseblobservice import BaseBlobService
from .models import (
_BlobTypes,
ResourceProperties)
if sys.version_info >= (3,):
from io import BytesIO
else:
@ -85,7 +87,7 @@ class PageBlobService(BaseBlobService):
MAX_PAGE_SIZE = 4 * 1024 * 1024
def __init__(self, account_name=None, account_key=None, sas_token=None,
is_emulated=False, protocol=DEFAULT_PROTOCOL, endpoint_suffix=SERVICE_HOST_BASE,
custom_domain=None, request_session=None, connection_string=None, socket_timeout=None):
'''
@ -129,13 +131,13 @@ class PageBlobService(BaseBlobService):
'''
self.blob_type = _BlobTypes.PageBlob
super(PageBlobService, self).__init__(
account_name, account_key, sas_token, is_emulated, protocol, endpoint_suffix,
custom_domain, request_session, connection_string, socket_timeout)
def create_blob(
self, container_name, blob_name, content_length, content_settings=None,
sequence_number=None, metadata=None, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None, premium_page_blob_tier=None):
'''
Creates a new Page Blob.
@ -192,28 +194,28 @@ class PageBlobService(BaseBlobService):
:return: ETag and last modified properties for the new Page Blob
:rtype: :class:`~azure.storage.blob.models.ResourceProperties`
'''
_validate_encryption_unsupported(self.require_encryption, self.key_encryption_key)
return self._create_blob(
container_name,
blob_name,
content_length,
content_settings=content_settings,
sequence_number=sequence_number,
metadata=metadata,
lease_id=lease_id,
premium_page_blob_tier=premium_page_blob_tier,
if_modified_since=if_modified_since,
if_unmodified_since=if_unmodified_since,
if_match=if_match,
if_none_match=if_none_match,
timeout=timeout
)
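Illustrative sketch, not part of this commit (placeholder credentials):
page blobs are created at a fixed size, and content_length must be a
multiple of 512 bytes.

::

    from azure.storage.blob import PageBlobService

    page_service = PageBlobService(account_name='myaccount',
                                   account_key='mykey')
    page_service.create_blob('mycontainer', 'disk.bin',
                             content_length=512 * 1024)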
def incremental_copy_blob(self, container_name, blob_name, copy_source,
metadata=None, destination_if_modified_since=None, destination_if_unmodified_since=None,
destination_if_match=None, destination_if_none_match=None, destination_lease_id=None,
source_lease_id=None, timeout=None):
'''
Copies an incremental copy of a blob asynchronously. This operation returns a copy operation
properties object, including a copy ID you can use to check or abort the
@ -282,23 +284,23 @@ class PageBlobService(BaseBlobService):
:rtype: :class:`~azure.storage.blob.models.CopyProperties`
'''
return self._copy_blob(container_name, blob_name, copy_source,
metadata,
source_if_modified_since=None, source_if_unmodified_since=None,
source_if_match=None, source_if_none_match=None,
destination_if_modified_since=destination_if_modified_since,
destination_if_unmodified_since=destination_if_unmodified_since,
destination_if_match=destination_if_match,
destination_if_none_match=destination_if_none_match,
destination_lease_id=destination_lease_id,
source_lease_id=source_lease_id, timeout=timeout,
incremental_copy=True)
def update_page(
self, container_name, blob_name, page, start_range, end_range,
validate_content=False, lease_id=None, if_sequence_number_lte=None,
if_sequence_number_lt=None, if_sequence_number_eq=None,
if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None):
'''
Updates a range of pages.
@ -364,31 +366,31 @@ class PageBlobService(BaseBlobService):
'''
_validate_encryption_unsupported(self.require_encryption, self.key_encryption_key)
return self._update_page(
container_name,
blob_name,
page,
start_range,
end_range,
validate_content=validate_content,
lease_id=lease_id,
if_sequence_number_lte=if_sequence_number_lte,
if_sequence_number_lt=if_sequence_number_lt,
if_sequence_number_eq=if_sequence_number_eq,
if_modified_since=if_modified_since,
if_unmodified_since=if_unmodified_since,
if_match=if_match,
if_none_match=if_none_match,
timeout=timeout
)
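Illustrative sketch, not part of this commit (reusing page_service from the
creation example above): a page range must start on a 512-byte boundary,
end one byte before the next boundary, and match the payload length.

::

    data = b'\x00' * 512
    page_service.update_page('mycontainer', 'disk.bin', data,
                             start_range=0, end_range=511)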
def clear_page(
self, container_name, blob_name, start_range, end_range,
lease_id=None, if_sequence_number_lte=None,
if_sequence_number_lt=None, if_sequence_number_eq=None,
if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None):
'''
Clears a range of pages.
@ -474,9 +476,9 @@ class PageBlobService(BaseBlobService):
return self._perform_request(request, _parse_page_properties)
def get_page_ranges(
self, container_name, blob_name, snapshot=None, start_range=None,
end_range=None, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None):
'''
Returns the list of valid page ranges for a Page Blob or snapshot
of a page blob.
@ -561,9 +563,9 @@ class PageBlobService(BaseBlobService):
return self._perform_request(request, _convert_xml_to_page_ranges)
def get_page_ranges_diff(
self, container_name, blob_name, previous_snapshot, snapshot=None,
start_range=None, end_range=None, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None):
'''
The response will include only the pages that are different between either a
recent snapshot or the current blob and a previous snapshot, including pages
@ -655,10 +657,10 @@ class PageBlobService(BaseBlobService):
return self._perform_request(request, _convert_xml_to_page_ranges)
def set_sequence_number(
self, container_name, blob_name, sequence_number_action, sequence_number=None,
lease_id=None, if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None):
'''
Sets the blob sequence number.
@ -725,10 +727,10 @@ class PageBlobService(BaseBlobService):
return self._perform_request(request, _parse_page_properties)
def resize_blob(
self, container_name, blob_name, content_length,
lease_id=None, if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None):
'''
Resizes a page blob to the specified size. If the specified value is less
than the current size of the blob, then all pages above the specified value
@ -790,13 +792,13 @@ class PageBlobService(BaseBlobService):
return self._perform_request(request, _parse_page_properties)
# ----Convenience APIs-----------------------------------------------------
def create_blob_from_path(
self, container_name, blob_name, file_path, content_settings=None,
metadata=None, validate_content=False, progress_callback=None, max_connections=2,
lease_id=None, if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None, premium_page_blob_tier=None):
'''
Creates a new blob from a file path, or updates the content of an
existing blob, with automatic chunking and progress notifications.
@ -884,13 +886,12 @@ class PageBlobService(BaseBlobService):
timeout=timeout,
premium_page_blob_tier=premium_page_blob_tier)
def create_blob_from_stream(
self, container_name, blob_name, stream, count, content_settings=None,
metadata=None, validate_content=False, progress_callback=None,
max_connections=2, lease_id=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None,
premium_page_blob_tier=None):
'''
Creates a new blob from a file/stream, or updates the content of an
existing blob, with automatic chunking and progress notifications.
@ -1019,11 +1020,11 @@ class PageBlobService(BaseBlobService):
return resource_properties
def create_blob_from_bytes(
self, container_name, blob_name, blob, index=0, count=None,
content_settings=None, metadata=None, validate_content=False,
progress_callback=None, max_connections=2, lease_id=None,
if_modified_since=None, if_unmodified_since=None, if_match=None,
if_none_match=None, timeout=None, premium_page_blob_tier=None):
'''
Creates a new blob from an array of bytes, or updates the content
of an existing blob, with automatic chunking and progress
@ -1126,8 +1127,8 @@ class PageBlobService(BaseBlobService):
premium_page_blob_tier=premium_page_blob_tier)
def set_premium_page_blob_tier(
self, container_name, blob_name, premium_page_blob_tier,
timeout=None):
'''
Sets the page blob tiers on the blob. This API is only supported for page blobs on premium accounts.
@ -1291,24 +1292,24 @@ class PageBlobService(BaseBlobService):
:rtype: :class:`~azure.storage.blob.models.CopyProperties`
'''
return self._copy_blob(container_name, blob_name, copy_source,
metadata, premium_page_blob_tier,
source_if_modified_since, source_if_unmodified_since,
source_if_match, source_if_none_match,
destination_if_modified_since,
destination_if_unmodified_since,
destination_if_match,
destination_if_none_match,
destination_lease_id,
source_lease_id, timeout,
False)
# -----Helper methods-----------------------------------------------------
def _create_blob(
self, container_name, blob_name, content_length, content_settings=None,
sequence_number=None, metadata=None, lease_id=None, premium_page_blob_tier=None, if_modified_since=None,
if_unmodified_since=None, if_match=None, if_none_match=None, timeout=None,
encryption_data=None):
'''
See create_blob for more details. This helper method
allows for encryption or other such special behavior because
@ -1349,11 +1350,11 @@ class PageBlobService(BaseBlobService):
return self._perform_request(request, _parse_base_properties)
def _update_page(
self, container_name, blob_name, page, start_range, end_range,
validate_content=False, lease_id=None, if_sequence_number_lte=None,
if_sequence_number_lt=None, if_sequence_number_eq=None,
if_modified_since=None, if_unmodified_since=None,
if_match=None, if_none_match=None, timeout=None):
'''
See update_page for more details. This helper method
allows for encryption or other such special behavior because
@ -1391,4 +1392,4 @@ class PageBlobService(BaseBlobService):
computed_md5 = _get_content_md5(request.body)
request.headers['Content-MD5'] = _to_str(computed_md5)
return self._perform_request(request, _parse_page_properties)

Просмотреть файл

@ -0,0 +1,3 @@
[bdist_wheel]
universal=1
azure-namespace-package=azure-storage-nspkg

Просмотреть файл

@ -0,0 +1,92 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from setuptools import setup, find_packages
try:
from azure_bdist_wheel import cmdclass
except ImportError:
from distutils import log as logger
logger.warn("Wheel is not available, disabling bdist_wheel hook")
cmdclass = {}
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
import azure
try:
ver = azure.__version__
raise Exception(
'This package is incompatible with azure=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure".'
)
except AttributeError:
pass
except ImportError:
pass
# azure-storage v0.36.0 and prior are not compatible with this package
try:
import azure.storage
try:
ver = azure.storage.__version__
raise Exception(
'This package is incompatible with azure-storage=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure-storage".'
)
except AttributeError:
pass
except ImportError:
pass
setup(
name='azure-storage-blob',
version='0.36.0',
description='Microsoft Azure Storage Blob Client Library for Python',
long_description=open('README.rst', 'r').read(),
license='Apache License 2.0',
author='Microsoft Corporation',
author_email='ascl@microsoft.com',
url='https://github.com/Azure/azure-storage-python',
classifiers=[
'Development Status :: 4 - Beta',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: Apache Software License',
],
zip_safe=False,
packages=find_packages(),
install_requires=[
'azure-common>=1.1.5',
'cryptography',
'python-dateutil',
'requests',
'azure-storage-common'
] + (['futures'] if sys.version_info < (3, 0) else []),
cmdclass=cmdclass
)

Просмотреть файл

@ -0,0 +1,3 @@
# Breaking Changes azure-storage-common
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.

Просмотреть файл

@ -0,0 +1,3 @@
# Change Log azure-storage-common
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.

Просмотреть файл

@ -0,0 +1,2 @@
include *.rst
include azure_bdist_wheel.py

Просмотреть файл

@ -0,0 +1,183 @@
Microsoft Azure Storage SDK for Python
======================================
This project provides a client library in Python that makes it easy to
consume Microsoft Azure Storage services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ Page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.
Compatibility
=============
**IMPORTANT**: If you have an earlier version of the azure package
(version < 1.0), you should uninstall it before installing this package.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure==0.11.0 (or any version below 1.0), uninstall it first, then install it again:
.. code:: shell
pip uninstall azure
pip install azure
If you are upgrading from a version older than 0.30.0, see the upgrade doc, the
usage samples in the samples directory, and the ChangeLog and BreakingChanges.
**IMPORTANT**: If you have an earlier version of the azure-storage package
(version <= 0.36.0), you should uninstall it before installing the new packages.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure-storage==0.36.0 (or any version below 0.36.0), uninstall it first:
.. code:: shell
pip uninstall azure-storage
Features
========
- Blob
- Create/Read/Update/Delete Containers
- Create/Read/Update/Delete Blobs
- Advanced Blob Operations
- Queue
- Create/Delete Queues
- Insert/Peek Queue Messages
- Advanced Queue Operations
- Table
- Create/Read/Update/Delete Tables
- Create/Read/Update/Delete Entities
- Batch operations
- Advanced Table Operations
- Files
- Create/Update/Delete Shares
- Create/Update/Delete Directories
- Create/Read/Update/Delete Files
- Advanced File Operations
Getting Started
===============
Download
--------
The Azure Storage SDK for Python is composed of 4 packages, each corresponding to a different service:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
Note: prior to and including version 0.36.0, a single package (azure-storage) contained all services. It is no longer supported; install the four packages listed above individually instead.
Option 1: Via PyPi
~~~~~~~~~~~~~~~~~~
To install via the Python Package Index (PyPI), type:
::
pip install azure-storage-blob
pip install azure-storage-file
pip install azure-storage-queue
pip install azure-storage-table
Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~
To get the source code of the SDK via git just type:
::
git clone git://github.com/Azure/azure-storage-python.git
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~
Download a zip of the code via GitHub or PyPi. Then, type:
::
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage
-----
To use this SDK to call Microsoft Azure storage services, you need to
first `create an account`_.
Code Sample
-----------
See the samples directory for blob, queue, table, and file usage samples.
Need Help?
==========
Be sure to check out the Microsoft Azure `Developer Forums on MSDN`_ or
the `Developer Forums on Stack Overflow`_ if you have trouble with the
provided code.
Contribute Code or Provide Feedback
===================================
If you would like to become an active contributor to this project, please
follow the instructions provided in `Azure Projects Contribution
Guidelines`_. You can find more details for contributing in the `CONTRIBUTING.md doc`_.
If you encounter any bugs with the library, please file an issue in the
`Issues`_ section of the project.
Learn More
==========
- `Python Developer Center`_
- `Azure Storage Service`_
- `Azure Storage Team Blog`_
- `API Reference`_
.. _Python Developer Center: http://azure.microsoft.com/en-us/develop/python/
.. _API Reference: https://azure-storage.readthedocs.io/en/latest/
.. _here: https://github.com/Azure/azure-storage-python/archive/master.zip
.. _create an account: https://account.windowsazure.com/signup
.. _Developer Forums on MSDN: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=windowsazuredata
.. _Developer Forums on Stack Overflow: http://stackoverflow.com/questions/tagged/azure+windows-azure-storage
.. _Azure Projects Contribution Guidelines: http://azure.github.io/guidelines.html
.. _Issues: https://github.com/Azure/azure-storage-python/issues
.. _Azure Storage Service: http://azure.microsoft.com/en-us/documentation/services/storage/
.. _Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
.. _CONTRIBUTING.md doc: CONTRIBUTING.md

View file

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View file

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,13 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from ._constants import (
__author__,
__version__,
X_MS_VERSION,
)
from .cloudstorageaccount import CloudStorageAccount
from .models import (
RetentionPolicy,
Logging,
@ -34,14 +34,11 @@ from .models import (
LocationMode,
RetryContext,
)
from .retry import (
ExponentialRetry,
LinearRetry,
no_retry,
)
from .cloudstorageaccount import CloudStorageAccount
from .sharedaccesssignature import (
SharedAccessSignature,
)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from ._common_conversion import (
_sign_string,
)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,14 +11,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import base64
import hashlib
import hmac
import sys
from io import (SEEK_SET)
from dateutil.tz import tzutc
from io import (IOBase, SEEK_SET)
from ._error import (
_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM,
@ -37,12 +38,15 @@ if sys.version_info < (3,):
else:
_str = str
def _to_str(value):
return _str(value) if value is not None else None
def _int_to_str(value):
return str(int(value)) if value is not None else None
def _bool_to_str(value):
if value is None:
return None
@ -55,9 +59,11 @@ def _bool_to_str(value):
return str(value)
def _to_utc_datetime(value):
return value.strftime('%Y-%m-%dT%H:%M:%SZ')
def _datetime_to_utc_string(value):
# Azure expects the date value passed in to be UTC.
# Azure will always return values as UTC.
@ -70,6 +76,7 @@ def _datetime_to_utc_string(value):
return value.strftime('%a, %d %b %Y %H:%M:%S GMT')
def _encode_base64(data):
if isinstance(data, _unicode_type):
data = data.encode('utf-8')
@ -101,6 +108,7 @@ def _sign_string(key, string_to_sign, key_is_base64=True):
encoded_digest = _encode_base64(digest)
return encoded_digest
def _get_content_md5(data):
md5 = hashlib.md5()
if isinstance(data, bytes):
@ -122,5 +130,6 @@ def _get_content_md5(data):
return base64.b64encode(md5.digest()).decode('utf-8')
def _lower(text):
return text.lower()

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,9 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
import os
# --------------------------------------------------------------------------
import sys
if sys.version_info >= (3,):
from urllib.parse import urlparse
else:
@ -33,10 +33,10 @@ from ._error import (
)
_EMULATOR_ENDPOINTS = {
'blob': DEV_BLOB_HOST,
'queue': DEV_QUEUE_HOST,
'table': DEV_TABLE_HOST,
'file': '',
'blob': DEV_BLOB_HOST,
'queue': DEV_QUEUE_HOST,
'table': DEV_TABLE_HOST,
'file': '',
}
_CONNECTION_ENDPOINTS = {
@ -113,21 +113,21 @@ class _ServiceParameters(object):
@staticmethod
def get_service_parameters(service, account_name=None, account_key=None, sas_token=None, is_emulated=None,
protocol=None, endpoint_suffix=None, custom_domain=None, request_session=None,
connection_string=None, socket_timeout=None):
protocol=None, endpoint_suffix=None, custom_domain=None, request_session=None,
connection_string=None, socket_timeout=None):
if connection_string:
params = _ServiceParameters._from_connection_string(connection_string, service)
elif is_emulated:
params = _ServiceParameters(service, is_emulated=True)
elif account_name:
params = _ServiceParameters(service,
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
custom_domain=custom_domain)
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
custom_domain=custom_domain)
else:
raise ValueError(_ERROR_STORAGE_MISSING_INFO)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import platform
__author__ = 'Microsoft Corp. <ptvshelp@microsoft.com>'
@ -21,7 +21,9 @@ __version__ = '0.36.0'
X_MS_VERSION = '2017-04-17'
# UserAgent string sample: 'Azure-Storage/0.32.0 (Python CPython 3.4.2; Windows 8)'
USER_AGENT_STRING = 'Azure-Storage/{} (Python {} {}; {} {})'.format(__version__, platform.python_implementation(), platform.python_version(), platform.system(), platform.release())
USER_AGENT_STRING = 'Azure-Storage/{} (Python {} {}; {} {})'.format(__version__, platform.python_implementation(),
platform.python_version(), platform.system(),
platform.release())
# Live ServiceClient URLs
SERVICE_HOST_BASE = 'core.windows.net'
@ -39,5 +41,5 @@ DEV_ACCOUNT_KEY = 'Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6t
# Socket timeout in seconds
DEFAULT_SOCKET_TIMEOUT = 20
#Encryption constants
_ENCRYPTION_PROTOCOL_V1 = '1.0'
# Encryption constants
_ENCRYPTION_PROTOCOL_V1 = '1.0'

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,9 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from dateutil import parser
from ._common_conversion import _to_str
try:
from xml.etree import cElementTree as ETree
except ImportError:
@ -25,21 +27,24 @@ from .models import (
Metrics,
CorsRule,
AccessPolicy,
_HeaderDict,
_dict,
GeoReplication,
ServiceStats,
)
def _int_to_str(value):
return value if value is None else int(value)
def _bool(value):
return value.lower() == 'true'
def _to_upper_str(value):
return _to_str(value).upper() if value is not None else None
def _get_download_size(start_range, end_range, resource_size):
if start_range is not None:
end_range = end_range if end_range else (resource_size if resource_size else None)
@ -50,6 +55,7 @@ def _get_download_size(start_range, end_range, resource_size):
else:
return resource_size
GET_PROPERTIES_ATTRIBUTE_MAP = {
'last-modified': (None, 'last_modified', parser.parse),
'etag': (None, 'etag', _to_str),
@ -81,6 +87,7 @@ GET_PROPERTIES_ATTRIBUTE_MAP = {
'x-ms-copy-status-description': ('copy', 'status_description', _to_str),
}
def _parse_metadata(response):
'''
Extracts out resource metadata information.
@ -96,6 +103,7 @@ def _parse_metadata(response):
return metadata
def _parse_properties(response, result_class):
'''
Extracts out resource properties and metadata information.
@ -119,10 +127,11 @@ def _parse_properties(response, result_class):
props.blob_tier = _to_upper_str(props.blob_tier)
return props
def _parse_length_from_content_range(content_range):
'''
Parses the blob length from the content range header: bytes 1-3/65537
'''
'''
if content_range is None:
return None
@ -131,6 +140,7 @@ def _parse_length_from_content_range(content_range):
# Finally, convert to an int: 65537
return int(content_range.split(' ', 1)[1].split('/', 1)[1])
def _convert_xml_to_signed_identifiers(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -173,6 +183,7 @@ def _convert_xml_to_signed_identifiers(response):
return signed_identifiers
def _convert_xml_to_service_stats(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -199,6 +210,7 @@ def _convert_xml_to_service_stats(response):
service_stats.geo_replication = geo_replication
return service_stats
def _convert_xml_to_service_properties(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -247,7 +259,7 @@ def _convert_xml_to_service_properties(response):
service_properties_element = ETree.fromstring(response.body)
service_properties = ServiceProperties()
# Logging
logging = service_properties_element.find('Logging')
if logging is not None:
@ -257,8 +269,8 @@ def _convert_xml_to_service_properties(response):
service_properties.logging.read = _bool(logging.find('Read').text)
service_properties.logging.write = _bool(logging.find('Write').text)
_convert_xml_to_retention_policy(logging.find('RetentionPolicy'),
service_properties.logging.retention_policy)
_convert_xml_to_retention_policy(logging.find('RetentionPolicy'),
service_properties.logging.retention_policy)
# HourMetrics
hour_metrics_element = service_properties_element.find('HourMetrics')
if hour_metrics_element is not None:
@ -336,6 +348,6 @@ def _convert_xml_to_retention_policy(xml, retention_policy):
retention_policy.enabled = _bool(xml.find('Enabled').text)
# Days
days_element = xml.find('Days')
days_element = xml.find('Days')
if days_element is not None:
retention_policy.days = int(days_element.text)
retention_policy.days = int(days_element.text)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,27 +11,30 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
from ._common_conversion import(
# --------------------------------------------------------------------------
from collections import OrderedDict
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.ciphers.modes import CBC
from ._common_conversion import (
_encode_base64,
_decode_base64_to_bytes,
)
from ._constants import(
from ._constants import (
_ENCRYPTION_PROTOCOL_V1,
__version__,
)
from ._error import(
from ._error import (
_ERROR_UNSUPPORTED_ENCRYPTION_VERSION,
_validate_not_none,
_validate_encryption_protocol_version,
_validate_key_encryption_key_unwrap,
_validate_kek_id,
)
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.ciphers.modes import CBC
from cryptography.hazmat.primitives.ciphers import Cipher
from collections import OrderedDict
class _EncryptionAlgorithm(object):
'''
@ -39,6 +42,7 @@ class _EncryptionAlgorithm(object):
'''
AES_CBC_256 = 'AES_CBC_256'
class _WrappedContentKey:
'''
Represents the envelope key details stored on the service.
@ -62,6 +66,7 @@ class _WrappedContentKey:
self.encrypted_key = encrypted_key
self.key_id = key_id
class _EncryptionAgent:
'''
Represents the encryption agent stored on the service.
@ -82,6 +87,7 @@ class _EncryptionAgent:
self.encryption_algorithm = str(encryption_algorithm)
self.protocol = protocol
class _EncryptionData:
'''
Represents the encryption data that is stored on the service.
@ -110,6 +116,7 @@ class _EncryptionData:
self.wrapped_content_key = wrapped_content_key
self.key_wrapping_metadata = key_wrapping_metadata
def _generate_encryption_data_dict(kek, cek, iv):
'''
Generates and returns the encryption metadata as a dict.
@ -138,10 +145,11 @@ def _generate_encryption_data_dict(kek, cek, iv):
encryption_data_dict['WrappedContentKey'] = wrapped_content_key
encryption_data_dict['EncryptionAgent'] = encryption_agent
encryption_data_dict['ContentEncryptionIV'] = _encode_base64(iv)
encryption_data_dict['KeyWrappingMetadata'] = {'EncryptionLibrary':'Python ' + __version__}
encryption_data_dict['KeyWrappingMetadata'] = {'EncryptionLibrary': 'Python ' + __version__}
return encryption_data_dict
def _dict_to_encryption_data(encryption_data_dict):
'''
Converts the specified dictionary to an EncryptionData object for
@ -159,8 +167,8 @@ def _dict_to_encryption_data(encryption_data_dict):
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_VERSION)
wrapped_content_key = encryption_data_dict['WrappedContentKey']
wrapped_content_key = _WrappedContentKey(wrapped_content_key['Algorithm'],
_decode_base64_to_bytes(wrapped_content_key['EncryptedKey']),
wrapped_content_key['KeyId'])
_decode_base64_to_bytes(wrapped_content_key['EncryptedKey']),
wrapped_content_key['KeyId'])
encryption_agent = encryption_data_dict['EncryptionAgent']
encryption_agent = _EncryptionAgent(encryption_agent['EncryptionAlgorithm'],
@ -170,14 +178,15 @@ def _dict_to_encryption_data(encryption_data_dict):
key_wrapping_metadata = encryption_data_dict['KeyWrappingMetadata']
else:
key_wrapping_metadata = None
encryption_data = _EncryptionData(_decode_base64_to_bytes(encryption_data_dict['ContentEncryptionIV']),
encryption_agent,
wrapped_content_key,
key_wrapping_metadata)
encryption_agent,
wrapped_content_key,
key_wrapping_metadata)
return encryption_data
def _generate_AES_CBC_cipher(cek, iv):
'''
Generates and returns an encryption cipher for AES CBC using the given cek and iv.
@ -187,12 +196,13 @@ def _generate_AES_CBC_cipher(cek, iv):
:return: A cipher for encrypting in AES256 CBC.
:rtype: ~cryptography.hazmat.primitives.ciphers.Cipher
'''
backend = default_backend()
algorithm = AES(cek)
mode = CBC(iv)
return Cipher(algorithm, mode, backend)
def _validate_and_unwrap_cek(encryption_data, key_encryption_key=None, key_resolver=None):
'''
Extracts and returns the content_encryption_key stored in the encryption_data object
@ -211,7 +221,7 @@ def _validate_and_unwrap_cek(encryption_data, key_encryption_key=None, key_resol
_validate_not_none('content_encryption_IV', encryption_data.content_encryption_IV)
_validate_not_none('encrypted_key', encryption_data.wrapped_content_key.encrypted_key)
_validate_encryption_protocol_version(encryption_data.encryption_agent.protocol)
content_encryption_key = None
@ -229,4 +239,4 @@ def _validate_and_unwrap_cek(encryption_data, key_encryption_key=None, key_resol
encryption_data.wrapped_content_key.algorithm)
_validate_not_none('content_encryption_key', content_encryption_key)
return content_encryption_key
return content_encryption_key
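These helpers assume a caller-supplied key encryption key object exposing
wrap_key, get_key_wrap_algorithm, get_kid, and unwrap_key (the interface that
_validate_key_encryption_key_unwrap and its wrap counterpart check for). A
hypothetical KEK built on the same cryptography package might look like:

.. code:: python

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    class KeyWrapper(object):
        '''Hypothetical KEK wrapping content keys with RFC 3394 AES key wrap.'''

        def __init__(self, kid, secret):
            self.kid = kid        # identifier stored in the encryption metadata
            self.secret = secret  # 32-byte wrapping key

        def wrap_key(self, cek):
            return aes_key_wrap(self.secret, cek, default_backend())

        def unwrap_key(self, wrapped_key, algorithm):
            return aes_key_unwrap(self.secret, wrapped_key, default_backend())

        def get_kid(self):
            return self.kid

        def get_key_wrap_algorithm(self):
            return 'A256KW'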

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,9 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from sys import version_info
from io import IOBase
if version_info < (3,):
def _str(value):
if isinstance(value, unicode):
@ -23,9 +23,11 @@ if version_info < (3,):
else:
_str = str
def _to_str(value):
return _str(value) if value is not None else None
from azure.common import (
AzureHttpError,
AzureConflictHttpError,
@ -35,6 +37,7 @@ from azure.common import (
from ._constants import (
_ENCRYPTION_PROTOCOL_V1,
)
_ERROR_CONFLICT = 'Conflict ({0})'
_ERROR_NOT_FOUND = 'Not found ({0})'
_ERROR_UNKNOWN = 'Unknown error ({0})'
@ -72,7 +75,7 @@ _ERROR_OBJECT_INVALID = \
_ERROR_UNSUPPORTED_ENCRYPTION_VERSION = \
'Encryption version is not supported.'
_ERROR_DECRYPTION_FAILURE = \
'Decryption failed'
'Decryption failed'
_ERROR_ENCRYPTION_REQUIRED = \
'Encryption required but no key was provided.'
_ERROR_DECRYPTION_REQUIRED = \
@ -83,10 +86,11 @@ _ERROR_INVALID_KID = \
_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM = \
'Specified encryption algorithm is not supported.'
_ERROR_UNSUPPORTED_METHOD_FOR_ENCRYPTION = 'The require_encryption flag is set, but encryption is not supported' + \
' for this method.'
' for this method.'
_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM = 'Unknown key wrap algorithm.'
_ERROR_DATA_NOT_ENCRYPTED = 'Encryption required, but received data does not contain appropriate metatadata.' + \
'Data was either not encrypted or metadata has been lost.'
'Data was either not encrypted or metadata has been lost.'
def _dont_fail_on_exist(error):
''' don't throw exception if the resource exists.
@ -118,21 +122,27 @@ def _validate_type_bytes(param_name, param):
if not isinstance(param, bytes):
raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES.format(param_name))
def _validate_type_bytes_or_stream(param_name, param):
if not (isinstance(param, bytes) or hasattr(param, 'read')):
raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM.format(param_name))
def _validate_not_none(param_name, param):
if param is None:
raise ValueError(_ERROR_VALUE_NONE.format(param_name))
def _validate_content_match(server_md5, computed_md5):
if server_md5 != computed_md5:
raise AzureException(_ERROR_MD5_MISMATCH.format(server_md5, computed_md5))
def _validate_access_policies(identifiers):
if identifiers and len(identifiers) > 5:
raise AzureException(_ERROR_TOO_MANY_ACCESS_POLICIES)
def _validate_key_encryption_key_wrap(kek):
# Note that None is not callable and so will fail the second clause of each check.
if not hasattr(kek, 'wrap_key') or not callable(kek.wrap_key):
@ -142,29 +152,35 @@ def _validate_key_encryption_key_wrap(kek):
if not hasattr(kek, 'get_key_wrap_algorithm') or not callable(kek.get_key_wrap_algorithm):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'get_key_wrap_algorithm'))
def _validate_key_encryption_key_unwrap(kek):
if not hasattr(kek, 'get_kid') or not callable(kek.get_kid):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'get_kid'))
if not hasattr(kek, 'unwrap_key') or not callable(kek.unwrap_key):
raise AttributeError(_ERROR_OBJECT_INVALID.format('key encryption key', 'unwrap_key'))
def _validate_encryption_required(require_encryption, kek):
if require_encryption and (kek is None):
raise ValueError(_ERROR_ENCRYPTION_REQUIRED)
if require_encryption and (kek is None):
raise ValueError(_ERROR_ENCRYPTION_REQUIRED)
def _validate_decryption_required(require_encryption, kek, resolver):
if(require_encryption and (kek is None) and
(resolver is None)):
raise ValueError(_ERROR_DECRYPTION_REQUIRED)
if (require_encryption and (kek is None) and
(resolver is None)):
raise ValueError(_ERROR_DECRYPTION_REQUIRED)
def _validate_encryption_protocol_version(encryption_protocol):
if not (_ENCRYPTION_PROTOCOL_V1 == encryption_protocol):
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_VERSION)
def _validate_kek_id(kid, resolved_id):
if not (kid == resolved_id):
raise ValueError(_ERROR_INVALID_KID)
def _validate_encryption_unsupported(require_encryption, key_encryption_key):
if require_encryption or (key_encryption_key is not None):
raise ValueError(_ERROR_UNSUPPORTED_METHOD_FOR_ENCRYPTION)
raise ValueError(_ERROR_UNSUPPORTED_METHOD_FOR_ENCRYPTION)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,10 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
class HTTPError(Exception):
'''
Represents an HTTP Exception when response status code >= 300.
@ -36,7 +35,6 @@ class HTTPError(Exception):
class HTTPResponse(object):
'''
Represents a response from an HTTP request.
@ -58,7 +56,6 @@ class HTTPResponse(object):
class HTTPRequest(object):
'''
Represents an HTTP Request.
@ -80,6 +77,6 @@ class HTTPRequest(object):
self.host = ''
self.method = ''
self.path = ''
self.query = {} # list of (name, value)
self.headers = {} # list of (header name, header value)
self.query = {} # list of (name, value)
self.headers = {} # list of (header name, header value)
self.body = ''

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,17 +11,12 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
import uuid
from azure.common import (
AzureHttpError,
)
from ..models import (
AzureBatchOperationError,
AzureBatchValidationError,
)
from . import HTTPError, HTTPRequest, HTTPResponse
from .httpclient import _HTTPClient
from .._common_error import (
_ERROR_CANNOT_FIND_PARTITION_KEY,
_ERROR_CANNOT_FIND_ROW_KEY,
@ -37,11 +32,13 @@ from .._common_serialization import (
_etree_entity_feed_namespaces,
_update_request_uri_query,
)
from ..models import (
AzureBatchOperationError,
AzureBatchValidationError,
)
from ..table._serialization import (
_update_storage_table_header,
)
from . import HTTPError, HTTPRequest, HTTPResponse
from .httpclient import _HTTPClient
_DATASERVICES_NS = 'http://schemas.microsoft.com/ado/2007/08/dataservices'
@ -54,7 +51,6 @@ else:
class _BatchClient(_HTTPClient):
'''
This is the class that is used for batch operation for storage table
service. It only supports one changeset.
@ -62,7 +58,8 @@ class _BatchClient(_HTTPClient):
def __init__(self, service_instance, authentication,
protocol='http', request_session=None, timeout=65, user_agent=''):
_HTTPClient.__init__(self, service_instance, protocol=protocol, request_session=request_session, timeout=timeout, user_agent=user_agent)
_HTTPClient.__init__(self, service_instance, protocol=protocol, request_session=request_session,
timeout=timeout, user_agent=user_agent)
self.authentication = authentication
self.is_batch = False
self.batch_requests = []
@ -155,7 +152,7 @@ class _BatchClient(_HTTPClient):
'''
if self.batch_partition_key:
if self.get_request_partition_key(request) != \
self.batch_partition_key:
self.batch_partition_key:
raise AzureBatchValidationError(_ERROR_INCORRECT_PARTITION_KEY_IN_BATCH)
else:
self.batch_partition_key = self.get_request_partition_key(request)
@ -227,7 +224,7 @@ class _BatchClient(_HTTPClient):
request.path = '/$batch'
request.headers = [
('Content-Type', 'multipart/mixed; boundary=' + \
batch_boundary.decode('utf-8')),
batch_boundary.decode('utf-8')),
('Accept', 'application/atom+xml,application/xml'),
('Accept-Charset', 'UTF-8')]

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,29 +11,21 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
import base64
# --------------------------------------------------------------------------
import sys
import logging
logger = logging.getLogger(__name__)
if sys.version_info < (3,):
from httplib import (
HTTP_PORT,
HTTPS_PORT,
)
from urllib2 import quote as url_quote
else:
from http.client import (
HTTP_PORT,
HTTPS_PORT,
)
from urllib.parse import quote as url_quote
pass
from . import HTTPError, HTTPResponse
from .._serialization import _get_data_bytes_or_stream_only
class _HTTPClient(object):
'''
Takes the request and sends it to cloud service and returns the response.
@ -82,9 +74,9 @@ class _HTTPClient(object):
Password for proxy authorization.
'''
if user and password:
proxy_string = '{}:{}@{}:{}'.format(user, password, host, port)
proxy_string = '{}:{}@{}:{}'.format(user, password, host, port)
else:
proxy_string = '{}:{}'.format(host, port)
proxy_string = '{}:{}'.format(host, port)
self.proxies = {}
self.proxies['http'] = 'http://{}'.format(proxy_string)
@ -108,10 +100,10 @@ class _HTTPClient(object):
uri = self.protocol.lower() + '://' + request.host + request.path
# Send the request
response = self.session.request(request.method,
response = self.session.request(request.method,
uri,
params=request.query,
headers=request.headers,
headers=request.headers,
data=request.body or None,
timeout=self.timeout,
proxies=self.proxies)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,15 +11,16 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
import uuid
from datetime import date
from dateutil.tz import tzutc
from io import (BytesIO, IOBase, SEEK_SET, SEEK_END, UnsupportedOperation)
from os import fstat
from time import time
from wsgiref.handlers import format_date_time
from os import fstat
from io import (BytesIO, IOBase, SEEK_SET, SEEK_END, UnsupportedOperation)
from dateutil.tz import tzutc
if sys.version_info >= (3,):
from urllib.parse import quote as url_quote
@ -47,6 +48,7 @@ from ._common_conversion import (
_str,
)
def _to_utc_datetime(value):
# Azure expects the date value passed in to be UTC.
# Azure will always return values as UTC.
@ -55,6 +57,7 @@ def _to_utc_datetime(value):
value = value.astimezone(tzutc())
return value.strftime('%Y-%m-%dT%H:%M:%SZ')
def _update_request(request):
# Verify body
if request.body:
@ -83,6 +86,7 @@ def _update_request(request):
# Encode and optionally add local storage prefix to path
request.path = url_quote(request.path, '/()$=\',~')
def _add_metadata_headers(metadata, request):
if metadata:
if not request.headers:
@ -90,10 +94,12 @@ def _add_metadata_headers(metadata, request):
for name, value in metadata.items():
request.headers['x-ms-meta-' + name] = value
def _add_date_header(request):
current_time = format_date_time(time())
request.headers['x-ms-date'] = current_time
def _get_data_bytes_only(param_name, param_value):
'''Validates the request body passed in and converts it to bytes
if our policy allows it.'''
@ -138,6 +144,7 @@ def _get_request_body(request_body):
return request_body
def _convert_signed_identifiers_to_xml(signed_identifiers):
if signed_identifiers is None:
return ''
@ -177,9 +184,10 @@ def _convert_signed_identifiers_to_xml(signed_identifiers):
finally:
output = stream.getvalue()
stream.close()
return output
def _convert_service_properties_to_xml(logging, hour_metrics, minute_metrics, cors, target_version=None):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -265,15 +273,17 @@ def _convert_service_properties_to_xml(logging, hour_metrics, minute_metrics, co
# Add xml declaration and serialize
try:
stream = BytesIO()
ETree.ElementTree(service_properties_element).write(stream, xml_declaration=True, encoding='utf-8', method='xml')
ETree.ElementTree(service_properties_element).write(stream, xml_declaration=True, encoding='utf-8',
method='xml')
except:
raise
finally:
output = stream.getvalue()
stream.close()
return output
def _convert_metrics_to_xml(metrics, root):
'''
<Version>version-number</Version>
@ -298,6 +308,7 @@ def _convert_metrics_to_xml(metrics, root):
retention_element = ETree.SubElement(root, 'RetentionPolicy')
_convert_retention_policy_to_xml(metrics.retention_policy, retention_element)
def _convert_retention_policy_to_xml(retention_policy, root):
'''
<Enabled>true|false</Enabled>
@ -310,6 +321,7 @@ def _convert_retention_policy_to_xml(retention_policy, root):
if retention_policy.enabled and retention_policy.days:
ETree.SubElement(root, 'Days').text = str(retention_policy.days)
def _len_plus(data):
length = None
# Check if object implements the __len__ method, covers most input cases such as bytearray.

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,22 +11,23 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
# Note that we import BlobService/QueueService/TableService on demand
# because this module is imported by azure/storage/__init__
# ie. we don't want 'import azure.storage' to trigger an automatic import
# of blob/queue/table packages.
from .sharedaccesssignature import (
SharedAccessSignature,
)
from .models import (
from azure.storage.common._error import _validate_not_none
from azure.storage.common.models import (
ResourceTypes,
Services,
AccountPermissions,
)
from ._error import _validate_not_none
from azure.storage.common.sharedaccesssignature import (
SharedAccessSignature,
)
class CloudStorageAccount(object):
"""
@ -64,10 +65,14 @@ class CloudStorageAccount(object):
:return: A service object.
:rtype: :class:`~azure.storage.blob.blockblobservice.BlockBlobService`
'''
from .blob.blockblobservice import BlockBlobService
return BlockBlobService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
try:
from azure.storage.blob.blockblobservice import BlockBlobService
return BlockBlobService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
except ImportError:
raise Exception('The package azure-storage-blob is required. '
+ 'Please install it using "pip install azure-storage-blob"')
def create_page_blob_service(self):
'''
@ -77,10 +82,14 @@ class CloudStorageAccount(object):
:return: A service object.
:rtype: :class:`~azure.storage.blob.pageblobservice.PageBlobService`
'''
from .blob.pageblobservice import PageBlobService
return PageBlobService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
try:
from azure.storage.blob.pageblobservice import PageBlobService
return PageBlobService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
except ImportError:
raise Exception('The package azure-storage-blob is required. '
+ 'Please install it using "pip install azure-storage-blob"')
def create_append_blob_service(self):
'''
@ -90,10 +99,14 @@ class CloudStorageAccount(object):
:return: A service object.
:rtype: :class:`~azure.storage.blob.appendblobservice.AppendBlobService`
'''
from .blob.appendblobservice import AppendBlobService
return AppendBlobService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
try:
from azure.storage.blob.appendblobservice import AppendBlobService
return AppendBlobService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
except ImportError:
raise Exception('The package azure-storage-blob is required. '
+ 'Please install it using "pip install azure-storage-blob"')
def create_table_service(self):
'''
@ -103,10 +116,14 @@ class CloudStorageAccount(object):
:return: A service object.
:rtype: :class:`~azure.storage.table.tableservice.TableService`
'''
from .table.tableservice import TableService
return TableService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
try:
from azure.storage.table.tableservice import TableService
return TableService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
except ImportError:
raise Exception('The package azure-storage-table is required. '
+ 'Please install it using "pip install azure-storage-table"')
def create_queue_service(self):
'''
@ -116,10 +133,14 @@ class CloudStorageAccount(object):
:return: A service object.
:rtype: :class:`~azure.storage.queue.queueservice.QueueService`
'''
from .queue.queueservice import QueueService
return QueueService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
try:
from azure.storage.queue.queueservice import QueueService
return QueueService(self.account_name, self.account_key,
sas_token=self.sas_token,
is_emulated=self.is_emulated)
except ImportError:
raise Exception('The package azure-storage-queue is required. '
+ 'Please install it using "pip install azure-storage-queue"')
def create_file_service(self):
'''
@ -129,12 +150,16 @@ class CloudStorageAccount(object):
:return: A service object.
:rtype: :class:`~azure.storage.file.fileservice.FileService`
'''
from .file.fileservice import FileService
return FileService(self.account_name, self.account_key,
sas_token=self.sas_token)
try:
from azure.storage.file.fileservice import FileService
return FileService(self.account_name, self.account_key,
sas_token=self.sas_token)
except ImportError:
raise Exception('The package azure-storage-file is required. '
+ 'Please install it using "pip install azure-storage-file"')
def generate_shared_access_signature(self, services, resource_types,
permission, expiry, start=None,
def generate_shared_access_signature(self, services, resource_types,
permission, expiry, start=None,
ip=None, protocol=None):
'''
Generates a shared access signature for the account.
@ -185,5 +210,5 @@ class CloudStorageAccount(object):
_validate_not_none('self.account_key', self.account_key)
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_account(services, resource_types, permission,
expiry, start=start, ip=ip, protocol=protocol)
return sas.generate_account(services, resource_types, permission,
expiry, start=start, ip=ip, protocol=protocol)
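With the package split, CloudStorageAccount imports each service lazily and
raises an instructive error when the corresponding package is missing. A
minimal sketch of the intended usage (placeholder credentials; assumes
azure-storage-blob is installed):

.. code:: python

    from azure.storage.common import CloudStorageAccount

    account = CloudStorageAccount(account_name='myaccount', account_key='mykey')

    # Imports azure.storage.blob on demand; raises if the package is absent.
    blob_service = account.create_block_blob_service()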

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,45 +11,38 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
if sys.version_info < (3,):
from collections import Iterable
_unicode_type = unicode
else:
from collections.abc import Iterable
_unicode_type = str
from ._error import (
_validate_not_none,
_ERROR_UNKNOWN_KEY_WRAP_ALGORITHM
_validate_not_none
)
from cryptography.hazmat.primitives.keywrap import(
aes_key_wrap,
aes_key_unwrap,
)
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric.rsa import generate_private_key
from cryptography.hazmat.primitives.asymmetric.padding import (
OAEP,
MGF1,
)
from cryptography.hazmat.primitives.hashes import SHA1
from os import urandom
class _HeaderDict(dict):
def __getitem__(self, index):
return super(_HeaderDict, self).__getitem__(index.lower())
class _list(list):
'''Used so that additional properties can be set on the return list'''
pass
class _dict(dict):
'''Used so that additional properties can be set on the return dictionary'''
pass
class _OperationContext(object):
'''
Contains information that lasts the lifetime of an operation. This operation
@ -60,10 +53,12 @@ class _OperationContext(object):
:ivar str location:
The location to lock to.
'''
def __init__(self, location_lock=False):
self.location_lock = location_lock
self.host_location = None
class ListGenerator(Iterable):
'''
A generator object used to list storage resources. The generator will lazily
@ -75,6 +70,7 @@ class ListGenerator(Iterable):
finishes. This marker can be used to create a new generator if more
results are desired.
'''
def __init__(self, resources, list_method, list_args, list_kwargs):
self.items = resources
self.next_marker = resources.next_marker
@ -131,11 +127,13 @@ class RetryContext(object):
:ivar LocationMode location_mode:
The location the request was sent to.
'''
def __init__(self):
self.request = None
self.response = None
self.location_mode = None
class LocationMode(object):
'''
Specifies the location the request should be sent to. This mode only applies
@ -151,7 +149,6 @@ class LocationMode(object):
class RetentionPolicy(object):
'''
By default, Storage Analytics will not delete any logging or metrics data. Blobs
and table entities will continue to be written until the shared 20TB limit is
@ -186,7 +183,6 @@ class RetentionPolicy(object):
class Logging(object):
'''
Storage Analytics logs detailed information about successful and failed requests
to a storage service. This information can be used to monitor individual requests
@ -226,8 +222,7 @@ class Logging(object):
class Metrics(object):
'''
'''
Metrics include aggregated transaction statistics and capacity data about requests
to a storage service. Transactions are reported at both the API operation level
as well as at the storage service level, and capacity is reported at the storage
@ -261,7 +256,6 @@ class Metrics(object):
class CorsRule(object):
'''
CORS is an HTTP feature that enables a web application running under one domain
to access resources in another domain. Web browsers implement a security
@ -371,8 +365,7 @@ class GeoReplication(object):
class AccessPolicy(object):
'''
'''
Access Policy class used by the set and get acl methods in each service.
A stored access policy can specify the start time, expiry time, and
@ -434,8 +427,8 @@ class Protocol(object):
HTTPS_HTTP = 'https,http'
''' Allow HTTP and HTTPS requests. '''
class ResourceTypes(object):
class ResourceTypes(object):
'''
Specifies the resource types that are accessible with the account SAS.
@ -450,6 +443,7 @@ class ResourceTypes(object):
Access to service-level APIs (e.g., Get/Set Service Properties,
Get Service Stats, List Containers/Queues/Tables/Shares)
'''
def __init__(self, service=False, container=False, object=False, _str=None):
'''
:param bool service:
@ -470,25 +464,25 @@ class ResourceTypes(object):
self.service = service or ('s' in _str)
self.container = container or ('c' in _str)
self.object = object or ('o' in _str)
def __or__(self, other):
return ResourceTypes(_str=str(self) + str(other))
def __add__(self, other):
return ResourceTypes(_str=str(self) + str(other))
def __str__(self):
return (('s' if self.service else '') +
('c' if self.container else '') +
('o' if self.object else ''))
ResourceTypes.SERVICE = ResourceTypes(service=True)
ResourceTypes.CONTAINER = ResourceTypes(container=True)
ResourceTypes.OBJECT = ResourceTypes(object=True)
class Services(object):
'''
Specifies the services accessible with the account SAS.
@ -497,6 +491,7 @@ class Services(object):
:ivar Services Services.QUEUE: The queue service.
:ivar Services Services.TABLE: The table service
'''
def __init__(self, blob=False, queue=False, table=False, file=False, _str=None):
'''
:param bool blob:
@ -516,19 +511,20 @@ class Services(object):
self.queue = queue or ('q' in _str)
self.table = table or ('t' in _str)
self.file = file or ('f' in _str)
def __or__(self, other):
return Services(_str=str(self) + str(other))
def __add__(self, other):
return Services(_str=str(self) + str(other))
def __str__(self):
return (('b' if self.blob else '') +
('q' if self.queue else '') +
('t' if self.table else '') +
('f' if self.file else ''))
Services.BLOB = Services(blob=True)
Services.QUEUE = Services(queue=True)
Services.TABLE = Services(table=True)
@ -536,7 +532,6 @@ Services.FILE = Services(file=True)
class AccountPermissions(object):
'''
:class:`~ResourceTypes` class to be used with generate_shared_access_signature
method and for the AccessPolicies used with set_*_acl. There are two types of
@ -567,7 +562,8 @@ class AccountPermissions(object):
Valid for all signed resources types (Service, Container, and Object).
Permits write permissions to the specified resource type.
'''
def __init__(self, read=False, write=False, delete=False, list=False,
def __init__(self, read=False, write=False, delete=False, list=False,
add=False, create=False, update=False, process=False, _str=None):
'''
:param bool read:
@ -605,13 +601,13 @@ class AccountPermissions(object):
self.create = create or ('c' in _str)
self.update = update or ('u' in _str)
self.process = process or ('p' in _str)
def __or__(self, other):
return ResourceTypes(_str=str(self) + str(other))
def __add__(self, other):
return ResourceTypes(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('w' if self.write else '') +
@ -622,6 +618,7 @@ class AccountPermissions(object):
('u' if self.update else '') +
('p' if self.process else ''))
AccountPermissions.READ = AccountPermissions(read=True)
AccountPermissions.WRITE = AccountPermissions(write=True)
AccountPermissions.DELETE = AccountPermissions(delete=True)
@ -629,4 +626,4 @@ AccountPermissions.LIST = AccountPermissions(list=True)
AccountPermissions.ADD = AccountPermissions(add=True)
AccountPermissions.CREATE = AccountPermissions(create=True)
AccountPermissions.UPDATE = AccountPermissions(update=True)
AccountPermissions.PROCESS = AccountPermissions(process=True)
AccountPermissions.PROCESS = AccountPermissions(process=True)
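These flag classes render themselves as the short permission strings the
service expects, so callers rarely build the strings by hand. A quick sketch:

.. code:: python

    from azure.storage.common.models import (
        AccountPermissions,
        ResourceTypes,
        Services,
    )

    # Each flag contributes its one-letter code when converted to a string.
    print(str(AccountPermissions(read=True, list=True)))    # 'rl'
    print(str(Services(blob=True, queue=True)))             # 'bq'
    print(str(ResourceTypes(container=True, object=True)))  # 'co'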

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,12 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
from math import pow
# --------------------------------------------------------------------------
from abc import ABCMeta
from math import pow
from .models import LocationMode
class _Retry(object):
'''
The base class for Exponential and Linear retries containing shared code.
@ -55,7 +56,7 @@ class _Retry(object):
status = None
if context.response and context.response.status:
status = context.response.status
if status == None:
'''
If status is None, retry as this request triggered an exception. For
@ -133,7 +134,7 @@ class _Retry(object):
# request as desired, and return the backoff.
if self._should_retry(context):
context.count += 1
# If retry to secondary is enabled, attempt to change the host if the
# request allows it
if self.retry_to_secondary:
@ -149,7 +150,7 @@ class ExponentialRetry(_Retry):
Exponential retry.
'''
def __init__(self, initial_backoff=15, increment_power=3, max_attempts=3,
def __init__(self, initial_backoff=15, increment_power=3, max_attempts=3,
retry_to_secondary=False):
'''
Constructs an Exponential retry object. The initial_backoff is used for
@ -185,6 +186,7 @@ class ExponentialRetry(_Retry):
or None to indicate no retry should be performed.
:rtype: int or None
'''
def retry(self, context):
return self._retry(context, self._backoff)
@ -196,9 +198,11 @@ class ExponentialRetry(_Retry):
or None to indicate no retry should be performed.
:rtype: int or None
'''
def _backoff(self, context):
return self.initial_backoff + pow(self.increment_power, context.count)
class LinearRetry(_Retry):
'''
Linear retry.
@ -232,6 +236,7 @@ class LinearRetry(_Retry):
or None to indicate no retry should be performed.
:rtype: int or None
'''
def retry(self, context):
return self._retry(context, self._backoff)
@ -243,9 +248,11 @@ class LinearRetry(_Retry):
or None to indicate no retry should be performed.
:rtype: int or None
'''
def _backoff(self, context):
return self.backoff
def no_retry(context):
'''
Specifies never to retry.
@ -256,4 +263,4 @@ def no_retry(context):
Always returns None to indicate never to retry.
:rtype: None
'''
return None
return None
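A retry policy is consumed as a callable that takes a RetryContext and returns
the backoff in seconds, or None to give up; service clients store it in their
retry attribute. A sketch of attaching one (placeholder credentials, and
assumes azure-storage-blob is installed):

.. code:: python

    from azure.storage.blob import BlockBlobService
    from azure.storage.common.retry import ExponentialRetry, no_retry

    service = BlockBlobService(account_name='myaccount', account_key='mykey')

    # Backs off initial_backoff + increment_power**count seconds per attempt.
    service.retry = ExponentialRetry(initial_backoff=15, increment_power=3,
                                     max_attempts=3).retry

    # Or disable retries entirely:
    # service.retry = no_retry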

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,18 +11,19 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from datetime import date
from ._common_conversion import (
_sign_string,
_to_str,
)
from ._constants import X_MS_VERSION
from ._serialization import (
url_quote,
_to_utc_datetime,
)
from ._constants import X_MS_VERSION
class SharedAccessSignature(object):
'''
@ -42,11 +43,11 @@ class SharedAccessSignature(object):
self.account_name = account_name
self.account_key = account_key
def generate_table(self, table_name, permission=None,
expiry=None, start=None, id=None,
ip=None, protocol=None,
start_pk=None, start_rk=None,
end_pk=None, end_rk=None):
def generate_table(self, table_name, permission=None,
expiry=None, start=None, id=None,
ip=None, protocol=None,
start_pk=None, start_rk=None,
end_pk=None, end_rk=None):
'''
Generates a shared access signature for the table.
Use the returned signature with the sas_token parameter of TableService.
@ -118,9 +119,9 @@ class SharedAccessSignature(object):
return sas.get_token()
def generate_queue(self, queue_name, permission=None,
expiry=None, start=None, id=None,
ip=None, protocol=None):
def generate_queue(self, queue_name, permission=None,
expiry=None, start=None, id=None,
ip=None, protocol=None):
'''
Generates a shared access signature for the queue.
Use the returned signature with the sas_token parameter of QueueService.
@ -170,11 +171,11 @@ class SharedAccessSignature(object):
return sas.get_token()
def generate_blob(self, container_name, blob_name, permission=None,
expiry=None, start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
def generate_blob(self, container_name, blob_name, permission=None,
expiry=None, start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
'''
Generates a shared access signature for the blob.
Use the returned signature with the sas_token parameter of any BlobService.
@ -240,18 +241,18 @@ class SharedAccessSignature(object):
sas.add_base(permission, expiry, start, ip, protocol)
sas.add_id(id)
sas.add_resource('b')
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
content_type)
sas.add_resource_signature(self.account_name, self.account_key, 'blob', resource_path)
return sas.get_token()
def generate_container(self, container_name, permission=None, expiry=None,
start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
def generate_container(self, container_name, permission=None, expiry=None,
start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
'''
Generates a shared access signature for the container.
Use the returned signature with the sas_token parameter of any BlobService.
@ -313,17 +314,17 @@ class SharedAccessSignature(object):
sas.add_base(permission, expiry, start, ip, protocol)
sas.add_id(id)
sas.add_resource('c')
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
content_type)
sas.add_resource_signature(self.account_name, self.account_key, 'blob', container_name)
return sas.get_token()
def generate_file(self, share_name, directory_name=None, file_name=None,
def generate_file(self, share_name, directory_name=None, file_name=None,
permission=None, expiry=None, start=None, id=None,
ip=None, protocol=None, cache_control=None,
content_disposition=None, content_encoding=None,
content_disposition=None, content_encoding=None,
content_language=None, content_type=None):
'''
Generates a shared access signature for the file.
@ -396,17 +397,17 @@ class SharedAccessSignature(object):
sas.add_base(permission, expiry, start, ip, protocol)
sas.add_id(id)
sas.add_resource('f')
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
content_type)
sas.add_resource_signature(self.account_name, self.account_key, 'file', resource_path)
return sas.get_token()
def generate_share(self, share_name, permission=None, expiry=None,
start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
def generate_share(self, share_name, permission=None, expiry=None,
start=None, id=None, ip=None, protocol=None,
cache_control=None, content_disposition=None,
content_encoding=None, content_language=None,
content_type=None):
'''
Generates a shared access signature for the share.
@ -469,14 +470,14 @@ class SharedAccessSignature(object):
sas.add_base(permission, expiry, start, ip, protocol)
sas.add_id(id)
sas.add_resource('s')
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
sas.add_override_response_headers(cache_control, content_disposition,
content_encoding, content_language,
content_type)
sas.add_resource_signature(self.account_name, self.account_key, 'file', share_name)
return sas.get_token()
def generate_account(self, services, resource_types, permission, expiry, start=None,
def generate_account(self, services, resource_types, permission, expiry, start=None,
ip=None, protocol=None):
'''
Generates a shared access signature for the account.
@ -529,6 +530,7 @@ class SharedAccessSignature(object):
return sas.get_token()
class _QueryStringConstants(object):
SIGNED_SIGNATURE = 'sig'
SIGNED_PERMISSION = 'sp'
@ -552,8 +554,8 @@ class _QueryStringConstants(object):
SIGNED_RESOURCE_TYPES = 'srt'
SIGNED_SERVICES = 'ss'
class _SharedAccessHelper():
class _SharedAccessHelper():
def __init__(self):
self.query_dict = {}
@ -585,8 +587,8 @@ class _SharedAccessHelper():
self._add_query(_QueryStringConstants.SIGNED_SERVICES, services)
self._add_query(_QueryStringConstants.SIGNED_RESOURCE_TYPES, resource_types)
def add_table_access_ranges(self, table_name, start_pk, start_rk,
end_pk, end_rk):
def add_table_access_ranges(self, table_name, start_pk, start_rk,
end_pk, end_rk):
self._add_query(_QueryStringConstants.TABLE_NAME, table_name)
self._add_query(_QueryStringConstants.START_PK, start_pk)
self._add_query(_QueryStringConstants.START_RK, start_rk)
@ -594,10 +596,10 @@ class _SharedAccessHelper():
self._add_query(_QueryStringConstants.END_RK, end_rk)
def add_override_response_headers(self, cache_control,
content_disposition,
content_encoding,
content_language,
content_type):
content_disposition,
content_encoding,
content_language,
content_type):
self._add_query(_QueryStringConstants.SIGNED_CACHE_CONTROL, cache_control)
self._add_query(_QueryStringConstants.SIGNED_CONTENT_DISPOSITION, content_disposition)
self._add_query(_QueryStringConstants.SIGNED_CONTENT_ENCODING, content_encoding)
@ -618,34 +620,34 @@ class _SharedAccessHelper():
# resource. The order of values is important.
string_to_sign = \
(get_value_to_append(_QueryStringConstants.SIGNED_PERMISSION) +
get_value_to_append(_QueryStringConstants.SIGNED_START) +
get_value_to_append(_QueryStringConstants.SIGNED_EXPIRY) +
canonicalized_resource +
get_value_to_append(_QueryStringConstants.SIGNED_IDENTIFIER) +
get_value_to_append(_QueryStringConstants.SIGNED_IP) +
get_value_to_append(_QueryStringConstants.SIGNED_PROTOCOL) +
get_value_to_append(_QueryStringConstants.SIGNED_VERSION))
get_value_to_append(_QueryStringConstants.SIGNED_START) +
get_value_to_append(_QueryStringConstants.SIGNED_EXPIRY) +
canonicalized_resource +
get_value_to_append(_QueryStringConstants.SIGNED_IDENTIFIER) +
get_value_to_append(_QueryStringConstants.SIGNED_IP) +
get_value_to_append(_QueryStringConstants.SIGNED_PROTOCOL) +
get_value_to_append(_QueryStringConstants.SIGNED_VERSION))
if service == 'blob' or service == 'file':
string_to_sign += \
(get_value_to_append(_QueryStringConstants.SIGNED_CACHE_CONTROL) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_DISPOSITION) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_ENCODING) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_LANGUAGE) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_TYPE))
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_DISPOSITION) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_ENCODING) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_LANGUAGE) +
get_value_to_append(_QueryStringConstants.SIGNED_CONTENT_TYPE))
if service == 'table':
string_to_sign += \
(get_value_to_append(_QueryStringConstants.START_PK) +
get_value_to_append(_QueryStringConstants.START_RK) +
get_value_to_append(_QueryStringConstants.END_PK) +
get_value_to_append(_QueryStringConstants.END_RK))
# remove the trailing newline
if string_to_sign[-1] == '\n':
string_to_sign = string_to_sign[:-1]
self._add_query(_QueryStringConstants.SIGNED_SIGNATURE,
_sign_string(account_key, string_to_sign))
def add_account_signature(self, account_name, account_key):
@ -655,17 +657,17 @@ class _SharedAccessHelper():
string_to_sign = \
(account_name + '\n' +
get_value_to_append(_QueryStringConstants.SIGNED_PERMISSION) +
get_value_to_append(_QueryStringConstants.SIGNED_SERVICES) +
get_value_to_append(_QueryStringConstants.SIGNED_RESOURCE_TYPES) +
get_value_to_append(_QueryStringConstants.SIGNED_START) +
get_value_to_append(_QueryStringConstants.SIGNED_EXPIRY) +
get_value_to_append(_QueryStringConstants.SIGNED_IP) +
get_value_to_append(_QueryStringConstants.SIGNED_PROTOCOL) +
get_value_to_append(_QueryStringConstants.SIGNED_VERSION))
self._add_query(_QueryStringConstants.SIGNED_SIGNATURE,
_sign_string(account_key, string_to_sign))
def get_token(self):
return '&'.join(['{0}={1}'.format(n, url_quote(v)) for n, v in self.query_dict.items() if v is not None])
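For orientation (not part of this diff): a minimal, self-contained sketch of how these pieces combine into an account SAS token. It assumes, as in this library's _sign_string, that the signature is a base64-encoded HMAC-SHA256 over the string-to-sign keyed with the base64-decoded account key; all names and values below are illustrative placeholders.

import base64
import hashlib
import hmac

try:
    from urllib.parse import quote as url_quote  # Python 3
except ImportError:
    from urllib import quote as url_quote  # Python 2


def sign_string(account_key, string_to_sign):
    # base64(HMAC-SHA256(base64-decoded key, string_to_sign)), mirroring _sign_string
    key = base64.b64decode(account_key)
    digest = hmac.new(key, string_to_sign.encode('utf-8'), hashlib.sha256).digest()
    return base64.b64encode(digest).decode('utf-8')


account_name = 'myaccount'
account_key = base64.b64encode(b'not-a-real-key').decode('utf-8')

# Account SAS string-to-sign: newline-joined values in the exact order used by
# add_account_signature above; an absent value contributes an empty line.
string_to_sign = '\n'.join(
    [account_name, 'r', 'f', 'sco', '', '2017-08-01T00:00:00Z', '', 'https', '2017-04-17']) + '\n'

query = {'sp': 'r', 'ss': 'f', 'srt': 'sco', 'se': '2017-08-01T00:00:00Z',
         'spr': 'https', 'sv': '2017-04-17',
         'sig': sign_string(account_key, string_to_sign)}
print('&'.join('{0}={1}'.format(n, url_quote(v)) for n, v in query.items()))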

Просмотреть файл

@ -12,39 +12,38 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import os
import sys
import copy
import requests
from time import sleep
from abc import ABCMeta
import logging
logger = logging.getLogger(__name__)
from azure.common import (
AzureException,
)
from ._constants import (
DEFAULT_SOCKET_TIMEOUT
)
from ._error import (
_ERROR_DECRYPTION_FAILURE,
_http_error_handler,
)
from ._http import HTTPError
from ._http.httpclient import _HTTPClient
from ._serialization import (
_update_request,
_add_date_header,
)
from .models import (
RetryContext,
LocationMode,
_OperationContext,
)
from .retry import ExponentialRetry
class StorageClient(object):

Просмотреть файл

@ -0,0 +1,533 @@
"""
"wheel" copyright (c) 2012-2017 Daniel Holth <dholth@fastmail.fm> and
contributors.
The MIT License
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
Create an Azure wheel (.whl) distribution (a wheel is a built archive format).
This file is a copy of the official bdist_wheel file from wheel 0.30.0a0, enhanced
at the bottom with some Microsoft extensions for the Azure SDK for Python
"""
import csv
import hashlib
import os
import subprocess
import warnings
import shutil
import json
import sys
try:
import sysconfig
except ImportError: # pragma nocover
# Python < 2.7
import distutils.sysconfig as sysconfig
import pkg_resources
safe_name = pkg_resources.safe_name
safe_version = pkg_resources.safe_version
from shutil import rmtree
from email.generator import Generator
from distutils.core import Command
from distutils.sysconfig import get_python_version
from distutils import log as logger
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag, get_platform
from wheel.util import native, open_for_csv
from wheel.archive import archive_wheelfile
from wheel.pkginfo import read_pkg_info, write_pkg_info
from wheel.metadata import pkginfo_to_dict
from wheel import pep425tags, metadata
from wheel import __version__ as wheel_version
def safer_name(name):
return safe_name(name).replace('-', '_')
def safer_version(version):
return safe_version(version).replace('-', '_')
class bdist_wheel(Command):
description = 'create a wheel distribution'
user_options = [('bdist-dir=', 'b',
"temporary directory for creating the distribution"),
('plat-name=', 'p',
"platform name to embed in generated filenames "
"(default: %s)" % get_platform()),
('keep-temp', 'k',
"keep the pseudo-installation tree around after " +
"creating the distribution archive"),
('dist-dir=', 'd',
"directory to put final built distributions in"),
('skip-build', None,
"skip rebuilding everything (for testing/debugging)"),
('relative', None,
"build the archive using relative paths"
"(default: false)"),
('owner=', 'u',
"Owner name used when creating a tar file"
" [default: current user]"),
('group=', 'g',
"Group name used when creating a tar file"
" [default: current group]"),
('universal', None,
"make a universal wheel"
" (default: false)"),
('python-tag=', None,
"Python implementation compatibility tag"
" (default: py%s)" % get_impl_ver()[0]),
]
boolean_options = ['keep-temp', 'skip-build', 'relative', 'universal']
def initialize_options(self):
self.bdist_dir = None
self.data_dir = None
self.plat_name = None
self.plat_tag = None
self.format = 'zip'
self.keep_temp = False
self.dist_dir = None
self.distinfo_dir = None
self.egginfo_dir = None
self.root_is_pure = None
self.skip_build = None
self.relative = False
self.owner = None
self.group = None
self.universal = False
self.python_tag = 'py' + get_impl_ver()[0]
self.plat_name_supplied = False
def finalize_options(self):
if self.bdist_dir is None:
bdist_base = self.get_finalized_command('bdist').bdist_base
self.bdist_dir = os.path.join(bdist_base, 'wheel')
self.data_dir = self.wheel_dist_name + '.data'
self.plat_name_supplied = self.plat_name is not None
need_options = ('dist_dir', 'plat_name', 'skip_build')
self.set_undefined_options('bdist',
*zip(need_options, need_options))
self.root_is_pure = not (self.distribution.has_ext_modules()
or self.distribution.has_c_libraries())
# Support legacy [wheel] section for setting universal
wheel = self.distribution.get_option_dict('wheel')
if 'universal' in wheel:
# please don't define this in your global configs
val = wheel['universal'][1].strip()
if val.lower() in ('1', 'true', 'yes'):
self.universal = True
@property
def wheel_dist_name(self):
"""Return distribution full name with - replaced with _"""
return '-'.join((safer_name(self.distribution.get_name()),
safer_version(self.distribution.get_version())))
def get_tag(self):
# bdist sets self.plat_name if unset, we should only use it for purepy
# wheels if the user supplied it.
if self.plat_name_supplied:
plat_name = self.plat_name
elif self.root_is_pure:
plat_name = 'any'
else:
plat_name = self.plat_name or get_platform()
if plat_name in ('linux-x86_64', 'linux_x86_64') and sys.maxsize == 2147483647:
plat_name = 'linux_i686'
plat_name = plat_name.replace('-', '_').replace('.', '_')
if self.root_is_pure:
if self.universal:
impl = 'py2.py3'
else:
impl = self.python_tag
tag = (impl, 'none', plat_name)
else:
impl_name = get_abbr_impl()
impl_ver = get_impl_ver()
# PEP 3149
abi_tag = str(get_abi_tag()).lower()
tag = (impl_name + impl_ver, abi_tag, plat_name)
supported_tags = pep425tags.get_supported(
supplied_platform=plat_name if self.plat_name_supplied else None)
# XXX switch to this alternate implementation for non-pure:
assert tag == supported_tags[0], "%s != %s" % (tag, supported_tags[0])
return tag
def get_archive_basename(self):
"""Return archive name without extension"""
impl_tag, abi_tag, plat_tag = self.get_tag()
archive_basename = "%s-%s-%s-%s" % (
self.wheel_dist_name,
impl_tag,
abi_tag,
plat_tag)
return archive_basename
def run(self):
build_scripts = self.reinitialize_command('build_scripts')
build_scripts.executable = 'python'
if not self.skip_build:
self.run_command('build')
install = self.reinitialize_command('install',
reinit_subcommands=True)
install.root = self.bdist_dir
install.compile = False
install.skip_build = self.skip_build
install.warn_dir = False
# A wheel without setuptools scripts is more cross-platform.
# Use the (undocumented) `no_ep` option to setuptools'
# install_scripts command to avoid creating entry point scripts.
install_scripts = self.reinitialize_command('install_scripts')
install_scripts.no_ep = True
# Use a custom scheme for the archive, because we have to decide
# at installation time which scheme to use.
for key in ('headers', 'scripts', 'data', 'purelib', 'platlib'):
setattr(install,
'install_' + key,
os.path.join(self.data_dir, key))
basedir_observed = ''
if os.name == 'nt':
# win32 barfs if any of these are ''; could be '.'?
# (distutils.command.install:change_roots bug)
basedir_observed = os.path.normpath(os.path.join(self.data_dir, '..'))
self.install_libbase = self.install_lib = basedir_observed
setattr(install,
'install_purelib' if self.root_is_pure else 'install_platlib',
basedir_observed)
logger.info("installing to %s", self.bdist_dir)
self.run_command('install')
archive_basename = self.get_archive_basename()
pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
if not self.relative:
archive_root = self.bdist_dir
else:
archive_root = os.path.join(
self.bdist_dir,
self._ensure_relative(install.install_base))
self.set_undefined_options(
'install_egg_info', ('target', 'egginfo_dir'))
self.distinfo_dir = os.path.join(self.bdist_dir,
'%s.dist-info' % self.wheel_dist_name)
self.egg2dist(self.egginfo_dir,
self.distinfo_dir)
self.write_wheelfile(self.distinfo_dir)
self.write_record(self.bdist_dir, self.distinfo_dir)
# Make the archive
if not os.path.exists(self.dist_dir):
os.makedirs(self.dist_dir)
wheel_name = archive_wheelfile(pseudoinstall_root, archive_root)
# Sign the archive
if 'WHEEL_TOOL' in os.environ:
subprocess.call([os.environ['WHEEL_TOOL'], 'sign', wheel_name])
# Add to 'Distribution.dist_files' so that the "upload" command works
getattr(self.distribution, 'dist_files', []).append(
('bdist_wheel', get_python_version(), wheel_name))
if not self.keep_temp:
if self.dry_run:
logger.info('removing %s', self.bdist_dir)
else:
rmtree(self.bdist_dir)
def write_wheelfile(self, wheelfile_base, generator='bdist_wheel (' + wheel_version + ')'):
from email.message import Message
msg = Message()
msg['Wheel-Version'] = '1.0' # of the spec
msg['Generator'] = generator
msg['Root-Is-Purelib'] = str(self.root_is_pure).lower()
# Doesn't work for bdist_wininst
impl_tag, abi_tag, plat_tag = self.get_tag()
for impl in impl_tag.split('.'):
for abi in abi_tag.split('.'):
for plat in plat_tag.split('.'):
msg['Tag'] = '-'.join((impl, abi, plat))
wheelfile_path = os.path.join(wheelfile_base, 'WHEEL')
logger.info('creating %s', wheelfile_path)
with open(wheelfile_path, 'w') as f:
Generator(f, maxheaderlen=0).flatten(msg)
def _ensure_relative(self, path):
# copied from dir_util, deleted
drive, path = os.path.splitdrive(path)
if path[0:1] == os.sep:
path = drive + path[1:]
return path
def _pkginfo_to_metadata(self, egg_info_path, pkginfo_path):
return metadata.pkginfo_to_metadata(egg_info_path, pkginfo_path)
def license_file(self):
"""Return license filename from a license-file key in setup.cfg, or None."""
metadata = self.distribution.get_option_dict('metadata')
if not 'license_file' in metadata:
return None
return metadata['license_file'][1]
def setupcfg_requirements(self):
"""Generate requirements from setup.cfg as
('Requires-Dist', 'requirement; qualifier') tuples. From a metadata
section in setup.cfg:
[metadata]
provides-extra = extra1
extra2
requires-dist = requirement; qualifier
another; qualifier2
unqualified
Yields
('Provides-Extra', 'extra1'),
('Provides-Extra', 'extra2'),
('Requires-Dist', 'requirement; qualifier'),
('Requires-Dist', 'another; qualifier2'),
('Requires-Dist', 'unqualified')
"""
metadata = self.distribution.get_option_dict('metadata')
# our .ini parser folds - to _ in key names:
for key, title in (('provides_extra', 'Provides-Extra'),
('requires_dist', 'Requires-Dist')):
if not key in metadata:
continue
field = metadata[key]
for line in field[1].splitlines():
line = line.strip()
if not line:
continue
yield (title, line)
def add_requirements(self, metadata_path):
"""Add additional requirements from setup.cfg to file metadata_path"""
additional = list(self.setupcfg_requirements())
if not additional: return
pkg_info = read_pkg_info(metadata_path)
if 'Provides-Extra' in pkg_info or 'Requires-Dist' in pkg_info:
warnings.warn('setup.cfg requirements overwrite values from setup.py')
del pkg_info['Provides-Extra']
del pkg_info['Requires-Dist']
for k, v in additional:
pkg_info[k] = v
write_pkg_info(metadata_path, pkg_info)
def egg2dist(self, egginfo_path, distinfo_path):
"""Convert an .egg-info directory into a .dist-info directory"""
def adios(p):
"""Appropriately delete directory, file or link."""
if os.path.exists(p) and not os.path.islink(p) and os.path.isdir(p):
shutil.rmtree(p)
elif os.path.exists(p):
os.unlink(p)
adios(distinfo_path)
if not os.path.exists(egginfo_path):
# There is no egg-info. This is probably because the egg-info
# file/directory is not named matching the distribution name used
# to name the archive file. Check for this case and report
# accordingly.
import glob
pat = os.path.join(os.path.dirname(egginfo_path), '*.egg-info')
possible = glob.glob(pat)
err = "Egg metadata expected at %s but not found" % (egginfo_path,)
if possible:
alt = os.path.basename(possible[0])
err += " (%s found - possible misnamed archive file?)" % (alt,)
raise ValueError(err)
if os.path.isfile(egginfo_path):
# .egg-info is a single file
pkginfo_path = egginfo_path
pkg_info = self._pkginfo_to_metadata(egginfo_path, egginfo_path)
os.mkdir(distinfo_path)
else:
# .egg-info is a directory
pkginfo_path = os.path.join(egginfo_path, 'PKG-INFO')
pkg_info = self._pkginfo_to_metadata(egginfo_path, pkginfo_path)
# ignore common egg metadata that is useless to wheel
shutil.copytree(egginfo_path, distinfo_path,
ignore=lambda x, y: set(('PKG-INFO',
'requires.txt',
'SOURCES.txt',
'not-zip-safe',)))
# delete dependency_links if it is only whitespace
dependency_links_path = os.path.join(distinfo_path, 'dependency_links.txt')
with open(dependency_links_path, 'r') as dependency_links_file:
dependency_links = dependency_links_file.read().strip()
if not dependency_links:
adios(dependency_links_path)
write_pkg_info(os.path.join(distinfo_path, 'METADATA'), pkg_info)
# XXX deprecated. Still useful for current distribute/setuptools.
metadata_path = os.path.join(distinfo_path, 'METADATA')
self.add_requirements(metadata_path)
# XXX intentionally a different path than the PEP.
metadata_json_path = os.path.join(distinfo_path, 'metadata.json')
pymeta = pkginfo_to_dict(metadata_path,
distribution=self.distribution)
if 'description' in pymeta:
description_filename = 'DESCRIPTION.rst'
description_text = pymeta.pop('description')
description_path = os.path.join(distinfo_path,
description_filename)
with open(description_path, "wb") as description_file:
description_file.write(description_text.encode('utf-8'))
pymeta['extensions']['python.details']['document_names']['description'] = description_filename
# XXX heuristically copy any LICENSE/LICENSE.txt?
license = self.license_file()
if license:
license_filename = 'LICENSE.txt'
shutil.copy(license, os.path.join(self.distinfo_dir, license_filename))
pymeta['extensions']['python.details']['document_names']['license'] = license_filename
with open(metadata_json_path, "w") as metadata_json:
json.dump(pymeta, metadata_json, sort_keys=True)
adios(egginfo_path)
def write_record(self, bdist_dir, distinfo_dir):
from wheel.util import urlsafe_b64encode
record_path = os.path.join(distinfo_dir, 'RECORD')
record_relpath = os.path.relpath(record_path, bdist_dir)
def walk():
for dir, dirs, files in os.walk(bdist_dir):
dirs.sort()
for f in sorted(files):
yield os.path.join(dir, f)
def skip(path):
"""Wheel hashes every possible file."""
return (path == record_relpath)
with open_for_csv(record_path, 'w+') as record_file:
writer = csv.writer(record_file)
for path in walk():
relpath = os.path.relpath(path, bdist_dir)
if skip(relpath):
hash = ''
size = ''
else:
with open(path, 'rb') as f:
data = f.read()
digest = hashlib.sha256(data).digest()
hash = 'sha256=' + native(urlsafe_b64encode(digest))
size = len(data)
record_path = os.path.relpath(
path, bdist_dir).replace(os.path.sep, '/')
writer.writerow((record_path, hash, size))
#-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------
from distutils import log as logger
import os.path
#from wheel.bdist_wheel import bdist_wheel
class azure_bdist_wheel(bdist_wheel):
description = "Create an Azure wheel distribution"
user_options = bdist_wheel.user_options + \
[('azure-namespace-package=', None,
"Name of the deepest nspkg used")]
def initialize_options(self):
bdist_wheel.initialize_options(self)
self.azure_namespace_package = None
def finalize_options(self):
bdist_wheel.finalize_options(self)
if self.azure_namespace_package and not self.azure_namespace_package.endswith("-nspkg"):
raise ValueError("azure_namespace_package must finish by -nspkg")
def run(self):
if not self.distribution.install_requires:
self.distribution.install_requires = []
self.distribution.install_requires.append(
"{}>=2.0.0".format(self.azure_namespace_package))
bdist_wheel.run(self)
def write_record(self, bdist_dir, distinfo_dir):
if self.azure_namespace_package:
# Split and remove last part, assuming it's "nspkg"
subparts = self.azure_namespace_package.split('-')[0:-1]
folder_with_init = [os.path.join(*subparts[0:i+1]) for i in range(len(subparts))]
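# Worked example (illustrative annotation): with the setup.cfg value
# azure-namespace-package=azure-storage-nspkg, subparts is ['azure', 'storage']
# and folder_with_init is ['azure', 'azure/storage'], so both namespace
# __init__.py files are stripped from the wheel and supplied instead by the
# azure-storage-nspkg dependency added in run() above.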
for azure_sub_package in folder_with_init:
init_file = os.path.join(bdist_dir, azure_sub_package, '__init__.py')
if os.path.isfile(init_file):
logger.info("manually remove {} while building the wheel".format(init_file))
os.remove(init_file)
else:
raise ValueError("Unable to find {}. Are you sure of your namespace package?".format(init_file))
bdist_wheel.write_record(self, bdist_dir, distinfo_dir)
cmdclass = {
'bdist_wheel': azure_bdist_wheel,
}

Просмотреть файл

@ -0,0 +1,3 @@
[bdist_wheel]
universal=1
azure-namespace-package=azure-storage-nspkg

Просмотреть файл

@ -0,0 +1,91 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from setuptools import setup, find_packages
try:
from azure_bdist_wheel import cmdclass
except ImportError:
from distutils import log as logger
logger.warn("Wheel is not available, disabling bdist_wheel hook")
cmdclass = {}
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
import azure
try:
ver = azure.__version__
raise Exception(
'This package is incompatible with azure=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure".'
)
except AttributeError:
pass
except ImportError:
pass
# azure-storage v0.36.0 and prior are not compatible with this package
try:
import azure.storage
try:
ver = azure.storage.__version__
raise Exception(
'This package is incompatible with azure-storage=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure-storage".'
)
except AttributeError:
pass
except ImportError:
pass
setup(
name='azure-storage-common',
version='0.36.0',
description='Microsoft Azure Storage Common Client Library for Python',
long_description=open('README.rst', 'r').read(),
license='Apache License 2.0',
author='Microsoft Corporation',
author_email='ascl@microsoft.com',
url='https://github.com/Azure/azure-storage-python',
classifiers=[
'Development Status :: 4 - Beta',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: Apache Software License',
],
zip_safe=False,
packages=find_packages(),
install_requires=[
'azure-common>=1.1.5',
'cryptography',
'python-dateutil',
'requests',
] + (['futures'] if sys.version_info < (3, 0) else []),
cmdclass=cmdclass
)

Просмотреть файл

@ -0,0 +1,3 @@
# Breaking Changes azure-storage-file
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.

Просмотреть файл

@ -0,0 +1,3 @@
# Change Log azure-storage-file
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.

Просмотреть файл

@ -0,0 +1,2 @@
include *.rst
include azure_bdist_wheel.py

Просмотреть файл

@ -0,0 +1,183 @@
Microsoft Azure Storage SDK for Python
======================================
This project provides a client library in Python that makes it easy to
consume Microsoft Azure Storage services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ Page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.
Compatibility
=============
**IMPORTANT**: If you have an earlier version of the azure package
(version < 1.0), you should uninstall it before installing this package.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure==0.11.0 (or any version below 1.0), uninstall it first, then install it again:
.. code:: shell
pip uninstall azure
pip install azure
If you are upgrading from a version older than 0.30.0, see the upgrade doc, the
usage samples in the samples directory, and the ChangeLog and BreakingChanges.
**IMPORTANT**: If you have an earlier version of the azure-storage package
(version <= 0.36.0), you should uninstall it before installing the new packages.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure-storage==0.36.0 (or any version below 0.36.0), uninstall it first:
.. code:: shell
pip uninstall azure-storage
Features
========
- Blob
- Create/Read/Update/Delete Containers
- Create/Read/Update/Delete Blobs
- Advanced Blob Operations
- Queue
- Create/Delete Queues
- Insert/Peek Queue Messages
- Advanced Queue Operations
- Table
- Create/Read/Update/Delete Tables
- Create/Read/Update/Delete Entities
- Batch operations
- Advanced Table Operations
- Files
- Create/Update/Delete Shares
- Create/Update/Delete Directories
- Create/Read/Update/Delete Files
- Advanced File Operations
Getting Started
===============
Download
--------
The Azure Storage SDK for Python is composed of 4 packages, each corresponding to a different service:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
Note: prior to and including version 0.36.0, a single package (azure-storage) contained all services. That package is now deprecated; install the per-service packages listed above individually instead.
Option 1: Via PyPi
~~~~~~~~~~~~~~~~~~
To install via the Python Package Index (PyPI), type:
::
pip install azure-storage-blob
pip install azure-storage-file
pip install azure-storage-queue
pip install azure-storage-table
Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~
To get the source code of the SDK via git just type:
::
git clone git://github.com/Azure/azure-storage-python.git
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~
Download a zip of the code via GitHub or PyPi. Then, type:
::
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage
-----
To use this SDK to call Microsoft Azure storage services, you need to
first `create an account`_.
Code Sample
-----------
See the samples directory for blob, queue, table, and file usage samples.
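As a minimal illustration (the account name, key, and share name below are
placeholders), the file package can create a share and upload a local file:

.. code:: python

    from azure.storage.file import FileService

    file_service = FileService(account_name='myaccount', account_key='mykey')
    file_service.create_share('myshare')

    # Upload a local file into the share's root directory.
    file_service.create_file_from_path('myshare', None, 'remote.txt', 'local.txt')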
Need Help?
==========
Be sure to check out the Microsoft Azure `Developer Forums on MSDN`_ or
the `Developer Forums on Stack Overflow`_ if you have trouble with the
provided code.
Contribute Code or Provide Feedback
===================================
If you would like to become an active contributor to this project, please
follow the instructions provided in `Azure Projects Contribution
Guidelines`_. You can find more details for contributing in the `CONTRIBUTING.md doc`_.
If you encounter any bugs with the library, please file an issue in the
`Issues`_ section of the project.
Learn More
==========
- `Python Developer Center`_
- `Azure Storage Service`_
- `Azure Storage Team Blog`_
- `API Reference`_
.. _Python Developer Center: http://azure.microsoft.com/en-us/develop/python/
.. _API Reference: https://azure-storage.readthedocs.io/en/latest/
.. _here: https://github.com/Azure/azure-storage-python/archive/master.zip
.. _create an account: https://account.windowsazure.com/signup
.. _Developer Forums on MSDN: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=windowsazuredata
.. _Developer Forums on Stack Overflow: http://stackoverflow.com/questions/tagged/azure+windows-azure-storage
.. _Azure Projects Contribution Guidelines: http://azure.github.io/guidelines.html
.. _Issues: https://github.com/Azure/azure-storage-python/issues
.. _Azure Storage Service: http://azure.microsoft.com/en-us/documentation/services/storage/
.. _Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
.. _CONTRIBUTING.md doc: CONTRIBUTING.md

Просмотреть файл

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

Просмотреть файл

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

Просмотреть файл

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from .fileservice import FileService
from .models import (
Share,
ShareProperties,
@ -25,5 +26,3 @@ from .models import (
SharePermissions,
FilePermissions,
)

Просмотреть файл

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,8 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from dateutil import parser
try:
from xml.etree import cElementTree as ETree
except ImportError:
@ -26,19 +27,20 @@ from .models import (
ShareProperties,
DirectoryProperties,
)
from ..models import (
from azure.storage.common.models import (
_list,
)
from .._deserialization import (
from azure.storage.common._deserialization import (
_parse_properties,
_parse_metadata,
)
from .._error import _validate_content_match
from .._common_conversion import (
from azure.storage.common._error import _validate_content_match
from azure.storage.common._common_conversion import (
_get_content_md5,
_to_str,
)
def _parse_share(response, name):
if response is None:
return None
@ -47,6 +49,7 @@ def _parse_share(response, name):
props = _parse_properties(response, ShareProperties)
return Share(name, props, metadata)
def _parse_directory(response, name):
if response is None:
return None
@ -55,6 +58,7 @@ def _parse_directory(response, name):
props = _parse_properties(response, DirectoryProperties)
return Directory(name, props, metadata)
def _parse_file(response, name, validate_content=False):
if response is None:
return None
@ -76,6 +80,7 @@ def _parse_file(response, name, validate_content=False):
return File(name, response.body, props, metadata)
def _convert_xml_to_shares(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -104,7 +109,7 @@ def _convert_xml_to_shares(response):
shares = _list()
list_element = ETree.fromstring(response.body)
# Set next marker
next_marker = list_element.findtext('NextMarker') or None
setattr(shares, 'next_marker', next_marker)
@ -128,12 +133,13 @@ def _convert_xml_to_shares(response):
share.properties.last_modified = parser.parse(properties_element.findtext('Last-Modified'))
share.properties.etag = properties_element.findtext('Etag')
share.properties.quota = int(properties_element.findtext('Quota'))
# Add share to list
shares.append(share)
return shares
def _convert_xml_to_directories_and_files(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -159,7 +165,7 @@ def _convert_xml_to_directories_and_files(response):
entries = _list()
list_element = ETree.fromstring(response.body)
# Set next marker
next_marker = list_element.findtext('NextMarker') or None
setattr(entries, 'next_marker', next_marker)
@ -174,7 +180,7 @@ def _convert_xml_to_directories_and_files(response):
# Properties
properties_element = file_element.find('Properties')
file.properties.content_length = int(properties_element.findtext('Content-Length'))
# Add file to list
entries.append(file)
@ -182,12 +188,13 @@ def _convert_xml_to_directories_and_files(response):
# Name element
directory = Directory()
directory.name = directory_element.findtext('Name')
# Add directory to list
entries.append(directory)
return entries
def _convert_xml_to_ranges(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -211,12 +218,13 @@ def _convert_xml_to_ranges(response):
for range_element in ranges_element.findall('Range'):
# Parse range
range = FileRange(int(range_element.findtext('Start')), int(range_element.findtext('End')))
# Add range to list
ranges.append(range)
return ranges
def _convert_xml_to_share_stats(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -228,4 +236,4 @@ def _convert_xml_to_share_stats(response):
return None
share_stats_element = ETree.fromstring(response.body)
return int(share_stats_element.findtext('ShareUsage'))

Просмотреть файл

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,15 +11,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import threading
from time import sleep
from .._error import _ERROR_NO_SINGLE_THREAD_CHUNKING
from azure.storage.common._error import _ERROR_NO_SINGLE_THREAD_CHUNKING
def _download_file_chunks(file_service, share_name, directory_name, file_name,
download_size, block_size, progress, start_range, end_range,
stream, max_connections, progress_callback, validate_content,
timeout, operation_context):
if max_connections <= 1:
raise ValueError(_ERROR_NO_SINGLE_THREAD_CHUNKING.format('file'))
@ -27,7 +27,7 @@ def _download_file_chunks(file_service, share_name, directory_name, file_name,
downloader = _FileChunkDownloader(
file_service,
share_name,
directory_name,
file_name,
download_size,
block_size,
@ -45,9 +45,10 @@ def _download_file_chunks(file_service, share_name, directory_name, file_name,
executor = concurrent.futures.ThreadPoolExecutor(max_connections)
result = list(executor.map(downloader.process_chunk, downloader.get_chunk_offsets()))
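# Note (annotation): executor.map runs process_chunk across up to
# max_connections worker threads; materializing the results with list()
# blocks until every chunk has been written to the stream and re-raises
# the first exception hit by any worker.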
class _FileChunkDownloader(object):
def __init__(self, file_service, share_name, directory_name, file_name,
download_size, chunk_size, progress, start_range, end_range,
stream, progress_callback, validate_content, timeout, operation_context):
self.file_service = file_service
self.share_name = share_name
@ -56,7 +57,7 @@ class _FileChunkDownloader(object):
self.chunk_size = chunk_size
self.download_size = download_size
self.start_index = start_range
self.file_end = end_range
self.stream = stream
@ -109,4 +110,4 @@ class _FileChunkDownloader(object):
validate_content=self.validate_content,
timeout=self.timeout,
_context=self.operation_context
)

Просмотреть файл

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,15 +11,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
from time import time
from wsgiref.handlers import format_date_time
from .._error import (
# --------------------------------------------------------------------------
from azure.storage.common._common_conversion import _str
from azure.storage.common._error import (
_validate_not_none,
_ERROR_START_END_NEEDED_FOR_MD5,
_ERROR_RANGE_TOO_LARGE_FOR_MD5,
)
from .._common_conversion import _str
def _get_path(share_name=None, directory_name=None, file_name=None):
'''
@ -50,7 +49,9 @@ def _get_path(share_name=None, directory_name=None, file_name=None):
else:
return '/'
def _validate_and_format_range_headers(request, start_range, end_range, start_range_required=True,
end_range_required=True, check_content_md5=False):
# If end range is provided, start range must be provided
if start_range_required == True or end_range is not None:
_validate_not_none('start_range', start_range)
@ -71,4 +72,4 @@ def _validate_and_format_range_headers(request, start_range, end_range, start_ra
if end_range - start_range > 4 * 1024 * 1024:
raise ValueError(_ERROR_RANGE_TOO_LARGE_FOR_MD5)
request.headers['x-ms-range-get-content-md5'] = 'true'

Просмотреть файл

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,10 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import threading
from time import sleep
def _upload_file_chunks(file_service, share_name, directory_name, file_name,
file_size, block_size, stream, max_connections,
@ -48,9 +47,10 @@ def _upload_file_chunks(file_service, share_name, directory_name, file_name,
return range_ids
class _FileChunkUploader(object):
def __init__(self, file_service, share_name, directory_name, file_name,
file_size, chunk_size, stream, parallel, progress_callback,
validate_content, timeout):
self.file_service = file_service
self.share_name = share_name
@ -139,4 +139,4 @@ class _FileChunkUploader(object):
)
range_id = 'bytes={0}-{1}'.format(chunk_start, chunk_end)
self._update_progress(len(chunk_data))
return range_id

Просмотреть файл

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,9 +11,35 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
from os import path
from azure.common import AzureHttpError
from .._error import (
from azure.storage.common._auth import (
_StorageSharedKeyAuthentication,
_StorageSASAuthentication,
)
from azure.storage.common._common_conversion import (
_int_to_str,
_to_str,
_get_content_md5,
)
from azure.storage.common._connection import _ServiceParameters
from azure.storage.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
DEV_ACCOUNT_NAME,
)
from azure.storage.common._deserialization import (
_convert_xml_to_service_properties,
_convert_xml_to_signed_identifiers,
_parse_metadata,
_parse_properties,
_parse_length_from_content_range,
)
from azure.storage.common._error import (
_dont_fail_not_exist,
_dont_fail_on_exist,
_validate_not_none,
@ -24,52 +50,23 @@ from .._error import (
_ERROR_PARALLEL_NOT_SEEKABLE,
_validate_access_policies,
)
from .._common_conversion import (
_int_to_str,
_to_str,
_get_content_md5,
)
from .._serialization import (
from azure.storage.common._http import HTTPRequest
from azure.storage.common._serialization import (
_get_request_body,
_get_data_bytes_only,
_convert_signed_identifiers_to_xml,
_convert_service_properties_to_xml,
_add_metadata_headers,
)
from .._deserialization import (
_convert_xml_to_service_properties,
_convert_xml_to_signed_identifiers,
_get_download_size,
_parse_metadata,
_parse_properties,
_parse_length_from_content_range,
)
from ..models import (
from azure.storage.common.models import (
Services,
ListGenerator,
_OperationContext,
)
from .models import (
File,
FileProperties,
)
from .._http import HTTPRequest
from .._auth import (
_StorageSharedKeyAuthentication,
_StorageSASAuthentication,
)
from .._connection import _ServiceParameters
from .._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
DEV_ACCOUNT_NAME,
)
from azure.storage.common.sharedaccesssignature import (
SharedAccessSignature,
)
from azure.storage.common.storageclient import StorageClient
from ._deserialization import (
_convert_xml_to_shares,
_convert_xml_to_directories_and_files,
@ -79,19 +76,23 @@ from ._deserialization import (
_parse_share,
_parse_directory,
)
from ..sharedaccesssignature import (
SharedAccessSignature,
from ._download_chunking import _download_file_chunks
from ._serialization import (
_get_path,
_validate_and_format_range_headers,
)
from ..storageclient import StorageClient
from ._upload_chunking import _upload_file_chunks
from .models import (
FileProperties,
)
if sys.version_info >= (3,):
from io import BytesIO
else:
from cStringIO import StringIO as BytesIO
class FileService(StorageClient):
'''
The Server Message Block (SMB) protocol is the preferred file share protocol
used on premise today. The Microsoft Azure File service enables customers to
@ -122,8 +123,8 @@ class FileService(StorageClient):
MAX_CHUNK_GET_SIZE = 8 * 1024 * 1024
MAX_RANGE_SIZE = 4 * 1024 * 1024
def __init__(self, account_name=None, account_key=None, sas_token=None,
protocol=DEFAULT_PROTOCOL, endpoint_suffix=SERVICE_HOST_BASE,
request_session=None, connection_string=None, socket_timeout=None):
'''
:param str account_name:
@ -155,15 +156,15 @@ class FileService(StorageClient):
'''
service_params = _ServiceParameters.get_service_parameters(
'file',
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
request_session=request_session,
connection_string=connection_string,
socket_timeout=socket_timeout)
super(FileService, self).__init__(service_params)
if self.account_name == DEV_ACCOUNT_NAME:
@ -179,7 +180,7 @@ class FileService(StorageClient):
else:
raise ValueError(_ERROR_STORAGE_MISSING_INFO)
def make_file_url(self, share_name, directory_name, file_name,
protocol=None, sas_token=None):
'''
Creates the url to access a file.
@ -221,8 +222,8 @@ class FileService(StorageClient):
return url
def generate_account_shared_access_signature(self, resource_types, permission,
expiry, start=None, ip=None, protocol=None):
'''
Generates a shared access signature for the file service.
Use the returned signature with the sas_token parameter of the FileService.
@ -267,21 +268,21 @@ class FileService(StorageClient):
_validate_not_none('self.account_key', self.account_key)
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_account(Services.FILE, resource_types, permission,
expiry, start=start, ip=ip, protocol=protocol)
def generate_share_shared_access_signature(self, share_name,
permission=None,
expiry=None,
start=None,
id=None,
ip=None,
protocol=None,
cache_control=None,
content_disposition=None,
content_encoding=None,
content_language=None,
content_type=None):
'''
Generates a shared access signature for the share.
Use the returned signature with the sas_token parameter of FileService.
@ -348,9 +349,9 @@ class FileService(StorageClient):
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_share(
share_name,
permission,
expiry,
start=start,
id=id,
ip=ip,
protocol=protocol,
@ -361,20 +362,20 @@ class FileService(StorageClient):
content_type=content_type,
)
def generate_file_shared_access_signature(self, share_name,
directory_name=None,
file_name=None,
permission=None,
expiry=None,
start=None,
id=None,
ip=None,
protocol=None,
cache_control=None,
content_disposition=None,
content_encoding=None,
content_language=None,
content_type=None):
'''
Generates a shared access signature for the file.
Use the returned signature with the sas_token parameter of FileService.
@ -450,9 +451,9 @@ class FileService(StorageClient):
share_name,
directory_name,
file_name,
permission,
expiry,
start=start,
id=id,
ip=ip,
protocol=protocol,
@ -463,7 +464,7 @@ class FileService(StorageClient):
content_type=content_type,
)
def set_file_service_properties(self, hour_metrics=None, minute_metrics=None,
cors=None, timeout=None):
'''
Sets the properties of a storage account's File service, including
@ -489,9 +490,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path()
request.query = {
'restype': 'service',
'comp': 'properties',
'timeout': _int_to_str(timeout),
}
request.body = _get_request_body(
_convert_service_properties_to_xml(None, hour_metrics, minute_metrics, cors))
@ -514,14 +515,14 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path()
request.query = {
'restype': 'service',
'comp': 'properties',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_service_properties)
def list_shares(self, prefix=None, marker=None, num_results=None,
include_metadata=False, timeout=None):
'''
Returns a generator to list the shares under the specified account.
@ -552,13 +553,13 @@ class FileService(StorageClient):
'''
include = 'metadata' if include_metadata else None
operation_context = _OperationContext(location_lock=True)
kwargs = {'prefix': prefix, 'marker': marker, 'max_results': num_results,
'include': include, 'timeout': timeout, '_context': operation_context}
resp = self._list_shares(**kwargs)
return ListGenerator(resp, self._list_shares, (), kwargs)
def _list_shares(self, prefix=None, marker=None, max_results=None,
include=None, timeout=None, _context=None):
'''
Returns a list of the shares under the specified account.
@ -589,12 +590,12 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path()
request.query = {
'comp': 'list',
'prefix': _to_str(prefix),
'marker': _to_str(marker),
'maxresults': _int_to_str(max_results),
'include': _to_str(include),
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_shares, operation_context=_context)
@ -630,8 +631,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'timeout': _int_to_str(timeout),
}
request.headers = {
'x-ms-share-quota': _int_to_str(quota)
@ -668,8 +669,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _parse_share, [share_name])
@ -693,11 +694,11 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'comp': 'properties',
'timeout': _int_to_str(timeout),
}
request.headers = {
'x-ms-share-quota': _int_to_str(quota)
}
@ -721,9 +722,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'comp': 'metadata',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _parse_metadata)
@ -750,9 +751,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'comp': 'metadata',
'timeout': _int_to_str(timeout),
}
_add_metadata_headers(metadata, request)
@ -775,9 +776,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'comp': 'acl',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_signed_identifiers)
@ -804,9 +805,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'comp': 'acl',
'timeout': _int_to_str(timeout),
}
request.body = _get_request_body(
_convert_signed_identifiers_to_xml(signed_identifiers))
@ -834,9 +835,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'comp': 'stats',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_share_stats)
@ -864,8 +865,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name)
request.query = {
'restype': 'share',
'timeout': _int_to_str(timeout),
}
if not fail_not_exist:
@ -911,8 +912,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name)
request.query = {
'restype': 'directory',
'timeout': _int_to_str(timeout),
}
_add_metadata_headers(metadata, request)
@ -958,8 +959,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name)
request.query = {
'restype': 'directory',
'timeout': _int_to_str(timeout),
}
if not fail_not_exist:
@ -995,8 +996,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name)
request.query = {
'restype': 'directory',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _parse_directory, [directory_name])
@ -1022,9 +1023,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name)
request.query = {
'restype': 'directory',
'comp': 'metadata',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _parse_metadata)
@ -1054,9 +1055,9 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name)
request.query = {
'restype': 'directory',
'comp': 'metadata',
'timeout': _int_to_str(timeout),
}
_add_metadata_headers(metadata, request)
@ -1108,7 +1109,7 @@ class FileService(StorageClient):
return ListGenerator(resp, self._list_directories_and_files, args, kwargs)
def _list_directories_and_files(self, share_name, directory_name=None,
marker=None, max_results=None, timeout=None,
prefix=None, _context=None):
'''
@ -1142,15 +1143,15 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name)
request.query = {
'restype': 'directory',
'comp': 'list',
'prefix': _to_str(prefix),
'marker': _to_str(marker),
'maxresults': _int_to_str(max_results),
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _convert_xml_to_directories_and_files,
operation_context=_context)
def get_file_properties(self, share_name, directory_name, file_name, timeout=None):
@ -1176,7 +1177,7 @@ class FileService(StorageClient):
request.method = 'HEAD'
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {'timeout': _int_to_str(timeout)}
return self._perform_request(request, _parse_file, [file_name])
@ -1211,7 +1212,7 @@ class FileService(StorageClient):
_dont_fail_not_exist(ex)
return False
def resize_file(self, share_name, directory_name,
file_name, content_length, timeout=None):
'''
Resizes a file to the specified size. If the specified byte
@ -1237,16 +1238,16 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'properties',
'timeout': _int_to_str(timeout),
}
request.headers = {
'x-ms-content-length': _to_str(content_length)
}
self._perform_request(request)
def set_file_properties(self, share_name, directory_name, file_name,
content_settings, timeout=None):
'''
Sets system properties on the file. If one property is set for the
@ -1271,8 +1272,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'properties',
'timeout': _int_to_str(timeout),
}
request.headers = content_settings._to_headers()
@ -1301,13 +1302,13 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'metadata',
'timeout': _int_to_str(timeout),
}
return self._perform_request(request, _parse_metadata)
def set_file_metadata(self, share_name, directory_name,
file_name, metadata=None, timeout=None):
'''
Sets user-defined metadata for the specified file as one or more
@ -1334,8 +1335,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'metadata',
'timeout': _int_to_str(timeout),
}
_add_metadata_headers(metadata, request)
@ -1387,9 +1388,9 @@ class FileService(StorageClient):
request.method = 'PUT'
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {'timeout': _int_to_str(timeout)}
request.headers = {
'x-ms-copy-source': _to_str(copy_source),
}
_add_metadata_headers(metadata, request)
@ -1420,12 +1421,12 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'copy',
'copyid': _to_str(copy_id),
'timeout': _int_to_str(timeout),
}
request.headers = {
'x-ms-copy-action': 'abort',
}
self._perform_request(request)
@ -1450,12 +1451,12 @@ class FileService(StorageClient):
request.method = 'DELETE'
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {'timeout': _int_to_str(timeout)}
self._perform_request(request)
def create_file(self, share_name, directory_name, file_name,
content_length, content_settings=None, metadata=None,
timeout=None):
'''
Creates a new file.
@ -1487,10 +1488,10 @@ class FileService(StorageClient):
request.method = 'PUT'
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {'timeout': _int_to_str(timeout)}
request.headers = {
'x-ms-content-length': _to_str(content_length),
'x-ms-type': 'file'
}
_add_metadata_headers(metadata, request)
if content_settings is not None:
@ -1498,10 +1499,10 @@ class FileService(StorageClient):
self._perform_request(request)
def create_file_from_path(self, share_name, directory_name, file_name,
local_file_path, content_settings=None,
metadata=None, validate_content=False, progress_callback=None,
max_connections=2, timeout=None):
'''
Creates a new Azure file from a local file path, or updates the content of an
existing file, with automatic chunking and progress notifications.
@ -1549,9 +1550,9 @@ class FileService(StorageClient):
count, content_settings, metadata, validate_content, progress_callback,
max_connections, timeout)
def create_file_from_text(self, share_name, directory_name, file_name,
text, encoding='utf-8', content_settings=None,
metadata=None, validate_content=False, timeout=None):
'''
Creates a new file from str/unicode, or updates the content of an
existing file, with automatic chunking and progress notifications.
@ -1592,15 +1593,15 @@ class FileService(StorageClient):
text = text.encode(encoding)
self.create_file_from_bytes(
share_name, directory_name, file_name, text, count=len(text),
content_settings=content_settings, metadata=metadata,
validate_content=validate_content, timeout=timeout)
def create_file_from_bytes(
self, share_name, directory_name, file_name, file,
index=0, count=None, content_settings=None, metadata=None,
validate_content=False, progress_callback=None, max_connections=2,
timeout=None):
'''
Creates a new file from an array of bytes, or updates the content
of an existing file, with automatic chunking and progress
@ -1663,9 +1664,9 @@ class FileService(StorageClient):
max_connections, timeout)
def create_file_from_stream(
self, share_name, directory_name, file_name, stream, count,
content_settings=None, metadata=None, validate_content=False,
progress_callback=None, max_connections=2, timeout=None):
'''
Creates a new file from a file/stream, or updates the content of an
existing file, with automatic chunking and progress notifications.
@ -1739,8 +1740,8 @@ class FileService(StorageClient):
)
def _get_file(self, share_name, directory_name, file_name,
start_range=None, end_range=None, validate_content=False,
timeout=None, _context=None):
'''
Downloads a file's content, metadata, and properties. You can specify a
range if you don't need to download the file in its entirety. If no range
@ -1780,7 +1781,7 @@ class FileService(StorageClient):
request.method = 'GET'
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {'timeout': _int_to_str(timeout)}
_validate_and_format_range_headers(
request,
start_range,
@ -1789,7 +1790,7 @@ class FileService(StorageClient):
end_range_required=False,
check_content_md5=validate_content)
return self._perform_request(request, _parse_file,
[file_name, validate_content],
operation_context=_context)
@ -1876,9 +1877,9 @@ class FileService(StorageClient):
return file
def get_file_to_stream(
self, share_name, directory_name, file_name, stream,
start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, timeout=None):
'''
Downloads a file to a stream, with automatic chunking and progress
notifications. Returns an instance of :class:`File` with properties
@ -1957,10 +1958,10 @@ class FileService(StorageClient):
# If max_connections is greater than 1, do the first get to establish the
# size of the file and get the first segment of data
else:
if sys.version_info >= (3,) and not stream.seekable():
raise ValueError(_ERROR_PARALLEL_NOT_SEEKABLE)
# The service only provides transactional MD5s for chunks under 4MB.
# If validate_content is on, get only self.MAX_CHUNK_GET_SIZE for the first
# chunk so a transactional MD5 can be retrieved.
@ -2025,16 +2026,16 @@ class FileService(StorageClient):
# If the file is small or single shot download was used, the download is
# complete at this point. If file size is large, use parallel download.
if file.properties.content_length != download_size:
# At this point we would like to lock on something like the etag so that
# if the file is modified, we don't get a corrupted download. However,
# this feature is not yet available on the file service.
end_file = file_size
if end_range:
# Use the end_range unless it is over the end of the file
end_file = min(file_size, end_range + 1)
_download_file_chunks(
self,
share_name,
@ -2043,7 +2044,7 @@ class FileService(StorageClient):
download_size,
self.MAX_CHUNK_GET_SIZE,
first_get_size,
initial_request_end + 1, # start where the first download ended
end_file,
stream,
max_connections,
@ -2067,7 +2068,7 @@ class FileService(StorageClient):
return file
def get_file_to_bytes(self, share_name, directory_name, file_name,
start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, timeout=None):
'''
@ -2147,9 +2148,9 @@ class FileService(StorageClient):
return file
def get_file_to_text(
self, share_name, directory_name, file_name, encoding='utf-8',
start_range=None, end_range=None, validate_content=False,
progress_callback=None, max_connections=2, timeout=None):
'''
Downloads a file as unicode text, with automatic chunking and progress
notifications. Returns an instance of :class:`File` with properties,
@ -2227,7 +2228,7 @@ class FileService(StorageClient):
file.content = file.content.decode(encoding)
return file
def update_range(self, share_name, directory_name, file_name, data,
start_range, end_range, validate_content=False, timeout=None):
'''
Writes the bytes specified by the request body into the specified range.
@ -2268,11 +2269,11 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'range',
'timeout': _int_to_str(timeout),
}
request.headers = {
'x-ms-write': 'update',
}
_validate_and_format_range_headers(
request, start_range, end_range)
@ -2316,12 +2317,12 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'range',
'timeout': _int_to_str(timeout),
}
request.headers = {
'Content-Length': '0',
'x-ms-write': 'clear',
}
_validate_and_format_range_headers(
request, start_range, end_range)
@ -2359,8 +2360,8 @@ class FileService(StorageClient):
request.host_locations = self._get_host_locations()
request.path = _get_path(share_name, directory_name, file_name)
request.query = {
'comp': 'rangelist',
'timeout': _int_to_str(timeout),
}
if start_range is not None:
_validate_and_format_range_headers(
@ -2370,4 +2371,4 @@ class FileService(StorageClient):
start_range_required=False,
end_range_required=False)
return self._perform_request(request, _convert_xml_to_ranges)
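# Illustrative sketch (not part of the original module) of the range APIs
# above, assuming a FileService instance named `service` and an existing
# share and file:
#
#   service.update_range('myshare', None, 'myfile.txt', b'abcd',
#                        start_range=0, end_range=3)
#   service.clear_range('myshare', None, 'myfile.txt', start_range=0, end_range=511)
#   ranges = service.list_ranges('myshare', None, 'myfile.txt')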

Просмотреть файл

@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,10 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.storage.common._common_conversion import _to_str
class Share(object):
'''
File share class.
@ -37,7 +38,6 @@ class Share(object):
class ShareProperties(object):
'''
File share's properties class.
@ -55,8 +55,8 @@ class ShareProperties(object):
self.etag = None
self.quota = None
class Directory(object):
'''
Directory class.
@ -77,8 +77,8 @@ class Directory(object):
self.properties = props or DirectoryProperties()
self.metadata = metadata
class DirectoryProperties(object):
'''
File directory's properties class.
@ -96,8 +96,8 @@ class DirectoryProperties(object):
self.etag = None
self.server_encrypted = None
class File(object):
'''
File class.
@ -124,7 +124,6 @@ class File(object):
class FileProperties(object):
'''
File Properties.
@ -159,7 +158,6 @@ class FileProperties(object):
class ContentSettings(object):
'''
Used to store the content settings of a file.
@ -187,10 +185,9 @@ class ContentSettings(object):
'''
def __init__(
self, content_type=None, content_encoding=None,
content_language=None, content_disposition=None,
cache_control=None, content_md5=None):
self.content_type = content_type
self.content_encoding = content_encoding
self.content_language = content_language
@ -259,7 +256,6 @@ class CopyProperties(object):
class FileRange(object):
'''
File Range.
@ -273,8 +269,8 @@ class FileRange(object):
self.start = start
self.end = end
class FilePermissions(object):
'''
FilePermissions class to be used with
:func:`~azure.storage.file.fileservice.FileService.generate_file_shared_access_signature` API.
@ -290,7 +286,8 @@ class FilePermissions(object):
Create or write content, properties, metadata. Resize the file. Use the file
as the destination of a copy operation within the same account.
'''
def __init__(self, read=False, create=False, write=False, delete=False,
_str=None):
'''
:param bool read:
@ -313,13 +310,13 @@ class FilePermissions(object):
self.create = create or ('c' in _str)
self.write = write or ('w' in _str)
self.delete = delete or ('d' in _str)
def __or__(self, other):
return FilePermissions(_str=str(self) + str(other))
def __add__(self, other):
return FilePermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('c' if self.create else '') +
@ -334,7 +331,6 @@ FilePermissions.WRITE = FilePermissions(write=True)
class SharePermissions(object):
'''
SharePermissions class to be used with `azure.storage.file.FileService.generate_share_shared_access_signature`
method and for the AccessPolicies used with `azure.storage.file.FileService.set_share_acl`.
@ -355,7 +351,8 @@ class SharePermissions(object):
Note: You cannot grant permissions to read or write share properties or
metadata with a service SAS. Use an account SAS instead.
'''
def __init__(self, read=False, write=False, delete=False, list=False,
_str=None):
'''
:param bool read:
@ -383,20 +380,21 @@ class SharePermissions(object):
self.write = write or ('w' in _str)
self.delete = delete or ('d' in _str)
self.list = list or ('l' in _str)
def __or__(self, other):
return SharePermissions(_str=str(self) + str(other))
def __add__(self, other):
return SharePermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('w' if self.write else '') +
('d' if self.delete else '') +
('l' if self.list else ''))
SharePermissions.DELETE = SharePermissions(delete=True)
SharePermissions.LIST = SharePermissions(list=True)
SharePermissions.READ = SharePermissions(read=True)
SharePermissions.WRITE = SharePermissions(write=True)
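# Illustrative sketch (not part of the original module) of composing the
# permission flags defined above:
#
#   read_write = FilePermissions.READ + FilePermissions.WRITE
#   str(read_write)   # -> 'rw'
#   full_share = SharePermissions(read=True, write=True, delete=True, list=True)
#   str(full_share)   # -> 'rwdl'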

Просмотреть файл

@ -0,0 +1,533 @@
"""
"wheel" copyright (c) 2012-2017 Daniel Holth <dholth@fastmail.fm> and
contributors.
The MIT License
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
Create an Azure wheel (.whl) distribution (a wheel is a built archive format).
This file is a copy of the official bdist_wheel file from wheel 0.30.0a0, extended
at the bottom with a Microsoft extension for the Azure SDK for Python.
"""
import csv
import hashlib
import os
import subprocess
import warnings
import shutil
import json
import sys
try:
import sysconfig
except ImportError: # pragma nocover
# Python < 2.7
import distutils.sysconfig as sysconfig
import pkg_resources
safe_name = pkg_resources.safe_name
safe_version = pkg_resources.safe_version
from shutil import rmtree
from email.generator import Generator
from distutils.core import Command
from distutils.sysconfig import get_python_version
from distutils import log as logger
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag, get_platform
from wheel.util import native, open_for_csv
from wheel.archive import archive_wheelfile
from wheel.pkginfo import read_pkg_info, write_pkg_info
from wheel.metadata import pkginfo_to_dict
from wheel import pep425tags, metadata
from wheel import __version__ as wheel_version
def safer_name(name):
return safe_name(name).replace('-', '_')
def safer_version(version):
return safe_version(version).replace('-', '_')
class bdist_wheel(Command):
description = 'create a wheel distribution'
user_options = [('bdist-dir=', 'b',
"temporary directory for creating the distribution"),
('plat-name=', 'p',
"platform name to embed in generated filenames "
"(default: %s)" % get_platform()),
('keep-temp', 'k',
"keep the pseudo-installation tree around after " +
"creating the distribution archive"),
('dist-dir=', 'd',
"directory to put final built distributions in"),
('skip-build', None,
"skip rebuilding everything (for testing/debugging)"),
('relative', None,
"build the archive using relative paths"
"(default: false)"),
('owner=', 'u',
"Owner name used when creating a tar file"
" [default: current user]"),
('group=', 'g',
"Group name used when creating a tar file"
" [default: current group]"),
('universal', None,
"make a universal wheel"
" (default: false)"),
('python-tag=', None,
"Python implementation compatibility tag"
" (default: py%s)" % get_impl_ver()[0]),
]
boolean_options = ['keep-temp', 'skip-build', 'relative', 'universal']
def initialize_options(self):
self.bdist_dir = None
self.data_dir = None
self.plat_name = None
self.plat_tag = None
self.format = 'zip'
self.keep_temp = False
self.dist_dir = None
self.distinfo_dir = None
self.egginfo_dir = None
self.root_is_pure = None
self.skip_build = None
self.relative = False
self.owner = None
self.group = None
self.universal = False
self.python_tag = 'py' + get_impl_ver()[0]
self.plat_name_supplied = False
def finalize_options(self):
if self.bdist_dir is None:
bdist_base = self.get_finalized_command('bdist').bdist_base
self.bdist_dir = os.path.join(bdist_base, 'wheel')
self.data_dir = self.wheel_dist_name + '.data'
self.plat_name_supplied = self.plat_name is not None
need_options = ('dist_dir', 'plat_name', 'skip_build')
self.set_undefined_options('bdist',
*zip(need_options, need_options))
self.root_is_pure = not (self.distribution.has_ext_modules()
or self.distribution.has_c_libraries())
# Support legacy [wheel] section for setting universal
wheel = self.distribution.get_option_dict('wheel')
if 'universal' in wheel:
# please don't define this in your global configs
val = wheel['universal'][1].strip()
if val.lower() in ('1', 'true', 'yes'):
self.universal = True
@property
def wheel_dist_name(self):
"""Return distribution full name with - replaced with _"""
return '-'.join((safer_name(self.distribution.get_name()),
safer_version(self.distribution.get_version())))
def get_tag(self):
# bdist sets self.plat_name if unset, we should only use it for purepy
# wheels if the user supplied it.
if self.plat_name_supplied:
plat_name = self.plat_name
elif self.root_is_pure:
plat_name = 'any'
else:
plat_name = self.plat_name or get_platform()
if plat_name in ('linux-x86_64', 'linux_x86_64') and sys.maxsize == 2147483647:
plat_name = 'linux_i686'
plat_name = plat_name.replace('-', '_').replace('.', '_')
if self.root_is_pure:
if self.universal:
impl = 'py2.py3'
else:
impl = self.python_tag
tag = (impl, 'none', plat_name)
else:
impl_name = get_abbr_impl()
impl_ver = get_impl_ver()
# PEP 3149
abi_tag = str(get_abi_tag()).lower()
tag = (impl_name + impl_ver, abi_tag, plat_name)
supported_tags = pep425tags.get_supported(
supplied_platform=plat_name if self.plat_name_supplied else None)
# XXX switch to this alternate implementation for non-pure:
assert tag == supported_tags[0], "%s != %s" % (tag, supported_tags[0])
return tag
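# Illustrative note: for a pure, universal wheel (universal=1 in setup.cfg),
# the tuple returned above is ('py2.py3', 'none', 'any'), which yields archive
# basenames such as azure_storage_file-0.36.0-py2.py3-none-any.whl below.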
def get_archive_basename(self):
"""Return archive name without extension"""
impl_tag, abi_tag, plat_tag = self.get_tag()
archive_basename = "%s-%s-%s-%s" % (
self.wheel_dist_name,
impl_tag,
abi_tag,
plat_tag)
return archive_basename
def run(self):
build_scripts = self.reinitialize_command('build_scripts')
build_scripts.executable = 'python'
if not self.skip_build:
self.run_command('build')
install = self.reinitialize_command('install',
reinit_subcommands=True)
install.root = self.bdist_dir
install.compile = False
install.skip_build = self.skip_build
install.warn_dir = False
# A wheel without setuptools scripts is more cross-platform.
# Use the (undocumented) `no_ep` option to setuptools'
# install_scripts command to avoid creating entry point scripts.
install_scripts = self.reinitialize_command('install_scripts')
install_scripts.no_ep = True
# Use a custom scheme for the archive, because we have to decide
# at installation time which scheme to use.
for key in ('headers', 'scripts', 'data', 'purelib', 'platlib'):
setattr(install,
'install_' + key,
os.path.join(self.data_dir, key))
basedir_observed = ''
if os.name == 'nt':
# win32 barfs if any of these are ''; could be '.'?
# (distutils.command.install:change_roots bug)
basedir_observed = os.path.normpath(os.path.join(self.data_dir, '..'))
self.install_libbase = self.install_lib = basedir_observed
setattr(install,
'install_purelib' if self.root_is_pure else 'install_platlib',
basedir_observed)
logger.info("installing to %s", self.bdist_dir)
self.run_command('install')
archive_basename = self.get_archive_basename()
pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
if not self.relative:
archive_root = self.bdist_dir
else:
archive_root = os.path.join(
self.bdist_dir,
self._ensure_relative(install.install_base))
self.set_undefined_options(
'install_egg_info', ('target', 'egginfo_dir'))
self.distinfo_dir = os.path.join(self.bdist_dir,
'%s.dist-info' % self.wheel_dist_name)
self.egg2dist(self.egginfo_dir,
self.distinfo_dir)
self.write_wheelfile(self.distinfo_dir)
self.write_record(self.bdist_dir, self.distinfo_dir)
# Make the archive
if not os.path.exists(self.dist_dir):
os.makedirs(self.dist_dir)
wheel_name = archive_wheelfile(pseudoinstall_root, archive_root)
# Sign the archive
if 'WHEEL_TOOL' in os.environ:
subprocess.call([os.environ['WHEEL_TOOL'], 'sign', wheel_name])
# Add to 'Distribution.dist_files' so that the "upload" command works
getattr(self.distribution, 'dist_files', []).append(
('bdist_wheel', get_python_version(), wheel_name))
if not self.keep_temp:
if self.dry_run:
logger.info('removing %s', self.bdist_dir)
else:
rmtree(self.bdist_dir)
def write_wheelfile(self, wheelfile_base, generator='bdist_wheel (' + wheel_version + ')'):
from email.message import Message
msg = Message()
msg['Wheel-Version'] = '1.0' # of the spec
msg['Generator'] = generator
msg['Root-Is-Purelib'] = str(self.root_is_pure).lower()
# Doesn't work for bdist_wininst
impl_tag, abi_tag, plat_tag = self.get_tag()
for impl in impl_tag.split('.'):
for abi in abi_tag.split('.'):
for plat in plat_tag.split('.'):
msg['Tag'] = '-'.join((impl, abi, plat))
wheelfile_path = os.path.join(wheelfile_base, 'WHEEL')
logger.info('creating %s', wheelfile_path)
with open(wheelfile_path, 'w') as f:
Generator(f, maxheaderlen=0).flatten(msg)
def _ensure_relative(self, path):
# copied from dir_util, deleted
drive, path = os.path.splitdrive(path)
if path[0:1] == os.sep:
path = drive + path[1:]
return path
def _pkginfo_to_metadata(self, egg_info_path, pkginfo_path):
return metadata.pkginfo_to_metadata(egg_info_path, pkginfo_path)
def license_file(self):
"""Return license filename from a license-file key in setup.cfg, or None."""
metadata = self.distribution.get_option_dict('metadata')
if 'license_file' not in metadata:
return None
return metadata['license_file'][1]
def setupcfg_requirements(self):
"""Generate requirements from setup.cfg as
('Requires-Dist', 'requirement; qualifier') tuples. From a metadata
section in setup.cfg:
[metadata]
provides-extra = extra1
extra2
requires-dist = requirement; qualifier
another; qualifier2
unqualified
Yields
('Provides-Extra', 'extra1'),
('Provides-Extra', 'extra2'),
('Requires-Dist', 'requirement; qualifier'),
('Requires-Dist', 'another; qualifier2'),
('Requires-Dist', 'unqualified')
"""
metadata = self.distribution.get_option_dict('metadata')
# our .ini parser folds - to _ in key names:
for key, title in (('provides_extra', 'Provides-Extra'),
('requires_dist', 'Requires-Dist')):
if key not in metadata:
continue
field = metadata[key]
for line in field[1].splitlines():
line = line.strip()
if not line:
continue
yield (title, line)
def add_requirements(self, metadata_path):
"""Add additional requirements from setup.cfg to file metadata_path"""
additional = list(self.setupcfg_requirements())
if not additional: return
pkg_info = read_pkg_info(metadata_path)
if 'Provides-Extra' in pkg_info or 'Requires-Dist' in pkg_info:
warnings.warn('setup.cfg requirements overwrite values from setup.py')
del pkg_info['Provides-Extra']
del pkg_info['Requires-Dist']
for k, v in additional:
pkg_info[k] = v
write_pkg_info(metadata_path, pkg_info)
def egg2dist(self, egginfo_path, distinfo_path):
"""Convert an .egg-info directory into a .dist-info directory"""
def adios(p):
"""Appropriately delete directory, file or link."""
if os.path.exists(p) and not os.path.islink(p) and os.path.isdir(p):
shutil.rmtree(p)
elif os.path.exists(p):
os.unlink(p)
adios(distinfo_path)
if not os.path.exists(egginfo_path):
# There is no egg-info. This is probably because the egg-info
# file/directory is not named matching the distribution name used
# to name the archive file. Check for this case and report
# accordingly.
import glob
pat = os.path.join(os.path.dirname(egginfo_path), '*.egg-info')
possible = glob.glob(pat)
err = "Egg metadata expected at %s but not found" % (egginfo_path,)
if possible:
alt = os.path.basename(possible[0])
err += " (%s found - possible misnamed archive file?)" % (alt,)
raise ValueError(err)
if os.path.isfile(egginfo_path):
# .egg-info is a single file
pkginfo_path = egginfo_path
pkg_info = self._pkginfo_to_metadata(egginfo_path, egginfo_path)
os.mkdir(distinfo_path)
else:
# .egg-info is a directory
pkginfo_path = os.path.join(egginfo_path, 'PKG-INFO')
pkg_info = self._pkginfo_to_metadata(egginfo_path, pkginfo_path)
# ignore common egg metadata that is useless to wheel
shutil.copytree(egginfo_path, distinfo_path,
ignore=lambda x, y: set(('PKG-INFO',
'requires.txt',
'SOURCES.txt',
'not-zip-safe',)))
# delete dependency_links if it is only whitespace
dependency_links_path = os.path.join(distinfo_path, 'dependency_links.txt')
with open(dependency_links_path, 'r') as dependency_links_file:
dependency_links = dependency_links_file.read().strip()
if not dependency_links:
adios(dependency_links_path)
write_pkg_info(os.path.join(distinfo_path, 'METADATA'), pkg_info)
# XXX deprecated. Still useful for current distribute/setuptools.
metadata_path = os.path.join(distinfo_path, 'METADATA')
self.add_requirements(metadata_path)
# XXX intentionally a different path than the PEP.
metadata_json_path = os.path.join(distinfo_path, 'metadata.json')
pymeta = pkginfo_to_dict(metadata_path,
distribution=self.distribution)
if 'description' in pymeta:
description_filename = 'DESCRIPTION.rst'
description_text = pymeta.pop('description')
description_path = os.path.join(distinfo_path,
description_filename)
with open(description_path, "wb") as description_file:
description_file.write(description_text.encode('utf-8'))
pymeta['extensions']['python.details']['document_names']['description'] = description_filename
# XXX heuristically copy any LICENSE/LICENSE.txt?
license = self.license_file()
if license:
license_filename = 'LICENSE.txt'
shutil.copy(license, os.path.join(self.distinfo_dir, license_filename))
pymeta['extensions']['python.details']['document_names']['license'] = license_filename
with open(metadata_json_path, "w") as metadata_json:
json.dump(pymeta, metadata_json, sort_keys=True)
adios(egginfo_path)
def write_record(self, bdist_dir, distinfo_dir):
from wheel.util import urlsafe_b64encode
record_path = os.path.join(distinfo_dir, 'RECORD')
record_relpath = os.path.relpath(record_path, bdist_dir)
def walk():
for dir, dirs, files in os.walk(bdist_dir):
dirs.sort()
for f in sorted(files):
yield os.path.join(dir, f)
def skip(path):
"""Wheel hashes every possible file."""
return (path == record_relpath)
with open_for_csv(record_path, 'w+') as record_file:
writer = csv.writer(record_file)
for path in walk():
relpath = os.path.relpath(path, bdist_dir)
if skip(relpath):
hash = ''
size = ''
else:
with open(path, 'rb') as f:
data = f.read()
digest = hashlib.sha256(data).digest()
hash = 'sha256=' + native(urlsafe_b64encode(digest))
size = len(data)
record_path = os.path.relpath(
path, bdist_dir).replace(os.path.sep, '/')
writer.writerow((record_path, hash, size))
#-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------
from distutils import log as logger
import os.path
#from wheel.bdist_wheel import bdist_wheel
class azure_bdist_wheel(bdist_wheel):
description = "Create an Azure wheel distribution"
user_options = bdist_wheel.user_options + \
[('azure-namespace-package=', None,
"Name of the deepest nspkg used")]
def initialize_options(self):
bdist_wheel.initialize_options(self)
self.azure_namespace_package = None
def finalize_options(self):
bdist_wheel.finalize_options(self)
if self.azure_namespace_package and not self.azure_namespace_package.endswith("-nspkg"):
raise ValueError("azure_namespace_package must finish by -nspkg")
def run(self):
if not self.distribution.install_requires:
self.distribution.install_requires = []
self.distribution.install_requires.append(
"{}>=2.0.0".format(self.azure_namespace_package))
bdist_wheel.run(self)
def write_record(self, bdist_dir, distinfo_dir):
if self.azure_namespace_package:
# Split and remove last part, assuming it's "nspkg"
subparts = self.azure_namespace_package.split('-')[0:-1]
folder_with_init = [os.path.join(*subparts[0:i+1]) for i in range(len(subparts))]
for azure_sub_package in folder_with_init:
init_file = os.path.join(bdist_dir, azure_sub_package, '__init__.py')
if os.path.isfile(init_file):
logger.info("manually remove {} while building the wheel".format(init_file))
os.remove(init_file)
else:
raise ValueError("Unable to find {}. Are you sure of your namespace package?".format(init_file))
bdist_wheel.write_record(self, bdist_dir, distinfo_dir)
cmdclass = {
'bdist_wheel': azure_bdist_wheel,
}
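# Wiring sketch, as suggested by this commit's layout: each split package's
# setup.py imports this cmdclass and passes it to setuptools.setup(), while
# its setup.cfg supplies the option consumed above, e.g.
#
#   [bdist_wheel]
#   universal=1
#   azure-namespace-package=azure-storage-nspkg
#
# Building the wheel then drops azure/__init__.py and azure/storage/__init__.py
# from the archive and records a runtime dependency on azure-storage-nspkg>=2.0.0.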

Просмотреть файл

@ -0,0 +1,3 @@
[bdist_wheel]
universal=1
azure-namespace-package=azure-storage-nspkg

Просмотреть файл

@ -0,0 +1,92 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from setuptools import setup, find_packages
try:
from azure_bdist_wheel import cmdclass
except ImportError:
from distutils import log as logger
logger.warn("Wheel is not available, disabling bdist_wheel hook")
cmdclass = {}
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
import azure
try:
ver = azure.__version__
raise Exception(
'This package is incompatible with azure=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure".'
)
except AttributeError:
pass
except ImportError:
pass
# azure-storage v0.36.0 and prior are not compatible with this package
try:
import azure.storage
try:
ver = azure.storage.__version__
raise Exception(
'This package is incompatible with azure-storage=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure-storage".'
)
except AttributeError:
pass
except ImportError:
pass
setup(
name='azure-storage-file',
version='0.36.0',
description='Microsoft Azure Storage File Client Library for Python',
long_description=open('README.rst', 'r').read(),
license='Apache License 2.0',
author='Microsoft Corporation',
author_email='ascl@microsoft.com',
url='https://github.com/Azure/azure-storage-python',
classifiers=[
'Development Status :: 4 - Beta',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: Apache Software License',
],
zip_safe=False,
packages=find_packages(),
install_requires=[
'azure-common>=1.1.5',
'cryptography',
'python-dateutil',
'requests',
'azure-storage-common'
] + (['futures'] if sys.version_info < (3, 0) else []),
cmdclass=cmdclass
)

Просмотреть файл

@ -0,0 +1 @@
include *.rst

Просмотреть файл

@ -0,0 +1,11 @@
Microsoft Azure Storage SDK for Python
======================================
This is the Microsoft Azure Storage namespace package.
This package is not intended to be installed directly by the end user.
It provides the necessary files for other packages to extend the azure.storage namespace.
If you are looking to install the Azure Storage libraries, see the
`azure <https://pypi.python.org/pypi/azure>`__ bundle package.
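As a rough sketch of what this means in practice, the package ships only
namespace declarations of the form:

::

    __import__('pkg_resources').declare_namespace(__name__)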

Просмотреть файл

Просмотреть файл

@ -0,0 +1,2 @@
[bdist_wheel]
universal=1

Просмотреть файл

@ -1,6 +1,6 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -13,21 +13,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from setuptools import setup
import sys
try:
from azure_bdist_wheel import cmdclass
except ImportError:
from distutils import log as logger
logger.warn("Wheel is not available, disabling bdist_wheel hook")
cmdclass = {}
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
import azure
try:
ver = azure.__version__
raise Exception(
@ -40,16 +34,16 @@ except ImportError:
pass
setup(
name='azure-storage-nspkg',
version='2.0.0',
description='Microsoft Azure Storage Namespace Package [Internal]',
long_description=open('README.rst', 'r').read(),
license='Apache License 2.0',
author='Microsoft Corporation',
author_email='ascl@microsoft.com',
url='https://github.com/Azure/azure-storage-python',
classifiers=[
'Development Status :: 5 - Production/Stable',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
@ -57,23 +51,15 @@ setup(
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: Apache Software License',
],
zip_safe=False,
packages=[
'azure',
'azure.storage',
],
install_requires=[
'azure-nspkg>=2.0.0',
]
)

Просмотреть файл

@ -1,179 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<SchemaVersion>2.0</SchemaVersion>
<ProjectGuid>{c443d9a8-dcf3-4b96-bf55-0797045be2b8}</ProjectGuid>
<ProjectHome />
<StartupFile>
</StartupFile>
<WorkingDirectory>.</WorkingDirectory>
<OutputPath>.</OutputPath>
<ProjectTypeGuids>{888888a0-9f3d-457c-b088-3a5042f75d52}</ProjectTypeGuids>
<LaunchProvider>Standard Python launcher</LaunchProvider>
<InterpreterId>{2af0f10d-7135-4994-9156-5d01c9c11b7e}</InterpreterId>
<InterpreterVersion>3.4</InterpreterVersion>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Debug'" />
<PropertyGroup Condition="'$(Configuration)' == 'Release'" />
<PropertyGroup>
<VisualStudioVersion Condition=" '$(VisualStudioVersion)' == '' ">10.0</VisualStudioVersion>
<PtvsTargetsFile>$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Python Tools\Microsoft.PythonTools.targets</PtvsTargetsFile>
</PropertyGroup>
<ItemGroup>
<Compile Include="azure\storage\blob\_encryption.py" />
<Compile Include="azure\storage\blob\_upload_chunking.py" />
<Compile Include="azure\storage\file\_upload_chunking.py" />
<Compile Include="azure\storage\retry.py" />
<Compile Include="azure\storage\queue\_encryption.py" />
<Compile Include="azure\storage\table\_encryption.py" />
<Compile Include="azure\storage\_auth.py" />
<Compile Include="azure\storage\blob\appendblobservice.py" />
<Compile Include="azure\storage\blob\blockblobservice.py" />
<Compile Include="azure\storage\blob\models.py" />
<Compile Include="azure\storage\blob\pageblobservice.py" />
<Compile Include="azure\storage\blob\baseblobservice.py" />
<Compile Include="azure\storage\blob\_download_chunking.py" />
<Compile Include="azure\storage\blob\_deserialization.py" />
<Compile Include="azure\storage\blob\_error.py" />
<Compile Include="azure\storage\blob\_serialization.py" />
<Compile Include="azure\storage\blob\__init__.py" />
<Compile Include="azure\storage\cloudstorageaccount.py" />
<Compile Include="azure\storage\_connection.py" />
<Compile Include="azure\storage\_constants.py" />
<Compile Include="azure\storage\file\fileservice.py" />
<Compile Include="azure\storage\file\models.py" />
<Compile Include="azure\storage\file\_download_chunking.py" />
<Compile Include="azure\storage\file\_deserialization.py" />
<Compile Include="azure\storage\file\_serialization.py" />
<Compile Include="azure\storage\file\__init__.py" />
<Compile Include="azure\storage\models.py" />
<Compile Include="azure\storage\queue\models.py" />
<Compile Include="azure\storage\queue\queueservice.py" />
<Compile Include="azure\storage\queue\_deserialization.py" />
<Compile Include="azure\storage\queue\_error.py" />
<Compile Include="azure\storage\queue\_serialization.py" />
<Compile Include="azure\storage\queue\__init__.py" />
<Compile Include="azure\storage\sharedaccesssignature.py" />
<Compile Include="azure\storage\storageclient.py" />
<Compile Include="azure\storage\table\tablebatch.py" />
<Compile Include="azure\storage\table\models.py" />
<Compile Include="azure\storage\table\tableservice.py" />
<Compile Include="azure\storage\table\_deserialization.py" />
<Compile Include="azure\storage\table\_error.py" />
<Compile Include="azure\storage\table\_request.py" />
<Compile Include="azure\storage\table\_serialization.py" />
<Compile Include="azure\storage\table\__init__.py" />
<Compile Include="azure\storage\_common_conversion.py" />
<Compile Include="azure\storage\_encryption.py" />
<Compile Include="azure\storage\_error.py" />
<Compile Include="azure\storage\_deserialization.py" />
<Compile Include="azure\storage\_http\httpclient.py" />
<Compile Include="azure\storage\_http\__init__.py" />
<Compile Include="azure\storage\_serialization.py" />
<Compile Include="azure\storage\__init__.py" />
<Compile Include="azure\__init__.py" />
<Compile Include="doc\conf.py" />
<Compile Include="doc\__init__.py" />
<Compile Include="samples\advanced\client.py" />
<Compile Include="samples\advanced\authentication.py" />
<Compile Include="samples\advanced\__init__.py" />
<Compile Include="samples\blob\append_blob_usage.py" />
<Compile Include="samples\blob\block_blob_usage.py" />
<Compile Include="samples\blob\encryption_usage.py" />
<Compile Include="samples\blob\page_blob_usage.py" />
<Compile Include="samples\blob\sas_usage.py" />
<Compile Include="samples\blob\container_usage.py" />
<Compile Include="samples\blob\__init__.py" />
<Compile Include="samples\file\share_usage.py" />
<Compile Include="samples\file\directory_usage.py" />
<Compile Include="samples\file\file_usage.py" />
<Compile Include="samples\file\sas_usage.py" />
<Compile Include="samples\file\__init__.py" />
<Compile Include="samples\queue\encryption_usage.py" />
<Compile Include="samples\queue\queue_usage.py" />
<Compile Include="samples\queue\sas_usage.py" />
<Compile Include="samples\queue\__init__.py" />
<Compile Include="samples\config.py" />
<Compile Include="samples\table\encryption_usage.py" />
<Compile Include="samples\table\sas_usage.py" />
<Compile Include="samples\table\table_usage.py" />
<Compile Include="samples\table\__init__.py" />
<Compile Include="samples\test_sample.py" />
<Compile Include="samples\__init__.py" />
<Compile Include="setup.py" />
<Compile Include="tests\blob_performance.py" />
<Compile Include="tests\test_blob_encryption.py" />
<Compile Include="tests\settings_fake.py" />
<Compile Include="tests\settings_real.py" />
<Compile Include="tests\testcase.py" />
<Compile Include="tests\test_retry.py" />
<Compile Include="tests\test_client.py" />
<Compile Include="tests\test_account.py" />
<Compile Include="tests\test_directory.py" />
<Compile Include="tests\test_encryption_helper.py" />
<Compile Include="tests\test_get_file.py" />
<Compile Include="tests\test_queue_encryption.py" />
<Compile Include="tests\test_service_stats.py" />
<Compile Include="tests\test_share.py" />
<Compile Include="tests\test_get_blob.py" />
<Compile Include="tests\test_container.py" />
<Compile Include="tests\test_queue_encodings.py" />
<Compile Include="tests\test_service_properties.py" />
<Compile Include="tests\test_append_blob.py" />
<Compile Include="tests\test_common_blob.py" />
<Compile Include="tests\test_blob_access_conditions.py" />
<Compile Include="tests\test_block_blob.py" />
<Compile Include="tests\test_file.py" />
<Compile Include="tests\test_page_blob.py" />
<Compile Include="tests\test_queue.py" />
<Compile Include="tests\test_table_encryption.py" />
<Compile Include="tests\test_table_entity.py" />
<Compile Include="tests\test_table_batch.py" />
<Compile Include="tests\test_table.py" />
<Compile Include="tests\__init__.py" />
</ItemGroup>
<ItemGroup>
<Folder Include="azure" />
<Folder Include="azure\storage" />
<Folder Include="azure\storage\blob" />
<Folder Include="azure\storage\file" />
<Folder Include="azure\storage\queue" />
<Folder Include="azure\storage\table" />
<Folder Include="azure\storage\_http" />
<Folder Include="doc\" />
<Folder Include="doc\ref\" />
<Folder Include="samples\" />
<Folder Include="samples\file\" />
<Folder Include="samples\blob\" />
<Folder Include="samples\advanced\" />
<Folder Include="samples\table\" />
<Folder Include="samples\queue\" />
<Folder Include="tests" />
</ItemGroup>
<ItemGroup>
<Content Include=".gitignore" />
<Content Include="BreakingChanges.md" />
<Content Include="ChangeLog.md" />
<Content Include="CONTRIBUTING.md" />
<Content Include="doc\BuildDocs.bat" />
<Content Include="doc\index.rst" />
<Content Include="doc\InstallDocDependencies.bat" />
<Content Include="doc\make.bat" />
<Content Include="doc\Makefile" />
<Content Include="doc\requirements.txt" />
<Content Include="doc\upgrade.rst" />
<Content Include="LICENSE.txt" />
<Content Include="MANIFEST.in" />
<Content Include="README.rst" />
<Content Include="requirements.txt" />
</ItemGroup>
<ItemGroup>
<InterpreterReference Include="{2af0f10d-7135-4994-9156-5d01c9c11b7e}\2.7" />
<InterpreterReference Include="{2af0f10d-7135-4994-9156-5d01c9c11b7e}\3.3" />
<InterpreterReference Include="{2af0f10d-7135-4994-9156-5d01c9c11b7e}\3.4" />
<InterpreterReference Include="{2af0f10d-7135-4994-9156-5d01c9c11b7e}\3.5" />
</ItemGroup>
<Import Project="$(PtvsTargetsFile)" Condition="Exists($(PtvsTargetsFile))" />
<Import Project="$(MSBuildToolsPath)\Microsoft.Common.targets" Condition="!Exists($(PtvsTargetsFile))" />
</Project>

Просмотреть файл

@ -1,20 +0,0 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 14
VisualStudioVersion = 14.0.23107.0
MinimumVisualStudioVersion = 10.0.40219.1
Project("{888888A0-9F3D-457C-B088-3A5042F75D52}") = "azure-storage-python", "azure-storage-python.pyproj", "{C443D9A8-DCF3-4B96-BF55-0797045BE2B8}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{C443D9A8-DCF3-4B96-BF55-0797045BE2B8}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{C443D9A8-DCF3-4B96-BF55-0797045BE2B8}.Release|Any CPU.ActiveCfg = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
EndGlobal

Просмотреть файл

@ -0,0 +1,3 @@
# Breaking Changes azure-storage-queue
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.

Просмотреть файл

@ -0,0 +1,3 @@
# Change Log azure-storage-queue
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.

Просмотреть файл

@ -0,0 +1,2 @@
include *.rst
include azure_bdist_wheel.py

Просмотреть файл

@ -0,0 +1,183 @@
Microsoft Azure Storage SDK for Python
======================================
This project provides a client library in Python that makes it easy to
consume Microsoft Azure Storage services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ Page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.
Compatibility
=============
**IMPORTANT**: If you have an earlier version of the azure package
(version < 1.0), you should uninstall it before installing this package.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure==0.11.0 (or any version below 1.0), uninstall it first, then install it again:
.. code:: shell
pip uninstall azure
pip install azure
If you are upgrading from a version older than 0.30.0, see the upgrade doc, the
usage samples in the samples directory, and the ChangeLog and BreakingChanges.
**IMPORTANT**: If you have an earlier version of the azure-storage package
(version <= 0.36.0), you should uninstall it before installing the new packages.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure-storage==0.36.0 (or any version below 0.36.0), uninstall it first:
.. code:: shell
pip uninstall azure-storage
Features
========
- Blob
- Create/Read/Update/Delete Containers
- Create/Read/Update/Delete Blobs
- Advanced Blob Operations
- Queue
- Create/Delete Queues
- Insert/Peek Queue Messages
- Advanced Queue Operations
- Table
- Create/Read/Update/Delete Tables
- Create/Read/Update/Delete Entities
- Batch operations
- Advanced Table Operations
- Files
- Create/Update/Delete Shares
- Create/Update/Delete Directories
- Create/Read/Update/Delete Files
- Advanced File Operations
Getting Started
===============
Download
--------
The Azure Storage SDK for Python is composed of 4 packages, each corresponding to a different service:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
Note: prior to and including version 0.36.0, there was a single package (azure-storage) containing all services. That package is no longer supported; install the four packages above individually instead, depending on your needs.
Option 1: Via PyPi
~~~~~~~~~~~~~~~~~~
To install via the Python Package Index (PyPI), type:
::
pip install azure-storage-blob
pip install azure-storage-file
pip install azure-storage-queue
pip install azure-storage-table
Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~
To get the source code of the SDK via git just type:
::
git clone git://github.com/Azure/azure-storage-python.git
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~
Download a zip of the code via GitHub or PyPi. Then, type:
::
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage
-----
To use this SDK to call Microsoft Azure storage services, you need to
first `create an account`_.
Code Sample
-----------
See the samples directory for blob, queue, table, and file usage samples.
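For instance, a minimal sketch of sending and receiving a queue message with
the azure-storage-queue package (the account name, key, and queue name below
are placeholders):

.. code:: python

    from azure.storage.queue import QueueService

    # Placeholder credentials -- substitute your storage account's values.
    queue_service = QueueService(account_name='myaccount', account_key='mykey')

    queue_service.create_queue('myqueue')
    queue_service.put_message('myqueue', u'Hello, World!')
    messages = queue_service.get_messages('myqueue')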
Need Help?
==========
Be sure to check out the Microsoft Azure `Developer Forums on MSDN`_ or
the `Developer Forums on Stack Overflow`_ if you have trouble with the
provided code.
Contribute Code or Provide Feedback
===================================
If you would like to become an active contributor to this project, please
follow the instructions provided in `Azure Projects Contribution
Guidelines`_. You can find more details for contributing in the `CONTRIBUTING.md doc`_.
If you encounter any bugs with the library, please file an issue in the
`Issues`_ section of the project.
Learn More
==========
- `Python Developer Center`_
- `Azure Storage Service`_
- `Azure Storage Team Blog`_
- `API Reference`_
.. _Python Developer Center: http://azure.microsoft.com/en-us/develop/python/
.. _API Reference: https://azure-storage.readthedocs.io/en/latest/
.. _here: https://github.com/Azure/azure-storage-python/archive/master.zip
.. _create an account: https://account.windowsazure.com/signup
.. _Developer Forums on MSDN: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=windowsazuredata
.. _Developer Forums on Stack Overflow: http://stackoverflow.com/questions/tagged/azure+windows-azure-storage
.. _Azure Projects Contribution Guidelines: http://azure.github.io/guidelines.html
.. _Issues: https://github.com/Azure/azure-storage-python/issues
.. _Azure Storage Service: http://azure.microsoft.com/en-us/documentation/services/storage/
.. _Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
.. _CONTRIBUTING.md doc: CONTRIBUTING.md

Просмотреть файл

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

Просмотреть файл

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

Просмотреть файл

@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from .models import (
Queue,
QueueMessage,

Просмотреть файл

@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,8 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from dateutil import parser
try:
from xml.etree import cElementTree as ETree
except ImportError:
@ -22,10 +23,10 @@ from .models import (
Queue,
QueueMessage,
)
from azure.storage.common.models import (
_list,
)
from azure.storage.common._deserialization import (
_int_to_str,
_parse_metadata,
)
@ -33,6 +34,7 @@ from ._encryption import (
_decrypt_queue_message,
)
def _parse_metadata_and_message_count(response):
'''
Extracts the approximate message count header.
@ -42,6 +44,7 @@ def _parse_metadata_and_message_count(response):
return metadata
def _parse_queue_message_from_headers(response):
'''
Extracts pop receipt and time next visible from headers.
@ -49,9 +52,10 @@ def _parse_queue_message_from_headers(response):
message = QueueMessage()
message.pop_receipt = response.headers.get('x-ms-popreceipt')
message.time_next_visible = parser.parse(response.headers.get('x-ms-time-next-visible'))
return message
def _convert_xml_to_queues(response):
'''
<?xml version="1.0" encoding="utf-8"?>
@ -74,7 +78,7 @@ def _convert_xml_to_queues(response):
queues = _list()
list_element = ETree.fromstring(response.body)
# Set next marker
next_marker = list_element.findtext('NextMarker') or None
setattr(queues, 'next_marker', next_marker)
@ -92,13 +96,15 @@ def _convert_xml_to_queues(response):
queue.metadata = dict()
for metadata_element in metadata_root_element:
queue.metadata[metadata_element.tag] = metadata_element.text
# Add queue to list
queues.append(queue)
return queues
def _convert_xml_to_queue_messages(response, decode_function, require_encryption, key_encryption_key, resolver,
content=None):
'''
<?xml version="1.0" encoding="utf-8"?>
<QueueMessagesList>
@ -135,12 +141,12 @@ def _convert_xml_to_queue_messages(response, decode_function, require_encryption
message.content = message_element.findtext('MessageText')
if (key_encryption_key is not None) or (resolver is not None):
message.content = _decrypt_queue_message(message.content, require_encryption,
key_encryption_key, resolver)
key_encryption_key, resolver)
message.content = decode_function(message.content)
message.insertion_time = parser.parse(message_element.findtext('InsertionTime'))
message.expiration_time = parser.parse(message_element.findtext('ExpirationTime'))
message.pop_receipt = message_element.findtext('PopReceipt')
time_next_visible = message_element.find('TimeNextVisible')
@ -150,4 +156,4 @@ def _convert_xml_to_queue_messages(response, decode_function, require_encryption
# Add message to list
messages.append(message)
return messages
return messages

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,46 +11,37 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import os
from json import (
dumps,
loads,
)
from azure.common import (
AzureException,
)
from .._constants import (
_ENCRYPTION_PROTOCOL_V1,
from cryptography.hazmat.primitives.padding import PKCS7
from azure.storage.common._common_conversion import (
_encode_base64,
_decode_base64_to_bytes
)
from .._encryption import (
from azure.storage.common._encryption import (
_generate_encryption_data_dict,
_dict_to_encryption_data,
_generate_AES_CBC_cipher,
_validate_and_unwrap_cek,
_EncryptionAlgorithm,
)
from json import (
dumps,
loads,
)
from base64 import(
b64encode,
b64decode,
)
from .._error import(
_ERROR_UNSUPPORTED_ENCRYPTION_VERSION,
from azure.storage.common._error import (
_ERROR_DECRYPTION_FAILURE,
_ERROR_DATA_NOT_ENCRYPTED,
_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM,
_validate_not_none,
_validate_key_encryption_key_wrap,
_validate_key_encryption_key_unwrap,
_validate_encryption_protocol_version,
_validate_kek_id,
)
from .._common_conversion import (
_encode_base64,
_decode_base64_to_bytes
)
from cryptography.hazmat.primitives.padding import PKCS7
import os
def _encrypt_queue_message(message, key_encryption_key):
'''
@ -100,6 +91,7 @@ def _encrypt_queue_message(message, key_encryption_key):
return dumps(queue_message)
def _decrypt_queue_message(message, require_encryption, key_encryption_key, resolver):
'''
Returns the decrypted message contents from an EncryptedQueueMessage.
@ -135,6 +127,7 @@ def _decrypt_queue_message(message, require_encryption, key_encryption_key, reso
except Exception as e:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
def _decrypt(message, encryption_data, key_encryption_key=None, resolver=None):
'''
Decrypts the given ciphertext using AES256 in CBC mode with 128 bit padding.
@ -156,18 +149,18 @@ def _decrypt(message, encryption_data, key_encryption_key=None, resolver=None):
_validate_not_none('message', message)
content_encryption_key = _validate_and_unwrap_cek(encryption_data, key_encryption_key, resolver)
if not ( _EncryptionAlgorithm.AES_CBC_256 == encryption_data.encryption_agent.encryption_algorithm):
if not (_EncryptionAlgorithm.AES_CBC_256 == encryption_data.encryption_agent.encryption_algorithm):
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM)
cipher = _generate_AES_CBC_cipher(content_encryption_key, encryption_data.content_encryption_IV)
#decrypt data
# decrypt data
decrypted_data = message
decryptor = cipher.decryptor()
decrypted_data = (decryptor.update(decrypted_data) + decryptor.finalize())
#unpad data
# unpad data
unpadder = PKCS7(128).unpadder()
decrypted_data = (unpadder.update(decrypted_data) + unpadder.finalize())
return decrypted_data
return decrypted_data
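
For illustration, the AES-CBC plus PKCS7 pattern that ``_decrypt`` implements can be exercised on its own. The following is a minimal round-trip sketch using the same ``cryptography`` primitives; the key, IV, and message are generated locally for the demo and are not part of the library:

.. code:: python

    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.padding import PKCS7

    key = os.urandom(32)  # 256-bit content encryption key
    iv = os.urandom(16)   # one AES block

    # Pad, then encrypt.
    padder = PKCS7(128).padder()
    padded = padder.update(b'queue message') + padder.finalize()
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
    encryptor = cipher.encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    # Decrypt, then unpad, mirroring _decrypt above.
    decryptor = cipher.decryptor()
    decrypted = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = PKCS7(128).unpadder()
    assert unpadder.update(decrypted) + unpadder.finalize() == b'queue message'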

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,9 +11,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
from .._error import (
from azure.storage.common._error import (
_validate_type_bytes,
)
@ -21,6 +22,7 @@ _ERROR_MESSAGE_SHOULD_BE_UNICODE = 'message should be of type unicode.'
_ERROR_MESSAGE_SHOULD_BE_STR = 'message should be of type str.'
_ERROR_MESSAGE_NOT_BASE64 = 'message is not a valid base64 value.'
def _validate_message_type_text(param):
if sys.version_info < (3,):
if not isinstance(param, unicode):
@ -29,5 +31,6 @@ def _validate_message_type_text(param):
if not isinstance(param, str):
raise TypeError(_ERROR_MESSAGE_SHOULD_BE_STR)
def _validate_message_type_bytes(param):
_validate_type_bytes('message', param)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,8 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
if sys.version_info >= (3,):
from io import BytesIO
else:
@ -26,14 +27,14 @@ try:
except ImportError:
from xml.etree import ElementTree as ETree
from xml.sax.saxutils import escape as xml_escape
from .._common_conversion import (
from azure.storage.common._common_conversion import (
_str,
)
from ._encryption import (
_encrypt_queue_message,
)
def _get_path(queue_name=None, include_messages=None, message_id=None):
'''
Creates the path to access a queue resource.
@ -67,7 +68,7 @@ def _convert_queue_message_xml(message_text, encode_function, key_encryption_key
# Enabled
message_text = encode_function(message_text)
if key_encryption_key is not None:
message_text = _encrypt_queue_message(message_text, key_encryption_key)
message_text = _encrypt_queue_message(message_text, key_encryption_key)
ETree.SubElement(queue_message_element, 'MessageText').text = message_text
# Add xml declaration and serialize

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,22 +11,23 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
from xml.sax.saxutils import escape as xml_escape
from xml.sax.saxutils import unescape as xml_unescape
# --------------------------------------------------------------------------
from base64 import (
b64encode,
b64decode,
)
from xml.sax.saxutils import escape as xml_escape
from xml.sax.saxutils import unescape as xml_unescape
from ._error import (
_validate_message_type_bytes,
_validate_message_type_text,
_ERROR_MESSAGE_NOT_BASE64,
)
class Queue(object):
'''
class Queue(object):
'''
Queue class.
:ivar str name:
@ -102,16 +103,16 @@ class QueueMessageFormat:
'''
_validate_message_type_text(data)
return b64encode(data.encode('utf-8')).decode('utf-8')
@staticmethod
def text_base64decode(data):
def text_base64decode(data):
'''
Base64 decode to unicode text.
:param str data: String data to decode to unicode.
:return: Base64 decoded string.
:rtype: str
'''
'''
try:
return b64decode(data.encode('utf-8')).decode('utf-8')
except (ValueError, TypeError):
@ -129,7 +130,7 @@ class QueueMessageFormat:
'''
_validate_message_type_bytes(data)
return b64encode(data).decode('utf-8')
@staticmethod
def binary_base64decode(data):
'''
@ -138,7 +139,7 @@ class QueueMessageFormat:
:param str data: Data to decode to a byte string.
:return: Base64 decoded data.
:rtype: str
'''
'''
try:
return b64decode(data.encode('utf-8'))
except (ValueError, TypeError):
@ -156,8 +157,8 @@ class QueueMessageFormat:
'''
_validate_message_type_text(data)
return xml_escape(data)
@staticmethod
@staticmethod
def text_xmldecode(data):
'''
XML decode to unicode text.
@ -178,7 +179,7 @@ class QueueMessageFormat:
:rtype: str
'''
return data
@staticmethod
def nodecode(data):
'''
@ -192,7 +193,6 @@ class QueueMessageFormat:
class QueuePermissions(object):
'''
QueuePermissions class to be used with :func:`~azure.storage.queue.queueservice.QueueService.generate_queue_shared_access_signature`
method and for the AccessPolicies used with :func:`~azure.storage.queue.queueservice.QueueService.set_queue_acl`.
@ -207,6 +207,7 @@ class QueuePermissions(object):
:ivar QueuePermissions QueuePermissions.PROCESS:
Get and delete messages from the queue.
'''
def __init__(self, read=False, add=False, update=False, process=False, _str=None):
'''
:param bool read:
@ -227,19 +228,20 @@ class QueuePermissions(object):
self.add = add or ('a' in _str)
self.update = update or ('u' in _str)
self.process = process or ('p' in _str)
def __or__(self, other):
return QueuePermissions(_str=str(self) + str(other))
def __add__(self, other):
return QueuePermissions(_str=str(self) + str(other))
def __str__(self):
return (('r' if self.read else '') +
('a' if self.add else '') +
('u' if self.update else '') +
('p' if self.process else ''))
QueuePermissions.READ = QueuePermissions(read=True)
QueuePermissions.ADD = QueuePermissions(add=True)
QueuePermissions.UPDATE = QueuePermissions(update=True)
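
The ``__or__`` and ``__add__`` overloads above make the predefined flags composable, and ``str()`` yields the permission string later passed to SAS generation. A quick sketch against the class as defined in this diff:

.. code:: python

    # 'r' and 'u' are emitted in the fixed r/a/u/p order of __str__.
    perms = QueuePermissions.READ | QueuePermissions.UPDATE
    assert str(perms) == 'ru'

    # Flags can equally be set through the constructor.
    assert str(QueuePermissions(read=True, process=True)) == 'rp'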

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,16 +11,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from azure.common import (
AzureConflictHttpError,
AzureHttpError,
)
from .._constants import (
from azure.storage.common._auth import (
_StorageSASAuthentication,
_StorageSharedKeyAuthentication,
)
from azure.storage.common._common_conversion import (
_int_to_str,
_to_str,
)
from azure.storage.common._connection import _ServiceParameters
from azure.storage.common._constants import (
SERVICE_HOST_BASE,
DEFAULT_PROTOCOL,
)
from .._error import (
from azure.storage.common._deserialization import (
_convert_xml_to_service_properties,
_convert_xml_to_signed_identifiers,
_convert_xml_to_service_stats,
)
from azure.storage.common._error import (
_dont_fail_not_exist,
_dont_fail_on_exist,
_validate_not_none,
@ -30,59 +45,44 @@ from .._error import (
_validate_encryption_required,
_validate_decryption_required,
)
from .._serialization import (
from azure.storage.common._http import (
HTTPRequest,
)
from azure.storage.common._serialization import (
_convert_signed_identifiers_to_xml,
_convert_service_properties_to_xml,
)
from azure.storage.common._serialization import (
_get_request_body,
_add_metadata_headers,
)
from .._common_conversion import (
_int_to_str,
_to_str,
)
from .._http import (
HTTPRequest,
)
from ..models import (
from azure.storage.common.models import (
Services,
ListGenerator,
_OperationContext,
)
from .models import (
QueueMessageFormat,
)
from .._auth import (
_StorageSASAuthentication,
_StorageSharedKeyAuthentication,
)
from .._connection import _ServiceParameters
from .._serialization import (
_convert_signed_identifiers_to_xml,
_convert_service_properties_to_xml,
)
from .._deserialization import (
_convert_xml_to_service_properties,
_convert_xml_to_signed_identifiers,
_convert_xml_to_service_stats,
)
from ._serialization import (
_convert_queue_message_xml,
_get_path,
from azure.storage.common.sharedaccesssignature import (
SharedAccessSignature,
)
from azure.storage.common.storageclient import StorageClient
from ._deserialization import (
_convert_xml_to_queues,
_convert_xml_to_queue_messages,
_parse_queue_message_from_headers,
_parse_metadata_and_message_count,
)
from ..sharedaccesssignature import (
SharedAccessSignature,
from ._serialization import (
_convert_queue_message_xml,
_get_path,
)
from .models import (
QueueMessageFormat,
)
from ..storageclient import StorageClient
_HTTP_RESPONSE_NO_CONTENT = 204
class QueueService(StorageClient):
class QueueService(StorageClient):
'''
This is the main class managing queue resources.
@ -130,7 +130,7 @@ class QueueService(StorageClient):
parameters for encryption/decryption must be provided. See the above comments on the key_encryption_key and resolver.
'''
def __init__(self, account_name=None, account_key=None, sas_token=None,
def __init__(self, account_name=None, account_key=None, sas_token=None,
is_emulated=False, protocol=DEFAULT_PROTOCOL, endpoint_suffix=SERVICE_HOST_BASE,
request_session=None, connection_string=None, socket_timeout=None):
'''
@ -167,16 +167,16 @@ class QueueService(StorageClient):
'''
service_params = _ServiceParameters.get_service_parameters(
'queue',
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
account_name=account_name,
account_key=account_key,
sas_token=sas_token,
is_emulated=is_emulated,
protocol=protocol,
endpoint_suffix=endpoint_suffix,
request_session=request_session,
connection_string=connection_string,
socket_timeout=socket_timeout)
super(QueueService, self).__init__(service_params)
if self.account_key:
@ -195,8 +195,8 @@ class QueueService(StorageClient):
self.key_resolver_function = None
self.require_encryption = False
def generate_account_shared_access_signature(self, resource_types, permission,
expiry, start=None, ip=None, protocol=None):
def generate_account_shared_access_signature(self, resource_types, permission,
expiry, start=None, ip=None, protocol=None):
'''
Generates a shared access signature for the queue service.
Use the returned signature with the sas_token parameter of QueueService.
@ -240,15 +240,15 @@ class QueueService(StorageClient):
_validate_not_none('self.account_key', self.account_key)
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_account(Services.QUEUE, resource_types, permission,
return sas.generate_account(Services.QUEUE, resource_types, permission,
expiry, start=start, ip=ip, protocol=protocol)
def generate_queue_shared_access_signature(self, queue_name,
permission=None,
expiry=None,
start=None,
id=None,
ip=None, protocol=None,):
permission=None,
expiry=None,
start=None,
id=None,
ip=None, protocol=None, ):
'''
Generates a shared access signature for the queue.
Use the returned signature with the sas_token parameter of QueueService.
@ -298,9 +298,9 @@ class QueueService(StorageClient):
sas = SharedAccessSignature(self.account_name, self.account_key)
return sas.generate_queue(
queue_name,
permission=permission,
permission=permission,
expiry=expiry,
start=start,
start=start,
id=id,
ip=ip,
protocol=protocol,
@ -364,8 +364,8 @@ class QueueService(StorageClient):
return self._perform_request(request, _convert_xml_to_service_properties)
def set_queue_service_properties(self, logging=None, hour_metrics=None,
minute_metrics=None, cors=None, timeout=None):
def set_queue_service_properties(self, logging=None, hour_metrics=None,
minute_metrics=None, cors=None, timeout=None):
'''
Sets the properties of a storage account's Queue service, including
Azure Storage Analytics. If an element (ex Logging) is left as None, the
@ -404,7 +404,7 @@ class QueueService(StorageClient):
_convert_service_properties_to_xml(logging, hour_metrics, minute_metrics, cors))
self._perform_request(request)
def list_queues(self, prefix=None, num_results=None, include_metadata=False,
def list_queues(self, prefix=None, num_results=None, include_metadata=False,
marker=None, timeout=None):
'''
Returns a generator to list the queues. The generator will lazily follow
@ -436,14 +436,14 @@ class QueueService(StorageClient):
'''
include = 'metadata' if include_metadata else None
operation_context = _OperationContext(location_lock=True)
kwargs = {'prefix': prefix, 'max_results': num_results, 'include': include,
kwargs = {'prefix': prefix, 'max_results': num_results, 'include': include,
'marker': marker, 'timeout': timeout, '_context': operation_context}
resp = self._list_queues(**kwargs)
return ListGenerator(resp, self._list_queues, (), kwargs)
def _list_queues(self, prefix=None, marker=None, max_results=None,
include=None, timeout=None, _context=None):
include=None, timeout=None, _context=None):
'''
Returns a list of queues under the specified account. Makes a single list
request to the service. Used internally by the list_queues method.
@ -770,8 +770,8 @@ class QueueService(StorageClient):
self.key_encryption_key))
message_list = self._perform_request(request, _convert_xml_to_queue_messages,
[self.decode_function, False,
None, None, content])
[self.decode_function, False,
None, None, content])
return message_list[0]
def get_messages(self, queue_name, num_messages=None,
@ -926,7 +926,7 @@ class QueueService(StorageClient):
request.query = {'timeout': _int_to_str(timeout)}
self._perform_request(request)
def update_message(self, queue_name, message_id, pop_receipt, visibility_timeout,
def update_message(self, queue_name, message_id, pop_receipt, visibility_timeout,
content=None, timeout=None):
'''
Updates the visibility timeout of a message. You can also use this
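
To tie the ``QueueService`` pieces above together, here is a hedged usage sketch; the account name, key, and queue name are placeholders, and the method shapes follow the signatures shown in this diff:

.. code:: python

    from datetime import datetime, timedelta

    service = QueueService(account_name='myaccount', account_key='<key>')
    token = service.generate_queue_shared_access_signature(
        'myqueue',
        permission=QueuePermissions.READ | QueuePermissions.PROCESS,
        expiry=datetime.utcnow() + timedelta(hours=1))

    # A client holding only the SAS token can then read and process messages.
    sas_service = QueueService(account_name='myaccount', sas_token=token)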

View file

@ -0,0 +1,533 @@
"""
"wheel" copyright (c) 2012-2017 Daniel Holth <dholth@fastmail.fm> and
contributors.
The MIT License
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
Create an Azure wheel (.whl) distribution (a wheel is a built archive format).
This file is a copy of the official bdist_wheel file from wheel 0.30.0a0, enhanced
at the bottom with some Microsoft extensions for the Azure SDK for Python
"""
import csv
import hashlib
import os
import subprocess
import warnings
import shutil
import json
import sys
try:
import sysconfig
except ImportError: # pragma nocover
# Python < 2.7
import distutils.sysconfig as sysconfig
import pkg_resources
safe_name = pkg_resources.safe_name
safe_version = pkg_resources.safe_version
from shutil import rmtree
from email.generator import Generator
from distutils.core import Command
from distutils.sysconfig import get_python_version
from distutils import log as logger
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag, get_platform
from wheel.util import native, open_for_csv
from wheel.archive import archive_wheelfile
from wheel.pkginfo import read_pkg_info, write_pkg_info
from wheel.metadata import pkginfo_to_dict
from wheel import pep425tags, metadata
from wheel import __version__ as wheel_version
def safer_name(name):
return safe_name(name).replace('-', '_')
def safer_version(version):
return safe_version(version).replace('-', '_')
class bdist_wheel(Command):
description = 'create a wheel distribution'
user_options = [('bdist-dir=', 'b',
"temporary directory for creating the distribution"),
('plat-name=', 'p',
"platform name to embed in generated filenames "
"(default: %s)" % get_platform()),
('keep-temp', 'k',
"keep the pseudo-installation tree around after " +
"creating the distribution archive"),
('dist-dir=', 'd',
"directory to put final built distributions in"),
('skip-build', None,
"skip rebuilding everything (for testing/debugging)"),
('relative', None,
"build the archive using relative paths"
"(default: false)"),
('owner=', 'u',
"Owner name used when creating a tar file"
" [default: current user]"),
('group=', 'g',
"Group name used when creating a tar file"
" [default: current group]"),
('universal', None,
"make a universal wheel"
" (default: false)"),
('python-tag=', None,
"Python implementation compatibility tag"
" (default: py%s)" % get_impl_ver()[0]),
]
boolean_options = ['keep-temp', 'skip-build', 'relative', 'universal']
def initialize_options(self):
self.bdist_dir = None
self.data_dir = None
self.plat_name = None
self.plat_tag = None
self.format = 'zip'
self.keep_temp = False
self.dist_dir = None
self.distinfo_dir = None
self.egginfo_dir = None
self.root_is_pure = None
self.skip_build = None
self.relative = False
self.owner = None
self.group = None
self.universal = False
self.python_tag = 'py' + get_impl_ver()[0]
self.plat_name_supplied = False
def finalize_options(self):
if self.bdist_dir is None:
bdist_base = self.get_finalized_command('bdist').bdist_base
self.bdist_dir = os.path.join(bdist_base, 'wheel')
self.data_dir = self.wheel_dist_name + '.data'
self.plat_name_supplied = self.plat_name is not None
need_options = ('dist_dir', 'plat_name', 'skip_build')
self.set_undefined_options('bdist',
*zip(need_options, need_options))
self.root_is_pure = not (self.distribution.has_ext_modules()
or self.distribution.has_c_libraries())
# Support legacy [wheel] section for setting universal
wheel = self.distribution.get_option_dict('wheel')
if 'universal' in wheel:
# please don't define this in your global configs
val = wheel['universal'][1].strip()
if val.lower() in ('1', 'true', 'yes'):
self.universal = True
@property
def wheel_dist_name(self):
"""Return distribution full name with - replaced with _"""
return '-'.join((safer_name(self.distribution.get_name()),
safer_version(self.distribution.get_version())))
def get_tag(self):
# bdist sets self.plat_name if unset, we should only use it for purepy
# wheels if the user supplied it.
if self.plat_name_supplied:
plat_name = self.plat_name
elif self.root_is_pure:
plat_name = 'any'
else:
plat_name = self.plat_name or get_platform()
if plat_name in ('linux-x86_64', 'linux_x86_64') and sys.maxsize == 2147483647:
plat_name = 'linux_i686'
plat_name = plat_name.replace('-', '_').replace('.', '_')
if self.root_is_pure:
if self.universal:
impl = 'py2.py3'
else:
impl = self.python_tag
tag = (impl, 'none', plat_name)
else:
impl_name = get_abbr_impl()
impl_ver = get_impl_ver()
# PEP 3149
abi_tag = str(get_abi_tag()).lower()
tag = (impl_name + impl_ver, abi_tag, plat_name)
supported_tags = pep425tags.get_supported(
supplied_platform=plat_name if self.plat_name_supplied else None)
# XXX switch to this alternate implementation for non-pure:
assert tag == supported_tags[0], "%s != %s" % (tag, supported_tags[0])
return tag
def get_archive_basename(self):
"""Return archive name without extension"""
impl_tag, abi_tag, plat_tag = self.get_tag()
archive_basename = "%s-%s-%s-%s" % (
self.wheel_dist_name,
impl_tag,
abi_tag,
plat_tag)
return archive_basename
def run(self):
build_scripts = self.reinitialize_command('build_scripts')
build_scripts.executable = 'python'
if not self.skip_build:
self.run_command('build')
install = self.reinitialize_command('install',
reinit_subcommands=True)
install.root = self.bdist_dir
install.compile = False
install.skip_build = self.skip_build
install.warn_dir = False
# A wheel without setuptools scripts is more cross-platform.
# Use the (undocumented) `no_ep` option to setuptools'
# install_scripts command to avoid creating entry point scripts.
install_scripts = self.reinitialize_command('install_scripts')
install_scripts.no_ep = True
# Use a custom scheme for the archive, because we have to decide
# at installation time which scheme to use.
for key in ('headers', 'scripts', 'data', 'purelib', 'platlib'):
setattr(install,
'install_' + key,
os.path.join(self.data_dir, key))
basedir_observed = ''
if os.name == 'nt':
# win32 barfs if any of these are ''; could be '.'?
# (distutils.command.install:change_roots bug)
basedir_observed = os.path.normpath(os.path.join(self.data_dir, '..'))
self.install_libbase = self.install_lib = basedir_observed
setattr(install,
'install_purelib' if self.root_is_pure else 'install_platlib',
basedir_observed)
logger.info("installing to %s", self.bdist_dir)
self.run_command('install')
archive_basename = self.get_archive_basename()
pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
if not self.relative:
archive_root = self.bdist_dir
else:
archive_root = os.path.join(
self.bdist_dir,
self._ensure_relative(install.install_base))
self.set_undefined_options(
'install_egg_info', ('target', 'egginfo_dir'))
self.distinfo_dir = os.path.join(self.bdist_dir,
'%s.dist-info' % self.wheel_dist_name)
self.egg2dist(self.egginfo_dir,
self.distinfo_dir)
self.write_wheelfile(self.distinfo_dir)
self.write_record(self.bdist_dir, self.distinfo_dir)
# Make the archive
if not os.path.exists(self.dist_dir):
os.makedirs(self.dist_dir)
wheel_name = archive_wheelfile(pseudoinstall_root, archive_root)
# Sign the archive
if 'WHEEL_TOOL' in os.environ:
subprocess.call([os.environ['WHEEL_TOOL'], 'sign', wheel_name])
# Add to 'Distribution.dist_files' so that the "upload" command works
getattr(self.distribution, 'dist_files', []).append(
('bdist_wheel', get_python_version(), wheel_name))
if not self.keep_temp:
if self.dry_run:
logger.info('removing %s', self.bdist_dir)
else:
rmtree(self.bdist_dir)
def write_wheelfile(self, wheelfile_base, generator='bdist_wheel (' + wheel_version + ')'):
from email.message import Message
msg = Message()
msg['Wheel-Version'] = '1.0' # of the spec
msg['Generator'] = generator
msg['Root-Is-Purelib'] = str(self.root_is_pure).lower()
# Doesn't work for bdist_wininst
impl_tag, abi_tag, plat_tag = self.get_tag()
for impl in impl_tag.split('.'):
for abi in abi_tag.split('.'):
for plat in plat_tag.split('.'):
msg['Tag'] = '-'.join((impl, abi, plat))
wheelfile_path = os.path.join(wheelfile_base, 'WHEEL')
logger.info('creating %s', wheelfile_path)
with open(wheelfile_path, 'w') as f:
Generator(f, maxheaderlen=0).flatten(msg)
def _ensure_relative(self, path):
# copied from dir_util, deleted
drive, path = os.path.splitdrive(path)
if path[0:1] == os.sep:
path = drive + path[1:]
return path
def _pkginfo_to_metadata(self, egg_info_path, pkginfo_path):
return metadata.pkginfo_to_metadata(egg_info_path, pkginfo_path)
def license_file(self):
"""Return license filename from a license-file key in setup.cfg, or None."""
metadata = self.distribution.get_option_dict('metadata')
if not 'license_file' in metadata:
return None
return metadata['license_file'][1]
def setupcfg_requirements(self):
"""Generate requirements from setup.cfg as
('Requires-Dist', 'requirement; qualifier') tuples. From a metadata
section in setup.cfg:
[metadata]
provides-extra = extra1
extra2
requires-dist = requirement; qualifier
another; qualifier2
unqualified
Yields
('Provides-Extra', 'extra1'),
('Provides-Extra', 'extra2'),
('Requires-Dist', 'requirement; qualifier'),
('Requires-Dist', 'another; qualifier2'),
('Requires-Dist', 'unqualified')
"""
metadata = self.distribution.get_option_dict('metadata')
# our .ini parser folds - to _ in key names:
for key, title in (('provides_extra', 'Provides-Extra'),
('requires_dist', 'Requires-Dist')):
if not key in metadata:
continue
field = metadata[key]
for line in field[1].splitlines():
line = line.strip()
if not line:
continue
yield (title, line)
def add_requirements(self, metadata_path):
"""Add additional requirements from setup.cfg to file metadata_path"""
additional = list(self.setupcfg_requirements())
if not additional: return
pkg_info = read_pkg_info(metadata_path)
if 'Provides-Extra' in pkg_info or 'Requires-Dist' in pkg_info:
warnings.warn('setup.cfg requirements overwrite values from setup.py')
del pkg_info['Provides-Extra']
del pkg_info['Requires-Dist']
for k, v in additional:
pkg_info[k] = v
write_pkg_info(metadata_path, pkg_info)
def egg2dist(self, egginfo_path, distinfo_path):
"""Convert an .egg-info directory into a .dist-info directory"""
def adios(p):
"""Appropriately delete directory, file or link."""
if os.path.exists(p) and not os.path.islink(p) and os.path.isdir(p):
shutil.rmtree(p)
elif os.path.exists(p):
os.unlink(p)
adios(distinfo_path)
if not os.path.exists(egginfo_path):
# There is no egg-info. This is probably because the egg-info
# file/directory is not named matching the distribution name used
# to name the archive file. Check for this case and report
# accordingly.
import glob
pat = os.path.join(os.path.dirname(egginfo_path), '*.egg-info')
possible = glob.glob(pat)
err = "Egg metadata expected at %s but not found" % (egginfo_path,)
if possible:
alt = os.path.basename(possible[0])
err += " (%s found - possible misnamed archive file?)" % (alt,)
raise ValueError(err)
if os.path.isfile(egginfo_path):
# .egg-info is a single file
pkginfo_path = egginfo_path
pkg_info = self._pkginfo_to_metadata(egginfo_path, egginfo_path)
os.mkdir(distinfo_path)
else:
# .egg-info is a directory
pkginfo_path = os.path.join(egginfo_path, 'PKG-INFO')
pkg_info = self._pkginfo_to_metadata(egginfo_path, pkginfo_path)
# ignore common egg metadata that is useless to wheel
shutil.copytree(egginfo_path, distinfo_path,
ignore=lambda x, y: set(('PKG-INFO',
'requires.txt',
'SOURCES.txt',
'not-zip-safe',)))
# delete dependency_links if it is only whitespace
dependency_links_path = os.path.join(distinfo_path, 'dependency_links.txt')
with open(dependency_links_path, 'r') as dependency_links_file:
dependency_links = dependency_links_file.read().strip()
if not dependency_links:
adios(dependency_links_path)
write_pkg_info(os.path.join(distinfo_path, 'METADATA'), pkg_info)
# XXX deprecated. Still useful for current distribute/setuptools.
metadata_path = os.path.join(distinfo_path, 'METADATA')
self.add_requirements(metadata_path)
# XXX intentionally a different path than the PEP.
metadata_json_path = os.path.join(distinfo_path, 'metadata.json')
pymeta = pkginfo_to_dict(metadata_path,
distribution=self.distribution)
if 'description' in pymeta:
description_filename = 'DESCRIPTION.rst'
description_text = pymeta.pop('description')
description_path = os.path.join(distinfo_path,
description_filename)
with open(description_path, "wb") as description_file:
description_file.write(description_text.encode('utf-8'))
pymeta['extensions']['python.details']['document_names']['description'] = description_filename
# XXX heuristically copy any LICENSE/LICENSE.txt?
license = self.license_file()
if license:
license_filename = 'LICENSE.txt'
shutil.copy(license, os.path.join(self.distinfo_dir, license_filename))
pymeta['extensions']['python.details']['document_names']['license'] = license_filename
with open(metadata_json_path, "w") as metadata_json:
json.dump(pymeta, metadata_json, sort_keys=True)
adios(egginfo_path)
def write_record(self, bdist_dir, distinfo_dir):
from wheel.util import urlsafe_b64encode
record_path = os.path.join(distinfo_dir, 'RECORD')
record_relpath = os.path.relpath(record_path, bdist_dir)
def walk():
for dir, dirs, files in os.walk(bdist_dir):
dirs.sort()
for f in sorted(files):
yield os.path.join(dir, f)
def skip(path):
"""Wheel hashes every possible file."""
return (path == record_relpath)
with open_for_csv(record_path, 'w+') as record_file:
writer = csv.writer(record_file)
for path in walk():
relpath = os.path.relpath(path, bdist_dir)
if skip(relpath):
hash = ''
size = ''
else:
with open(path, 'rb') as f:
data = f.read()
digest = hashlib.sha256(data).digest()
hash = 'sha256=' + native(urlsafe_b64encode(digest))
size = len(data)
record_path = os.path.relpath(
path, bdist_dir).replace(os.path.sep, '/')
writer.writerow((record_path, hash, size))
#-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------
from distutils import log as logger
import os.path
#from wheel.bdist_wheel import bdist_wheel
class azure_bdist_wheel(bdist_wheel):
description = "Create an Azure wheel distribution"
user_options = bdist_wheel.user_options + \
[('azure-namespace-package=', None,
"Name of the deepest nspkg used")]
def initialize_options(self):
bdist_wheel.initialize_options(self)
self.azure_namespace_package = None
def finalize_options(self):
bdist_wheel.finalize_options(self)
if self.azure_namespace_package and not self.azure_namespace_package.endswith("-nspkg"):
raise ValueError("azure_namespace_package must finish by -nspkg")
def run(self):
if not self.distribution.install_requires:
self.distribution.install_requires = []
self.distribution.install_requires.append(
"{}>=2.0.0".format(self.azure_namespace_package))
bdist_wheel.run(self)
def write_record(self, bdist_dir, distinfo_dir):
if self.azure_namespace_package:
# Split and remove last part, assuming it's "nspkg"
subparts = self.azure_namespace_package.split('-')[0:-1]
folder_with_init = [os.path.join(*subparts[0:i+1]) for i in range(len(subparts))]
for azure_sub_package in folder_with_init:
init_file = os.path.join(bdist_dir, azure_sub_package, '__init__.py')
if os.path.isfile(init_file):
logger.info("manually remove {} while building the wheel".format(init_file))
os.remove(init_file)
else:
raise ValueError("Unable to find {}. Are you sure of your namespace package?".format(init_file))
bdist_wheel.write_record(self, bdist_dir, distinfo_dir)
cmdclass = {
'bdist_wheel': azure_bdist_wheel,
}
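
For reference, the hash field that ``write_record`` computes for each ``RECORD`` row is a SHA-256 digest in unpadded urlsafe base64, per the wheel convention. A standard-library sketch of that computation (``data`` stands in for a file's bytes):

.. code:: python

    import hashlib
    from base64 import urlsafe_b64encode

    data = b'example file contents'
    digest = hashlib.sha256(data).digest()
    hash_field = 'sha256=' + urlsafe_b64encode(digest).rstrip(b'=').decode('ascii')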

View file

@ -0,0 +1,3 @@
[bdist_wheel]
universal=1
azure-namespace-package=azure-storage-nspkg

View file

@ -0,0 +1,92 @@
#!/usr/bin/env python
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
import sys
from setuptools import setup, find_packages
try:
from azure_bdist_wheel import cmdclass
except ImportError:
from distutils import log as logger
logger.warn("Wheel is not available, disabling bdist_wheel hook")
cmdclass = {}
# azure v0.x is not compatible with this package
# azure v0.x used to have a __version__ attribute (newer versions don't)
try:
import azure
try:
ver = azure.__version__
raise Exception(
'This package is incompatible with azure=={}. '.format(ver) +
'Uninstall it with "pip uninstall azure".'
)
except AttributeError:
pass
except ImportError:
pass
# azure-storage v0.36.0 and prior are not compatible with this package
try:
import azure.storage
try:
ver = azure.storage.__version__
raise Exception(
'This package is incompatible with azure-storage=={}. '.format(ver) +
' Uninstall it with "pip uninstall azure-storage".'
)
except AttributeError:
pass
except ImportError:
pass
setup(
name='azure-storage-queue',
version='0.36.0',
description='Microsoft Azure Storage Queue Client Library for Python',
long_description=open('README.rst', 'r').read(),
license='Apache License 2.0',
author='Microsoft Corporation',
author_email='ascl@microsoft.com',
url='https://github.com/Azure/azure-storage-python',
classifiers=[
'Development Status :: 4 - Beta',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'License :: OSI Approved :: Apache Software License',
],
zip_safe=False,
packages=find_packages(),
install_requires=[
'azure-common>=1.1.5',
'cryptography',
'python-dateutil',
'requests',
'azure-storage-common'
] + (['futures'] if sys.version_info < (3, 0) else []),
cmdclass=cmdclass
)

View file

@ -0,0 +1,3 @@
# Breaking Changes azure-storage-table
> See the [Change Log](ChangeLog.md) for a summary of storage library changes.

View file

@ -0,0 +1,3 @@
# Change Log azure-storage-table
> See [BreakingChanges](BreakingChanges.md) for a detailed list of API breaks.

View file

@ -0,0 +1,2 @@
include *.rst
include azure_bdist_wheel.py

View file

@ -0,0 +1,183 @@
Microsoft Azure Storage SDK for Python
======================================
This project provides a client library in Python that makes it easy to
consume Microsoft Azure Storage services. For documentation please see
the Microsoft Azure `Python Developer Center`_ and our `API Reference`_ Page.
If you are looking for the Service Bus or Azure Management
libraries, please visit
https://github.com/Azure/azure-sdk-for-python.
Compatibility
=============
**IMPORTANT**: If you have an earlier version of the azure package
(version < 1.0), you should uninstall it before installing this package.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure==0.11.0 (or any version below 1.0), uninstall it first, then install it again:
.. code:: shell
pip uninstall azure
pip install azure
If you are upgrading from a version older than 0.30.0, see the upgrade doc, the
usage samples in the samples directory, and the ChangeLog and BreakingChanges.
**IMPORTANT**: If you have an earlier version of the azure-storage package
(version <= 0.36.0), you should uninstall it before installing the new packages.
You can check the version using pip:
.. code:: shell
pip freeze
If you see azure-storage==0.36.0 (or any version below 0.36.0), uninstall it first:
.. code:: shell
pip uninstall azure-storage
Features
========
- Blob
- Create/Read/Update/Delete Containers
- Create/Read/Update/Delete Blobs
- Advanced Blob Operations
- Queue
- Create/Delete Queues
- Insert/Peek Queue Messages
- Advanced Queue Operations
- Table
- Create/Read/Update/Delete Tables
- Create/Read/Update/Delete Entities
- Batch operations
- Advanced Table Operations
- Files
- Create/Update/Delete Shares
- Create/Update/Delete Directories
- Create/Read/Update/Delete Files
- Advanced File Operations
Getting Started
===============
Download
--------
The Azure Storage SDK for Python is composed of 4 packages, each corresponding to a different service:
- azure-storage-blob
- azure-storage-file
- azure-storage-queue
- azure-storage-table
Note: prior to and including version 0.36.0, there was a single package (azure-storage) containing all services.
Option 1: Via PyPi
~~~~~~~~~~~~~~~~~~
To install via the Python Package Index (PyPI), type:
::
pip install azure-storage-blob
pip install azure-storage-file
pip install azure-storage-queue
pip install azure-storage-table
Option 2: Source Via Git
~~~~~~~~~~~~~~~~~~~~~~~~
To get the source code of the SDK via git just type:
::
git clone git://github.com/Azure/azure-storage-python.git
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Option 3: Source Zip
~~~~~~~~~~~~~~~~~~~~
Download a zip of the code via GitHub or PyPi. Then, type:
::
cd ./azure-storage-python/azure-storage-blob
python setup.py install
Replace azure-storage-blob with azure-storage-file, azure-storage-queue, or azure-storage-table to install the other services.
Minimum Requirements
--------------------
- Python 2.7, 3.3, 3.4, 3.5, or 3.6.
- See setup.py for dependencies
Usage
-----
To use this SDK to call Microsoft Azure storage services, you need to
first `create an account`_.
Code Sample
-----------
See the samples directory for blob, queue, table, and file usage samples.
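
As a minimal taste of the table API, a hedged sketch follows; the account credentials and table name are placeholders, and the samples directory remains the authoritative reference:

.. code:: python

    from azure.storage.table import TableService

    table_service = TableService(account_name='myaccount', account_key='<key>')
    table_service.create_table('tasks')
    table_service.insert_entity(
        'tasks', {'PartitionKey': 'home', 'RowKey': '1', 'text': 'Take out the trash'})
    entity = table_service.get_entity('tasks', 'home', '1')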
Need Help?
==========
Be sure to check out the Microsoft Azure `Developer Forums on MSDN`_ or
the `Developer Forums on Stack Overflow`_ if you have trouble with the
provided code.
Contribute Code or Provide Feedback
===================================
If you would like to become an active contributor to this project, please
follow the instructions provided in `Azure Projects Contribution
Guidelines`_. You can find more details for contributing in the `CONTRIBUTING.md doc`_.
If you encounter any bugs with the library, please file an issue in the
`Issues`_ section of the project.
Learn More
==========
- `Python Developer Center`_
- `Azure Storage Service`_
- `Azure Storage Team Blog`_
- `API Reference`_
.. _Python Developer Center: http://azure.microsoft.com/en-us/develop/python/
.. _API Reference: https://azure-storage.readthedocs.io/en/latest/
.. _here: https://github.com/Azure/azure-storage-python/archive/master.zip
.. _create an account: https://account.windowsazure.com/signup
.. _Developer Forums on MSDN: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=windowsazuredata
.. _Developer Forums on Stack Overflow: http://stackoverflow.com/questions/tagged/azure+windows-azure-storage
.. _Azure Projects Contribution Guidelines: http://azure.github.io/guidelines.html
.. _Issues: https://github.com/Azure/azure-storage-python/issues
.. _Azure Storage Service: http://azure.microsoft.com/en-us/documentation/services/storage/
.. _Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/
.. _CONTRIBUTING.md doc: CONTRIBUTING.md

View file

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View file

@ -0,0 +1 @@
__import__('pkg_resources').declare_namespace(__name__)

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,7 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from .models import (
Entity,
EntityProperty,

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,10 +11,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
import sys
from dateutil import parser
if sys.version_info < (3,):
from urllib2 import quote as url_quote
else:
@ -22,14 +23,14 @@ else:
from json import (
loads,
)
from .._http import HTTPResponse
from azure.storage.common._http import HTTPResponse
from azure.common import (
AzureException,
)
from .._common_conversion import (
from azure.storage.common._common_conversion import (
_decode_base64_to_bytes,
)
from .._error import (
from azure.storage.common._error import (
_ERROR_DECRYPTION_FAILURE,
_validate_decryption_required,
)
@ -44,15 +45,15 @@ from .models import (
EdmType,
AzureBatchOperationError,
)
from ..models import (
from azure.storage.common.models import (
_list,
_HeaderDict,
)
from ._encryption import (
_decrypt_entity,
_extract_encryption_metadata,
)
def _get_continuation_from_response_headers(response):
marker = {}
for name, value in response.headers.items():
@ -60,6 +61,7 @@ def _get_continuation_from_response_headers(response):
marker[name[len('x-ms-continuation') + 1:]] = value
return marker
# Tables of conversions to and from entity types. We support specific
# datatypes, and beyond that the user can use an EntityProperty to get
# custom data type support.
@ -81,7 +83,6 @@ def _from_entity_datetime(value):
_EDM_TYPES = [EdmType.BINARY, EdmType.INT64, EdmType.GUID, EdmType.DATETIME,
EdmType.STRING, EdmType.INT32, EdmType.DOUBLE, EdmType.BOOLEAN]
_ENTITY_TO_PYTHON_CONVERSIONS = {
EdmType.BINARY: _from_entity_binary,
EdmType.INT32: _from_entity_int32,
@ -90,6 +91,7 @@ _ENTITY_TO_PYTHON_CONVERSIONS = {
EdmType.DATETIME: _from_entity_datetime,
}
def _convert_json_response_to_entity(response, property_resolver, require_encryption,
key_encryption_key, key_resolver):
'''
@ -108,7 +110,7 @@ def _convert_json_response_to_entity(response, property_resolver, require_encryp
return None
root = loads(response.body.decode('utf-8'))
return _decrypt_and_deserialize_entity(root, property_resolver, require_encryption,
return _decrypt_and_deserialize_entity(root, property_resolver, require_encryption,
key_encryption_key, key_resolver)
@ -159,7 +161,7 @@ def _convert_json_to_entity(entry_element, property_resolver, encrypted_properti
timestamp = properties.pop('Timestamp', None)
if timestamp:
entity['Timestamp'] = _from_entity_datetime(timestamp)
for name, value in properties.items():
mtype = edmtypes.get(name);
@ -169,9 +171,9 @@ def _convert_json_to_entity(entry_element, property_resolver, encrypted_properti
# This check avoids unexpected behavior from the user-defined
# property resolver.
if not (name == '_ClientEncryptionMetadata1' or \
name == '_ClientEncryptionMetadata2'):
mtype = property_resolver(partition_key, row_key,
name, value, mtype)
name == '_ClientEncryptionMetadata2'):
mtype = property_resolver(partition_key, row_key,
name, value, mtype)
# throw if the type returned is not a valid edm type
if mtype and mtype not in _EDM_TYPES:
@ -186,7 +188,7 @@ def _convert_json_to_entity(entry_element, property_resolver, encrypted_properti
mtype = EdmType.INT32
# no type info, property should parse automatically
if not mtype:
if not mtype:
entity[name] = value
else: # need an object to hold the property
conv = _ENTITY_TO_PYTHON_CONVERSIONS.get(mtype)
@ -208,7 +210,7 @@ def _convert_json_to_entity(entry_element, property_resolver, encrypted_properti
# extract etag from entry
etag = odata.get('etag')
if timestamp:
etag = 'W/"datetime\'' + url_quote(timestamp) + '\'"'
etag = 'W/"datetime\'' + url_quote(timestamp) + '\'"'
entity['etag'] = etag
return entity
@ -260,11 +262,12 @@ def _convert_json_response_to_entities(response, property_resolver, require_encr
return entities
def _decrypt_and_deserialize_entity(entity, property_resolver, require_encryption,
def _decrypt_and_deserialize_entity(entity, property_resolver, require_encryption,
key_encryption_key, key_resolver):
try:
_validate_decryption_required(require_encryption, key_encryption_key,
key_resolver)
key_resolver)
entity_iv, encrypted_properties, content_encryption_key, isJavaV1 = None, None, None, False
if (key_encryption_key is not None) or (key_resolver is not None):
entity_iv, encrypted_properties, content_encryption_key, isJavaV1 = \
@ -273,17 +276,18 @@ def _decrypt_and_deserialize_entity(entity, property_resolver, require_encryptio
raise AzureException(_ERROR_DECRYPTION_FAILURE)
entity = _convert_json_to_entity(entity, property_resolver, encrypted_properties)
if entity_iv is not None and encrypted_properties is not None and \
content_encryption_key is not None:
content_encryption_key is not None:
try:
entity = _decrypt_entity(entity, encrypted_properties, content_encryption_key,
entity_iv, isJavaV1)
entity_iv, isJavaV1)
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
return entity
def _extract_etag(response):
''' Extracts the etag from the response headers. '''
if response and response.headers:
@ -291,6 +295,7 @@ def _extract_etag(response):
return None
def _parse_batch_response(response):
if response is None or response.body is None:
return None
@ -308,6 +313,7 @@ def _parse_batch_response(response):
return responses
def _parse_batch_response_part(part):
lines = part.splitlines();
@ -329,6 +335,7 @@ def _parse_batch_response_part(part):
return HTTPResponse(int(status), reason.strip(), headers, body)
def _parse_batch_error(part):
doc = loads(part.body.decode('utf-8'))

View file

@ -1,4 +1,4 @@
#-------------------------------------------------------------------------
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@ -11,54 +11,50 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#--------------------------------------------------------------------------
# --------------------------------------------------------------------------
from .._error import(
_ERROR_UNSUPPORTED_ENCRYPTION_VERSION,
_ERROR_DECRYPTION_FAILURE,
_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM,
_ERROR_DATA_NOT_ENCRYPTED,
_validate_not_none,
_validate_key_encryption_key_wrap,
_validate_key_encryption_key_unwrap,
_validate_kek_id,
import os
from copy import deepcopy
from json import (
dumps,
loads,
)
from .._constants import(
_ENCRYPTION_PROTOCOL_V1,
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.hashes import (
Hash,
SHA256,
)
from .._common_conversion import(
from cryptography.hazmat.primitives.padding import PKCS7
from azure.storage.common._common_conversion import (
_decode_base64_to_bytes,
)
from .._encryption import(
from azure.storage.common._constants import (
_ENCRYPTION_PROTOCOL_V1,
)
from azure.storage.common._encryption import (
_generate_encryption_data_dict,
_dict_to_encryption_data,
_generate_AES_CBC_cipher,
_validate_and_unwrap_cek,
_EncryptionData,
_EncryptionAgent,
_WrappedContentKey,
_EncryptionAlgorithm
)
from ._error import(
from azure.storage.common._error import (
_ERROR_DECRYPTION_FAILURE,
_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM,
_validate_not_none,
_validate_key_encryption_key_wrap,
)
from ._error import (
_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION,
)
from .models import(
from .models import (
Entity,
EntityProperty,
EdmType,
)
from json import(
dumps,
loads,
)
import os
from copy import deepcopy
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.padding import PKCS7
from cryptography.hazmat.primitives.hashes import(
Hash,
SHA256,
)
def _encrypt_entity(entity, key_encryption_key, encryption_resolver):
'''
@ -98,8 +94,8 @@ def _encrypt_entity(entity, key_encryption_key, encryption_resolver):
# If the property resolver says it should be encrypted
# or it is an EntityProperty with the 'encrypt' property set.
if (isinstance(value, EntityProperty) and value.encrypt) or \
(encryption_resolver is not None \
and encryption_resolver(entity['PartitionKey'], entity['RowKey'], key)):
(encryption_resolver is not None \
and encryption_resolver(entity['PartitionKey'], entity['RowKey'], key)):
# Only strings can be encrypted and None is not an instance of str.
if isinstance(value, EntityProperty):
@ -108,9 +104,9 @@ def _encrypt_entity(entity, key_encryption_key, encryption_resolver):
else:
raise ValueError(_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION)
if not isinstance(value, str):
raise ValueError(_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION)
raise ValueError(_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION)
# Value is now confirmed to hold a valid string value to be encrypted
# Value is now confirmed to hold a valid string value to be encrypted
# and should be added to the list of encrypted properties.
encrypted_properties.append(key)
@ -137,11 +133,11 @@ def _encrypt_entity(entity, key_encryption_key, encryption_resolver):
encrypted_entity[key] = value
encrypted_properties = dumps(encrypted_properties)
# Generate the metadata iv.
metadataIV = _generate_property_iv(entity_initialization_vector,
entity['PartitionKey'], entity['RowKey'],
'_ClientEncryptionMetadata2', False)
metadataIV = _generate_property_iv(entity_initialization_vector,
entity['PartitionKey'], entity['RowKey'],
'_ClientEncryptionMetadata2', False)
encrypted_properties = encrypted_properties.encode('utf-8')
@ -161,6 +157,7 @@ def _encrypt_entity(entity, key_encryption_key, encryption_resolver):
encrypted_entity['_ClientEncryptionMetadata1'] = dumps(encryption_data)
return encrypted_entity
def _decrypt_entity(entity, encrypted_properties_list, content_encryption_key, entityIV, isJavaV1):
'''
Decrypts the specified entity using AES256 in CBC mode with 128 bit padding. Unwraps the CEK
@ -183,7 +180,7 @@ def _decrypt_entity(entity, encrypted_properties_list, content_encryption_key, e
_validate_not_none('entity', entity)
decrypted_entity = deepcopy(entity)
try:
try:
for property in entity.keys():
if property in encrypted_properties_list:
value = entity[property]
@ -197,13 +194,13 @@ def _decrypt_entity(entity, encrypted_properties_list, content_encryption_key, e
# Decrypt the property.
decryptor = cipher.decryptor()
decrypted_data = (decryptor.update(value.value) + decryptor.finalize())
# Unpad the data.
unpadder = PKCS7(128).unpadder()
decrypted_data = (unpadder.update(decrypted_data) + unpadder.finalize())
decrypted_data = decrypted_data.decode('utf-8')
decrypted_entity[property] = decrypted_data
decrypted_entity.pop('_ClientEncryptionMetadata1')
@ -212,6 +209,7 @@ def _decrypt_entity(entity, encrypted_properties_list, content_encryption_key, e
except:
raise AzureException(_ERROR_DECRYPTION_FAILURE)
def _extract_encryption_metadata(entity, require_encryption, key_encryption_key, key_resolver):
'''
Extracts the encryption metadata from the given entity, setting them to be utf-8 strings.
@ -235,7 +233,7 @@ def _extract_encryption_metadata(entity, require_encryption, key_encryption_key,
:rtype: tuple (bytes[], list, bytes[], bool)
'''
_validate_not_none('entity', entity)
try:
encrypted_properties_list = _decode_base64_to_bytes(entity['_ClientEncryptionMetadata2'])
encryption_data = entity['_ClientEncryptionMetadata1']
@ -245,22 +243,22 @@ def _extract_encryption_metadata(entity, require_encryption, key_encryption_key,
if require_encryption:
raise ValueError(_ERROR_ENTITY_NOT_ENCRYPTED)
else:
return (None,None,None,None)
return (None, None, None, None)
if not(encryption_data.encryption_agent.encryption_algorithm == _EncryptionAlgorithm.AES_CBC_256):
if not (encryption_data.encryption_agent.encryption_algorithm == _EncryptionAlgorithm.AES_CBC_256):
raise ValueError(_ERROR_UNSUPPORTED_ENCRYPTION_ALGORITHM)
content_encryption_key = _validate_and_unwrap_cek(encryption_data, key_encryption_key, key_resolver)
# Special check for compatibility with Java V1 encryption protocol.
isJavaV1 = (encryption_data.key_wrapping_metadata is None) or \
((encryption_data.encryption_agent.protocol == _ENCRYPTION_PROTOCOL_V1) and \
'EncryptionLibrary' in encryption_data.key_wrapping_metadata and \
'Java' in encryption_data.key_wrapping_metadata['EncryptionLibrary'])
metadataIV = _generate_property_iv(encryption_data.content_encryption_IV,
entity['PartitionKey'], entity['RowKey'],
'_ClientEncryptionMetadata2', isJavaV1)
cipher = _generate_AES_CBC_cipher(content_encryption_key, metadataIV)
@@ -283,6 +281,7 @@ def _extract_encryption_metadata(entity, require_encryption, key_encryption_key,
return (encryption_data.content_encryption_IV, encrypted_properties_list, content_encryption_key, isJavaV1)
def _generate_property_iv(entity_iv, pk, rk, property_name, isJavaV1):
'''
Uses the entity_iv, partition key, and row key to generate and return
@@ -293,7 +292,7 @@ def _generate_property_iv(entity_iv, pk, rk, property_name, isJavaV1):
digest.update(entity_iv +
(rk + pk + property_name).encode('utf-8'))
else:
digest.update(entity_iv +
(pk + rk + property_name).encode('utf-8'))
propertyIV = digest.finalize()
return propertyIV[:16]
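The tail of `_generate_property_iv` above shows the digest input ordering. A self-contained sketch of the whole derivation follows; the hunk does not show how the digest object is constructed, so the SHA-256 choice is an assumption, and `derive_property_iv` is an illustrative name.

.. code:: python

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes

    def derive_property_iv(entity_iv, pk, rk, property_name, is_java_v1):
        # Java V1 hashed RowKey + PartitionKey; the current protocol hashes
        # PartitionKey + RowKey. Both truncate the digest to one AES block.
        ordered_keys = (rk + pk) if is_java_v1 else (pk + rk)
        digest = hashes.Hash(hashes.SHA256(), default_backend())  # hash assumed
        digest.update(entity_iv + (ordered_keys + property_name).encode('utf-8'))
        return digest.finalize()[:16]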

View file

@@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -11,9 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.storage.common._error import (
_validate_not_none,
_ERROR_VALUE_NONE_OR_EMPTY,
)
@@ -42,10 +42,12 @@ _ERROR_TYPE_NOT_SUPPORTED = 'Type not supported when sending data to the service
_ERROR_VALUE_TOO_LARGE = '{0} is too large to be cast to type {1}.'
_ERROR_UNSUPPORTED_TYPE_FOR_ENCRYPTION = 'Encryption is only supported for not None strings.'
def _validate_object_has_param(param_name, object):
if object.get(param_name) is None:
raise ValueError(_ERROR_VALUE_NONE_OR_EMPTY.format(param_name))
def _validate_entity(entity, encrypt=None):
# Validate entity exists
_validate_not_none('entity', entity)
@@ -60,7 +62,7 @@ def _validate_entity(entity, encrypt=None):
# Two properties are added during encryption. Validate sufficient space
max_properties = 255
if (encrypt):
max_properties = max_properties - 2
# Validate there are not more than 255 properties including Timestamp
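The two-slot reservation above exists because encryption appends `_ClientEncryptionMetadata1` and `_ClientEncryptionMetadata2` to the entity. Restated as a tiny standalone check; the helper name is illustrative.

.. code:: python

    def check_property_headroom(entity, encrypt):
        # Table entities allow 255 properties including Timestamp; client-side
        # encryption consumes two of them for its metadata properties.
        limit = 255 - 2 if encrypt else 255
        if len(entity) > limit:
            raise ValueError('Entity has {0} properties; limit is {1}.'
                             .format(len(entity), limit))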

View file

@@ -1,4 +1,4 @@
# -------------------------------------------------------------------------
# Copyright (c) Microsoft. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -11,19 +11,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# --------------------------------------------------------------------------
from azure.storage.common._common_conversion import (
_to_str,
)
from azure.storage.common._error import (
_validate_not_none,
_validate_encryption_required,
_validate_encryption_unsupported,
)
from azure.storage.common._http import HTTPRequest
from azure.storage.common._serialization import (
_get_request_body,
)
from ._encryption import (
_encrypt_entity,
)
from ._error import (
_validate_entity,
)
@@ -33,9 +36,7 @@ from ._serialization import (
_DEFAULT_CONTENT_TYPE_HEADER,
_DEFAULT_PREFER_HEADER,
)
def _get_entity(partition_key, row_key, select, accept):
'''
@@ -51,6 +52,7 @@ def _get_entity(partition_key, row_key, select, accept):
return request
def _insert_entity(entity, encryption_required=False,
key_encryption_key=None, encryption_resolver=None):
'''
@@ -76,12 +78,13 @@ def _insert_entity(entity, encryption_required=False,
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
_DEFAULT_PREFER_HEADER[0]: _DEFAULT_PREFER_HEADER[1]
}
if (key_encryption_key):
entity = _encrypt_entity(entity, key_encryption_key, encryption_resolver)
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _update_entity(entity, if_match, encryption_required=False,
key_encryption_key=None, encryption_resolver=None):
'''
@@ -108,12 +111,13 @@ def _update_entity(entity, if_match, encryption_required=False,
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
'If-Match': _to_str(if_match),
}
if (key_encryption_key):
entity = _encrypt_entity(entity, key_encryption_key, encryption_resolver)
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _merge_entity(entity, if_match, require_encryption=False, key_encryption_key=None):
'''
Constructs a merge entity request.
@@ -133,6 +137,7 @@ def _merge_entity(entity, if_match, require_encryption=False, key_encryption_key
return request
def _delete_entity(partition_key, row_key, if_match):
'''
Constructs a delete entity request.
@@ -149,6 +154,7 @@ def _delete_entity(partition_key, row_key, if_match):
return request
def _insert_or_replace_entity(entity, require_encryption=False,
key_encryption_key=None, encryption_resolver=None):
'''
@@ -164,12 +170,13 @@ def _insert_or_replace_entity(entity, require_encryption=False,
_DEFAULT_ACCEPT_HEADER[0]: _DEFAULT_ACCEPT_HEADER[1],
}
if (key_encryption_key):
entity = _encrypt_entity(entity, key_encryption_key, encryption_resolver)
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
def _insert_or_merge_entity(entity, require_encryption=False, key_encryption_key=None):
'''
Constructs an insert or merge entity request.
@@ -195,4 +202,4 @@ def _insert_or_merge_entity(entity, require_encryption=False, key_encryption_key
}
request.body = _get_request_body(_convert_entity_to_json(entity))
return request
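Every write-path builder above applies the same gate: the entity is encrypted only when a `key_encryption_key` is supplied. A hedged caller-side sketch follows; the `(pk, rk, property_name) -> bool` resolver shape follows the SDK's encryption samples and should be verified against the shipped docs, and both helper names are illustrative.

.. code:: python

    def ssn_only_resolver(pk, rk, property_name):
        # Hypothetical resolver: opt only the 'ssn' property into encryption.
        return property_name == 'ssn'

    def maybe_encrypt(entity, key_encryption_key, encryption_resolver=None):
        # Mirrors the `if (key_encryption_key):` gate in the builders above.
        if key_encryption_key:
            return _encrypt_entity(entity, key_encryption_key, encryption_resolver)
        return entity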

Some files were not shown because too many files have changed in this diff.