Fix resume counter for upload

- Various doc updates
Fred Park 2017-06-11 12:06:55 -07:00
Parent 45ef468ceb
Commit a79cd3ab98
5 changed files: 13 additions and 10 deletions

@@ -39,6 +39,7 @@ for Block blobs)
 * Include and exclude filtering support
 * Rsync-like delete support
 * No clobber support in either direction
+* Automatic content type tagging
 * File logging support

 ## Installation

@@ -514,7 +514,9 @@ class Descriptor(object):
         # chunk are complete
         if blobxfer.util.is_not_empty(self._ase.replica_targets):
             if chunk_num not in self._replica_counters:
-                self._replica_counters[chunk_num] = 0
+                # start counter at -1 since we need 1 "extra" for the
+                # primary in addition to the replica targets
+                self._replica_counters[chunk_num] = -1
             self._replica_counters[chunk_num] += 1
             if (self._replica_counters[chunk_num] !=
                     len(self._ase.replica_targets)):
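The off-by-one fixed in the hunk above can be illustrated in isolation. Each chunk must complete once on the primary target plus once per replica target, so starting the counter at 0 made the `== len(replica_targets)` check fire one completion too early on resume. A minimal sketch with hypothetical names (`chunk_complete` is not a real blobxfer function):

```python
# Hypothetical sketch of the corrected resume-counter logic: the counter
# starts at -1 so the primary's completion absorbs the "extra" increment,
# and the chunk is only considered done after primary + all replicas finish.

def chunk_complete(counters, chunk_num, replica_targets):
    """Record one completion for chunk_num; return True once the chunk
    has completed on the primary and on every replica target."""
    if chunk_num not in counters:
        # start at -1: one "extra" completion is needed for the primary
        counters[chunk_num] = -1
    counters[chunk_num] += 1
    return counters[chunk_num] == len(replica_targets)

counters = {}
replicas = ['replica-a', 'replica-b']  # hypothetical replica targets
# three completions expected per chunk: primary + 2 replicas
results = [chunk_complete(counters, 0, replicas) for _ in range(3)]
# results: [False, False, True]
```

With the old initialization of 0, the same sequence would have reported completion after only two of the three required acknowledgments.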

@@ -50,16 +50,16 @@ configuration file to define multiple destinations.
 ### Stripe
 `stripe` mode will splice a file into multiple chunks and scatter these
-chunks across destinations specified. These destinations can be different
-a single or multiple containers within the same storage account or even
-containers distributed across multiple storage accounts if single storage
-account bandwidth limits are insufficient.
+chunks across destinations specified. These destinations can be single or
+multiple containers within the same storage account or even containers
+distributed across multiple storage accounts if single storage account
+bandwidth limits are insufficient.
 `blobxfer` will slice the source file into multiple chunks where the
 `stripe_chunk_size_bytes` is the stripe width of each chunk. This parameter
 will allow you to effectively control how many blobs/files are created on
 Azure. `blobxfer` will then round-robin through all of the destinations
-specified to store the slices. Information required to reconstruct the
+specified to scatter the slices. Information required to reconstruct the
 original file is stored on the blob or file metadata. It is important to
 keep this metadata in-tact or reconstruction will fail.
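The slice-and-round-robin behavior described in the hunk above can be sketched as follows. This is a simplified illustration, not blobxfer's implementation: the function name, the destination strings, and the in-memory slicing are all hypothetical stand-ins for the real chunked streaming upload.

```python
# Hypothetical sketch of stripe mode: split source data into fixed-width
# chunks (the stripe_chunk_size_bytes analog) and assign each chunk to a
# destination in round-robin order.

def stripe(data, chunk_size, destinations):
    """Return a list of (destination, chunk_index, chunk_bytes) tuples."""
    assignments = []
    for offset in range(0, len(data), chunk_size):
        chunk_index = offset // chunk_size
        # round-robin: chunk i goes to destination i mod len(destinations)
        dest = destinations[chunk_index % len(destinations)]
        assignments.append((dest, chunk_index, data[offset:offset + chunk_size]))
    return assignments

data = b'0123456789'  # stand-in for file contents
plan = stripe(data, chunk_size=3, destinations=['acct1/cont', 'acct2/cont'])
# chunks 0 and 2 land on acct1/cont; chunks 1 and 3 on acct2/cont
```

Reconstruction then only requires knowing the chunk width and the destination ordering, which is why the metadata stored on the blob or file must stay intact.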

@@ -75,7 +75,7 @@ instead.
 ## MD5 Hashing
 MD5 hashing will impose some performance penalties to check if the file
 should be uploaded or downloaded. For instance, if uploading and the local
-file is determined to be different than it's remote counterpart, then the
+file is determined to be different than its remote counterpart, then the
 time spent performing the MD5 comparison is effectively "lost."
 ## Client-side Encryption
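The skip-on-match check that the MD5 hunk above describes can be sketched with the standard library. This is a simplified illustration under assumptions: `should_upload` is a hypothetical helper, and hex digests are used for comparison, whereas Azure actually carries the MD5 as base64-encoded `Content-MD5`.

```python
# Hedged sketch of MD5-based skip logic: hash the local content and
# compare it to the remote hash; only transfer when they differ.
import hashlib

def should_upload(local_bytes, remote_md5_hex):
    """Return True if the local content differs from the remote MD5."""
    local_md5 = hashlib.md5(local_bytes).hexdigest()
    return local_md5 != remote_md5_hex

content = b'hello'
remote = hashlib.md5(content).hexdigest()
# matching hashes: the upload is skipped, so the hashing time paid off;
# differing hashes: the file is transferred anyway and that time is "lost"
unchanged = should_upload(content, remote)
changed = should_upload(b'changed', remote)
```

This also makes the stated performance trade-off concrete: the hash must be computed in full before the transfer decision, so on a mismatch the comparison buys nothing.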

@@ -8,8 +8,8 @@ Azure Files.
 * `stdin` sources cannot be encrypted.
 * Azure KeyVault key references are currently not supported.
-### Platform-specific Issues
-* File attribute store/restore is not supported on Windows.
+### Platform-specific
+* File attribute store/restore is currently not supported on Windows.
 ### Resume Support
 * Encrypted uploads/downloads cannot currently be resumed as the Python
@@ -22,6 +22,6 @@ SHA256 object cannot be pickled.
 File share which has empty directories.
 * Empty directories are not deleted if `--delete` is specified and no files
 remain in the directory on the Azure File share.
-* Directories with no characters, e.g. `/mycontainer//mydir` are not
+* Directories with no characters, e.g. `mycontainer//mydir` are not
 supported.
 * `/dev/null` or `nul` destinations are not supported.