* Added tier parameter to the upload block blob function signature, fixed usage, and wrote a test case for validation.
* Added tier parameter in:
a. CopyFromURL and CommitBlockList of Block Blob
b. Create (Page Blob)
Fixed all occurrences
* Minor Change
* Added test
* Generated code for 12-12-2019 spec
* Fix test
* Changes
* Basic testing and modifications to the WithVersionId function.
* Added Tags and Versions in BlobListingDetails.
* Added Tests
* Added TestCases
* Commented out tests which require versioning disabled.
* Added Tests
* Test cases 1-to-1 with the Python SDK
* Moved all tests to the same file for easier access
Co-authored-by: zezha-msft <zezha@microsoft.com>
During test execution `TestCopyFromReader` hit a data race that caused the test to fail with a checksum mismatch.
Because the `id` struct's methods are not thread-safe, there is a race condition: two upload threads can request an ID at the same moment, and both can end up with the same ID.
The simplest solution is to move the ID assignment into the thread that feeds chunks into the channel. This ensures chunk IDs are assigned sequentially and removes the race condition.
The `Parallelism` value isn't set to its default until after the `operationChannel` is created. This causes `operationChannel` to be created with a buffer size of zero (unbuffered) instead of what I assume is the intended value of `5`, so the goroutines bottleneck on the channel, as only one message can pass through at a time.
* Refactor of UploadStreamToBlockBlobOptions
Refactor to remove bugs and extra complexity.
* Migrate stream code to own file. Add concurrency.
After benchmarking, the code was 25% slower in azcopy without concurrency, regardless of buffer sizing.
This change reintroduces concurrency, but with a much simpler model, while still eliminating the atomic operations.
* Update to use io.ReadFull()
Adding test.
Benchmarks are now on par with the original code. Here are 10 runs using the current azcopy binary and a binary built from the changed code:
Binary = bins/azcopy_original(run 1)
Benchmark 1KiB file: 338.314573ms
Benchmark 1MiB file: 484.967288ms
Benchmark 10MiB file: 760.810541ms
Benchmark 100MiB file: 1.351661794s
Benchmark 1GiB file: 10.826069714s
Binary = bins/azcopy_original(run 2)
Benchmark 1KiB file: 207.941537ms
Benchmark 1MiB file: 460.838416ms
Benchmark 10MiB file: 760.783836ms
Benchmark 100MiB file: 1.501405998s
Benchmark 1GiB file: 7.18717018s
Binary = bins/azcopy_original(run 3)
Benchmark 1KiB file: 212.47363ms
Benchmark 1MiB file: 467.623706ms
Benchmark 10MiB file: 698.447313ms
Benchmark 100MiB file: 1.292167757s
Benchmark 1GiB file: 7.637774779s
Binary = bins/azcopy_original(run 4)
Benchmark 1KiB file: 276.746547ms
Benchmark 1MiB file: 465.676606ms
Benchmark 10MiB file: 646.126277ms
Benchmark 100MiB file: 1.087617614s
Benchmark 1GiB file: 6.546629743s
Binary = bins/azcopy_original(run 5)
Benchmark 1KiB file: 224.753013ms
Benchmark 1MiB file: 468.194201ms
Benchmark 10MiB file: 658.754858ms
Benchmark 100MiB file: 1.287728254s
Benchmark 1GiB file: 7.349753091s
Binary = bins/azcopy_original(run 6)
Benchmark 1KiB file: 215.433224ms
Benchmark 1MiB file: 468.2654ms
Benchmark 10MiB file: 736.859155ms
Benchmark 100MiB file: 1.288282248s
Benchmark 1GiB file: 9.901807484s
Binary = bins/azcopy_original(run 7)
Benchmark 1KiB file: 309.374802ms
Benchmark 1MiB file: 466.3705ms
Benchmark 10MiB file: 764.919816ms
Benchmark 100MiB file: 1.288119942s
Benchmark 1GiB file: 12.568692895s
Binary = bins/azcopy_original(run 8)
Benchmark 1KiB file: 223.696311ms
Benchmark 1MiB file: 459.585207ms
Benchmark 10MiB file: 861.388787ms
Benchmark 100MiB file: 2.001739213s
Benchmark 1GiB file: 14.062394287s
Binary = bins/azcopy_original(run 9)
Benchmark 1KiB file: 213.478124ms
Benchmark 1MiB file: 472.516087ms
Benchmark 10MiB file: 888.345447ms
Benchmark 100MiB file: 1.712670977s
Benchmark 1GiB file: 7.351456844s
Binary = bins/azcopy_original(run 10)
Benchmark 1KiB file: 211.893325ms
Benchmark 1MiB file: 461.4607ms
Benchmark 10MiB file: 810.622545ms
Benchmark 100MiB file: 1.649993952s
Benchmark 1GiB file: 12.236548842s
Binary = bins/azcopy_changed(run 1)
Benchmark 1KiB file: 253.721968ms
Benchmark 1MiB file: 498.897549ms
Benchmark 10MiB file: 787.010372ms
Benchmark 100MiB file: 1.381749395s
Benchmark 1GiB file: 10.446411529s
Binary = bins/azcopy_changed(run 2)
Benchmark 1KiB file: 252.710169ms
Benchmark 1MiB file: 531.817803ms
Benchmark 10MiB file: 829.688513ms
Benchmark 100MiB file: 1.385873084s
Benchmark 1GiB file: 8.47119338s
Binary = bins/azcopy_changed(run 3)
Benchmark 1KiB file: 257.306962ms
Benchmark 1MiB file: 505.047536ms
Benchmark 10MiB file: 784.31337ms
Benchmark 100MiB file: 1.555737854s
Benchmark 1GiB file: 8.552681344s
Binary = bins/azcopy_changed(run 4)
Benchmark 1KiB file: 247.846574ms
Benchmark 1MiB file: 497.231545ms
Benchmark 10MiB file: 815.651525ms
Benchmark 100MiB file: 2.697350445s
Benchmark 1GiB file: 7.516749079s
Binary = bins/azcopy_changed(run 5)
Benchmark 1KiB file: 252.352667ms
Benchmark 1MiB file: 501.701337ms
Benchmark 10MiB file: 707.436865ms
Benchmark 100MiB file: 1.36936469s
Benchmark 1GiB file: 9.73502422s
Binary = bins/azcopy_changed(run 6)
Benchmark 1KiB file: 310.863688ms
Benchmark 1MiB file: 502.052735ms
Benchmark 10MiB file: 1.002850071s
Benchmark 100MiB file: 1.506176604s
Benchmark 1GiB file: 11.832881097s
Binary = bins/azcopy_changed(run 7)
Benchmark 1KiB file: 257.951257ms
Benchmark 1MiB file: 504.845129ms
Benchmark 10MiB file: 897.192408ms
Benchmark 100MiB file: 3.660229033s
Benchmark 1GiB file: 8.277701479s
Binary = bins/azcopy_changed(run 8)
Benchmark 1KiB file: 248.399669ms
Benchmark 1MiB file: 510.47592ms
Benchmark 10MiB file: 660.498819ms
Benchmark 100MiB file: 983.16489ms
Benchmark 1GiB file: 9.696608161s
Binary = bins/azcopy_changed(run 9)
Benchmark 1KiB file: 256.139558ms
Benchmark 1MiB file: 509.733119ms
Benchmark 10MiB file: 787.046948ms
Benchmark 100MiB file: 1.304473257s
Benchmark 1GiB file: 10.392113698s
Binary = bins/azcopy_changed(run 10)
Benchmark 1KiB file: 253.185361ms
Benchmark 1MiB file: 500.357929ms
Benchmark 10MiB file: 852.302359ms
Benchmark 100MiB file: 1.555795815s
Benchmark 1GiB file: 9.234134017s
* Improve comments, use getErr() instead of old statement, add test for write errors
Fixed some comments.
Added some TODOs.
Replaced an error-detection `select` statement that could simply call getErr() instead.
Added support and a test for handling a write error.
* Updates to comments provided by ze
* Reduces construction of chunk IDs for the commit list, moves azblob_test to azblob
The tests belong in package azblob rather than azblob_test, allowing access to private types and removing the need for the azblob. prefix. I could find no reason to keep them as a separate package in a non-Go-standard way.
This package's departure from the blobstore standard of a new UUID per chunk has merits, as discussed with Adele (faster, less memory, possible upload resumes, etc.), so it was decided to keep it.
However, I wanted to make it easy to auto-increment IDs and provide the list of IDs to commit, instead of recreating them at the end at a CPU cost we didn't need to spend (we were going to spend the memory anyway). So I provided a better way to get the IDs.
This change required changes to the tests. Most tests use a single block ID; those now use a var created in init() that is a UUID plus math.MaxUint32, which lets us test the maximum value.
The others now use our id type. This changed one test that was trying to test ordering, which wasn't necessary.
All tests are passing.
* Update gomod to import uuid an update of adal
* Update go.mod via go tidy command
The adal change was made because one of the zt tests uses it. It should always have been there and won't cause any change in functionality.
errors gets added as an indirect dependency from check.v1, which it should always have had; this is because check doesn't have a go.mod file.
Adds a minimum Go compiler version of 1.13.
* Update go.mod
* Just the mod updates
* Get it back into shape
A git mishap on my side; I had to hand-patch things back into shape.
Co-authored-by: John Doak <jdoak@janama-2.redmond.corp.microsoft.com>
Co-authored-by: John Doak <jdoak@Fan061719.northamerica.corp.microsoft.com>
* Fix a nil pointer dereference when retrying
* Set all of the request hosts to secondary so it goes to the right place
I caught in Fiddler that this was still pinging the primary URL when trying to access the secondary.
* Pay attention to struct embedding
* Updated generated files & function signatures
* Expose GetUserDelegationKey to end users
* Documentation for GetUserDelegationKey; Simple method to create KeyInfo
* Not supported; Removed
* Added test & made it work
* Added swagger, changelog
* Discard leftover debug information
* Move up to Go 1.12.1
* Refactor sas_service.go and tests that use it
* Renamed everything from identity -> user delegation, fixed test
* Protected against nil pointer dereference with nil checking
* Enable container SAS for user delegation SAS
* Isolate account key type credentials into StorageAccountCredential
Prevents breaking calls to NewSASQueryParameters. *May* have broken calls to ComputeHMACSHA256 (but why was it even a pointer in the first place?)
* Update comments
* Type assertions: Not even once.
* Update comments and param tags
* Remove TODO, rename test
* Correct UDK call to FormatTimesForSASSigning
* Snapshot SAS token added
* Changelog addition
* Update zc_sas_query_params.go to include snapshot time
* Test created, adjustments made to pass test
* Renamed sas_blob_snapshot_test.go to conform with naming standard
* Corrected logic & added comment to clarify purpose
* Write proper unit test & make it work
* Made snapshot SAS test more concise
* Should not supply DeleteSnapshotOptions when deleting a single snapshot
* Add expected failure condition for attempting to auth non-snapshot
runtime.FuncForPC does not guarantee you will receive *the entire stack*: functions can be inlined by the compiler, and the test would panic when it failed to find a name for an inlined function.