* Indicated the Data Migration Tool is offered as a community-supported repo
* Minor text fix
* Testing banner
* Test banner 2
* Test banner 3
* Exe names
* Added tutorial
* Removed link to doc
* Narrowed list of scenarios in heading
* Removed some Microsoft Docs tags
* small fix
* Other sources
* Case
* API links
* Checklist
* Fixed all local links
* Removed additional docs tags
* Reorganized to capture top priorities
* Fix
* Reorg
* Indent
* Test
* Return links
* Within-section links
* Link fix
* Added image files
* Moved images
* Image refs fixed
* Other
* Priority
* APIs
* Capitalization
* MongoDB
* Link fix
* Fixed word order
* Fixed find-replace issues
* RU range
* Links
* Bad link
Co-authored-by: Andy Feldman <anfeldma@microsoft.com>
Changes in this PR include:
* Update DocumentDB SDK version to Microsoft.Azure.DocumentDB.2.2.1
* Update Tables SDK version to Microsoft.Azure.CosmosDB.Table.2.0.0
* Update Mongo driver version to MongoDB.Driver.2.7.0
* Fix copying of internal fields in Azure Tables
* Add retry capabilities to Azure Tables and the Cosmos DB Tables API (see the retry sketch after this list)
* Add a new logger: a Cosmos DB Tables logger for error logging
* Improve error handling for command-line parsing
* Add EULA and third-party license file to the codebase
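As a rough illustration of the retry item above, here is a minimal exponential-backoff sketch in C#. The helper name, attempt count, and delays are hypothetical and do not reflect the tool's actual retry policy.

```csharp
// Hypothetical retry helper -- a sketch only, not the tool's actual policy.
using System;
using System.Threading.Tasks;

static async Task<T> WithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await operation();
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
        }
    }
}

// Example usage with a trivial stand-in operation:
int result = await WithRetryAsync(() => Task.FromResult(42));
Console.WriteLine(result);
```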
* On import, the Cosmos DB provider now ignores Json.NET metadata.
All Json.NET metadata is now passed through unchanged. This prevents failures when the metadata happens to be incorrect or unrecognized by the Json.NET serialization code.
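A minimal sketch of this pass-through behavior, assuming it corresponds to Json.NET's MetadataPropertyHandling.Ignore setting (the provider's actual code may differ):

```csharp
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

// Assumption: Ignore makes Json.NET treat $type/$id/$ref as ordinary properties,
// so unrecognized metadata is preserved instead of breaking deserialization.
var settings = new JsonSerializerSettings
{
    MetadataPropertyHandling = MetadataPropertyHandling.Ignore
};

string doc = @"{ ""$type"": ""Some.Unknown.Type"", ""id"": ""1"", ""name"": ""widget"" }";
JObject parsed = JsonConvert.DeserializeObject<JObject>(doc, settings);
Console.WriteLine(parsed["$type"]); // Some.Unknown.Type -- metadata passed through
```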
* Fixed NumOpsPerBatch and the default value of NumBytestoBufferBeforeFlushing
The Azure Tables provider code was not honoring the max-operations-per-batch limit of 100. This is now fixed.
There is a setting for the maximum number of bytes to buffer before splitting the buffer into batches and flushing them. Its default value was 1 GB, and one customer reported excessive memory usage because of it; the default is now 10 MB. Note that when reading values from Std/Preview Tables, partition keys are read in sequence -- we get all records with a given partition key before moving on to the next one -- so a lower memory buffer limit should be sufficient.
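For illustration, here is a minimal sketch of the batching rules just described. All names are hypothetical and this is not the provider's actual code: operations are buffered until the 10 MB threshold is reached, then split into per-partition-key batches of at most 100 operations.

```csharp
// Hypothetical buffering/batching sketch -- illustrative only.
using System.Collections.Generic;

const int MaxOpsPerBatch = 100;                 // Azure Tables batch limit
const long MaxBytesToBuffer = 10 * 1024 * 1024; // new default: 10 MB (was 1 GB)

var buffer = new List<(string PartitionKey, byte[] Payload)>();
long bufferedBytes = 0;

void Add(string partitionKey, byte[] payload)
{
    buffer.Add((partitionKey, payload));
    bufferedBytes += payload.Length;
    if (bufferedBytes >= MaxBytesToBuffer)
        Flush();
}

void Flush()
{
    // Records arrive ordered by partition key, so a single pass suffices.
    var batch = new List<(string Key, byte[] Payload)>();
    string? currentKey = null;
    foreach (var (key, payload) in buffer)
    {
        // Start a new batch on a partition-key change or at the 100-op cap.
        if (batch.Count == MaxOpsPerBatch || (currentKey != null && key != currentKey))
        {
            SubmitBatch(batch);
            batch = new List<(string Key, byte[] Payload)>();
        }
        batch.Add((key, payload));
        currentKey = key;
    }
    if (batch.Count > 0)
        SubmitBatch(batch);
    buffer.Clear();
    bufferedBytes = 0;
}

void SubmitBatch(List<(string Key, byte[] Payload)> batch)
{
    // Execute the batch as a single Azure Tables batch operation (omitted).
}
```

Because the input stream is already grouped by partition key, the buffer only ever needs to hold a small window of records at a time, which is why the lower 10 MB default is expected to suffice.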
* The project now builds for x64, not Any CPU.
This reflects the fact that the Cosmos DB provider depends on two native 64-bit assemblies.