Unify all markup C++ code sections to use 'cpp' (#5138)

* Unify all markup C++ code sections to use 'cpp'
* Update broken link

Co-authored-by: Anton Kolesnyk <antkmsft@users.noreply.github.com>

This commit is contained in:
Parent: f816bb194b
Commit: 030ad9a6ca
@@ -162,7 +162,7 @@ Azure C++ SDK headers needed are located within the `<azure>` folder, with sub-f
 
 Here's an example application to help you get started:
 
-```C++
+```cpp
 #include <iostream>
 
 // Include the necessary SDK headers
@@ -231,7 +231,7 @@ The main shared concepts of `Azure Core` include:
 
 Many client library operations **return** the templated `Azure::Core::Response<T>` type from the API calls. This type lets you get the raw HTTP response from the service request the Azure service APIs make, along with the operation's result for more API-specific details. The templated `T` operation result can be extracted from the response using the `Value` field.
 
-```C++
+```cpp
 // Azure service operations return a Response<T> templated type.
 Azure::Response<Models::BlobProperties> propertiesResponse = blockBlobClient.GetProperties();
 
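The `Response<T>` pattern above can be illustrated without the Azure SDK. Below is a minimal sketch, where `MyResponse` and `FetchLength` are hypothetical stand-ins (not SDK types), showing how a typed result travels alongside transport metadata and is read from a `Value` field:

```cpp
#include <cstddef>
#include <string>

// Hypothetical stand-in for Azure::Response<T>: pairs the typed result
// with transport-level metadata (here just an HTTP status code).
template <typename T>
struct MyResponse {
    int StatusCode; // raw transport detail
    T Value;        // the API-specific result of type T
};

// Hypothetical operation returning a MyResponse<std::size_t>.
MyResponse<std::size_t> FetchLength(const std::string& payload) {
    return MyResponse<std::size_t>{200, payload.size()};
}
```

Callers read `result.Value` for the typed payload and inspect the rest of the response only when they need transport details.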
@@ -249,7 +249,7 @@ Some operations take a long time to complete and require polling for their statu
 
 You can poll intermittently to check whether the operation has finished by calling `Poll()` in a loop on the returned `Operation<T>`, and track its progress with `Value()` while the operation is not done (checked via `IsDone()`). Your custom per-poll logic, such as logging progress, can go in that loop. Alternatively, if you just want to wait until the operation completes, you can use `PollUntilDone()`.
 
-```C++
+```cpp
 std::string sourceUri = "<a uri to the source blob to copy>";
 
 // Typically, long running operation APIs have names that begin with Start.
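The `Poll()`/`IsDone()`/`PollUntilDone()` pattern above can be sketched with a stubbed operation type; `CountdownOperation` below is a hypothetical stand-in for an SDK `Operation<T>`, not a real Azure type:

```cpp
#include <string>

// Hypothetical stand-in for an SDK Operation<T> that completes after a
// fixed number of polls.
class CountdownOperation {
    int m_remaining;
public:
    explicit CountdownOperation(int polls) : m_remaining(polls) {}
    // One status check against the (simulated) service.
    void Poll() { if (m_remaining > 0) --m_remaining; }
    bool IsDone() const { return m_remaining == 0; }
    // Block until the operation completes, returning the final value.
    std::string PollUntilDone() {
        while (!IsDone()) {
            Poll(); // per-poll custom logic (logging progress, etc.) would go here
        }
        return "done";
    }
};
```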
@@ -48,7 +48,7 @@ Following the examples from opentelemetry-cpp, the following can be used
 to establish an OpenTelemetry exporter which logs to the console or to an
 in-memory logger.
 
-```c++
+```cpp
 opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>
 CreateOpenTelemetryProvider()
 {
@@ -71,13 +71,13 @@ CreateOpenTelemetryProvider()
 }
 ```
 
-Other exporters exist to export to [Jaeger](https://github.com/open-telemetry/opentelemetry-cpp/tree/main/exporters/jaeger),
+Other exporters exist to export to [Elasticsearch](https://github.com/open-telemetry/opentelemetry-cpp/tree/main/exporters/elasticsearch),
 [Windows ETW](https://github.com/open-telemetry/opentelemetry-cpp/tree/main/exporters/etw) and others.
 
 Once the `opentelemetry::trace::TracerProvider` has been created, the client needs to create a new `Azure::Core::Tracing::OpenTelemetry::OpenTelemetryProvider`, which
 functions as the integration layer between OpenTelemetry and Azure Core:
 
-```c++
+```cpp
 std::shared_ptr<Azure::Core::Tracing::TracerProvider> traceProvider
     = Azure::Core::Tracing::OpenTelemetry::OpenTelemetryProvider::Create(CreateOpenTelemetryProvider());
 ```
@@ -92,7 +92,7 @@ While using the ApplicationContext is the simplest mechanism for integration Ope
 To enable customers to further customize how tracing works, the application can set the `Telemetry.TracingProvider` field in the service client options, which will establish the tracer provider used by
 the service client.
 
-```c++
+```cpp
 auto tracerProvider(CreateOpenTelemetryProvider());
 auto provider(Azure::Core::Tracing::OpenTelemetry::OpenTelemetryProvider::Create(tracerProvider));
 
@@ -116,13 +116,13 @@ There are two steps needed to integrate Distributed Tracing with a Service Clien
 
 To add a new `DiagnosticTracingFactory` to the client, simply add the class as a member:
 
-```c++
+```cpp
 Azure::Core::Tracing::_internal::TracingContextFactory m_tracingFactory;
 ```
 
 And construct the new tracing factory in the service constructor:
 
-```c++
+```cpp
 explicit ServiceClient(ServiceClientOptions const& clientOptions = ServiceClientOptions{})
     : m_tracingFactory(clientOptions, "Azure.Core.OpenTelemetry.Test.Service",
       "azure-core-opentelemetry-test-service-cpp", PackageVersion::ToString())
@@ -139,7 +139,7 @@ And construct the new tracing factory in the service constructor:
 1. `Span::AddEvent(std::exception&)` - This registers the exception with the distributed tracing infrastructure.
 1. `Span::SetStatus` - This sets the status of the operation in the trace.
 
-```c++
+```cpp
 Azure::Response<std::string> ServiceMethod(
     std::string const&,
     Azure::Core::Context const& context = Azure::Core::Context{})
@@ -96,7 +96,7 @@ For detailed samples please review the code provided.
 ### GetSettings
 
 To get all the available settings present on the Keyvault instance, we will first create a client:
-```CPP
+```cpp
 auto tenantId = std::getenv("AZURE_TENANT_ID");
 auto clientId = std::getenv("AZURE_CLIENT_ID");
 auto clientSecret = std::getenv("AZURE_CLIENT_SECRET");
@@ -110,7 +110,7 @@ Please note that we are using the HSM URL, not the keyvault URL.
 
 To get the settings we will call the GetSettings API:
 
-```CPP
+```cpp
 // Get all settings
 SettingsListResult settingsList = settingsClient.GetSettings().Value;
 ```
@@ -119,14 +119,14 @@ To get the settings we will call the GetSettings API
 
 To get a specific setting we will call the GetSetting API, passing the setting name as a string parameter.
 
-```CPP
+```cpp
 Setting setting = settingsClient.GetSetting(settingsList.Value[0].Name).Value;
 ```
 
 ### UpdateSetting
 
 To update the value of any of the available settings, we will call the UpdateSettings API as follows:
-```CPP
+```cpp
 UpdateSettingOptions options;
 options.Value = <setting value>;
 
@@ -64,7 +64,7 @@ v12
 
 A `TokenCredential` abstract class (different API surface than v7.5) exists in the [Azure Core](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/core/azure-core) package that all libraries of the new Azure SDK family depend on, and can be used to construct Storage clients. Implementations of this class can be found separately in the [Azure Identity](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity) package.
 
-```C++
+```cpp
 BlobServiceClient serviceClient(serviceUrl, std::make_shared<Azure::Identity::ClientSecretCredential>(tenantId, clientId, clientSecret));
 ```
 
@@ -76,11 +76,11 @@ v7.5
 
 In general, SAS tokens can be provided on their own to be applied as needed, or as a complete, self-authenticating URL. The legacy library allowed providing a SAS through `storage_credentials` as well as constructing with a complete URL.
 
-```C++
+```cpp
 cloud_blob_client blob_client(storage_uri(blob_url), storage_credentials(sas_token));
 ```
 
-```C++
+```cpp
 cloud_blob_client blob_client(storage_uri(blob_url_with_sas));
 ```
 
@@ -88,7 +88,7 @@ v12
 
 The new library only supports constructing a client with a fully constructed SAS URI. Note that since client URIs are immutable once created, a new client instance with a new SAS must be created in order to rotate a SAS.
 
-```C++
+```cpp
 BlobClient blobClient(blobUrlWithSas);
 ```
 
@@ -98,19 +98,19 @@ The following code assumes you have acquired your connection string (you can do
 
 v7.5
 
-```C++
+```cpp
 cloud_storage_account storage_account = cloud_storage_account::parse(storage_connection_string);
 cloud_blob_client service_client = storage_account.create_cloud_blob_client();
 ```
 
 v12
-```C++
+```cpp
 BlobServiceClient serviceClient = BlobServiceClient::CreateFromConnectionString(connectionString);
 ```
 
 You can also directly get a blob client with your connection string, instead of going through a service and container client to get to your desired blob. You just need to provide the container and blob names alongside the connection string.
 
-```C++
+```cpp
 BlobClient blobClient = BlobClient::CreateFromConnectionString(connectionString, containerName, blobName);
 ```
 
@@ -121,12 +121,12 @@ Shared key authentication requires the URI to the storage endpoint, the storage
 Note that the URI to your storage account can generally be derived from the account name (though some exceptions exist), and so you can track only the account name and key. These examples will assume that is the case, though you can substitute your specific account URI if you do not follow this pattern.
 
 v7.5
-```C++
+```cpp
 cloud_blob_client blob_client(storage_uri(blob_service_url), storage_credentials(account_name, account_key));
 ```
 
 v12
-```C++
+```cpp
 auto credential = std::make_shared<StorageSharedKeyCredential>(accountName, accountKey);
 BlobServiceClient serviceClient(blobServiceUrl, credential);
 ```
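The note above says the account URI can generally be derived from the account name. A small helper makes that convention concrete; the `blob.core.windows.net` suffix applies to the global Azure cloud (sovereign clouds use different suffixes), and the helper name is ours, not an SDK API:

```cpp
#include <string>

// Build the default blob endpoint for an account in the global Azure cloud.
std::string BlobServiceUrlFromAccountName(const std::string& accountName) {
    return "https://" + accountName + ".blob.core.windows.net";
}
```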
@@ -169,20 +169,20 @@ The following table lists v7.5 classes and their v12 equivalents for quick refer
 ### Creating a Container
 
 v7.5
-```C++
+```cpp
 auto container_client = service_client.get_container_reference(container_name);
 container_client.create();
 ```
 
 v12
-```C++
+```cpp
 auto containerClient = serviceClient.GetBlobContainerClient(containerName);
 containerClient.Create();
 ```
 
 Or you can use the `BlobServiceClient.CreateBlobContainer()` method.
 
-```C++
+```cpp
 serviceClient.CreateBlobContainer(containerName);
 ```
 
@@ -191,13 +191,13 @@ serviceClient.CreateBlobContainer(containerName);
 #### Uploading from a file
 
 v7.5
-```C++
+```cpp
 cloud_block_blob block_blob_client = container_client.get_block_blob_reference(blob_name);
 block_blob_client.upload_from_file(local_file_path);
 ```
 
 v12
-```C++
+```cpp
 BlockBlobClient blockBlobClient = containerClient.GetBlockBlobClient(blobName);
 blockBlobClient.UploadFrom(localFilePath);
 ```
@@ -205,24 +205,24 @@ blockBlobClient.UploadFrom(localFilePath);
 #### Uploading from a stream
 
 v7.5
-```C++
+```cpp
 block_blob_client.upload_from_stream(stream);
 ```
 
 v12
-```C++
+```cpp
 blockBlobClient.Upload(stream);
 ```
 
 #### Uploading text
 
 v7.5
-```C++
+```cpp
 block_blob_client.upload_text("Hello Azure!");
 ```
 
 v12
-```C++
+```cpp
 uint8_t text[] = "Hello Azure!";
 blockBlobClient.UploadFrom(text, sizeof(text) - 1);
 ```
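The `sizeof(text) - 1` in the v12 snippet above is there because a string literal initializes the array with a trailing `'\0'` terminator, which should not be uploaded as blob content. A standalone check:

```cpp
#include <cstddef>
#include <cstdint>

// A string literal array includes the trailing NUL, so its sizeof is
// one larger than the number of visible characters.
constexpr uint8_t text[] = "Hello Azure!";
constexpr std::size_t contentLength = sizeof(text) - 1; // 12 characters, NUL excluded
static_assert(sizeof(text) == 13, "12 characters plus the terminator");
static_assert(contentLength == 12, "upload length excludes the terminator");
```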
@@ -232,13 +232,13 @@ blockBlobClient.UploadFrom(text, sizeof(text) - 1);
 #### Downloading to a file
 
 v7.5
-```C++
+```cpp
 auto blob_client = container_client.get_blob_reference(blob_name);
 blob_client.download_to_file(local_file_path);
 ```
 
 v12
-```C++
+```cpp
 auto blobClient = containerClient.GetBlobClient(blobName);
 blobClient.DownloadTo(localFilePath);
 ```
@@ -246,12 +246,12 @@ blobClient.DownloadTo(localFilePath);
 #### Downloading to a stream
 
 v7.5
-```C++
+```cpp
 blob_client.download_to_stream(stream);
 ```
 
 v12
-```C++
+```cpp
 auto response = blobClient.Download();
 BodyStream& stream = *response.Value.BodyStream;
 ```
@@ -259,12 +259,12 @@ BodyStream& stream = *response.Value.BodyStream;
 #### Downloading text
 
 v7.5
-```C++
+```cpp
 auto text = blob_client.download_text();
 ```
 
 v12
-```C++
+```cpp
 auto response = blobClient.Download();
 std::vector<uint8_t> blobContent = response.Value.BodyStream->ReadToEnd();
 std::string text(blobContent.begin(), blobContent.end());
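The iterator-range construction used above to turn downloaded bytes into text works for any `std::vector<uint8_t>`; a self-contained version with no SDK types involved:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Convert a byte buffer (as a BodyStream::ReadToEnd call would return)
// into a std::string using the iterator-range constructor.
std::string BytesToString(const std::vector<uint8_t>& bytes) {
    return std::string(bytes.begin(), bytes.end());
}
```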
@@ -275,7 +275,7 @@ std::string text(blobContent.begin(), blobContent.end());
 #### Flat Listing
 
 v7.5
-```C++
+```cpp
 for (auto iter = container_client.list_blobs(); iter != list_blob_item_iterator(); ++iter) {
     if (iter->is_blob()) {
         auto blob_client = iter->as_blob();
@@ -284,7 +284,7 @@ for (auto iter = container_client.list_blobs(); iter != list_blob_item_iterator(
 ```
 
 v12
-```C++
+```cpp
 for (auto blobPage = containerClient.ListBlobs(); blobPage.HasPage(); blobPage.MoveToNextPage()) {
     for (auto& blob : blobPage.Blobs) {
 
@@ -300,7 +300,7 @@ v7.5
 
 `list_blobs()` and `list_blobs_segmented()` that were used in a flat listing contain overloads with a boolean parameter `use_flat_blob_listing`, which results in a flat listing when `true`. Provide `false` to perform a hierarchical listing.
 
-```C++
+```cpp
 for (auto iter = container_client.list_blobs(prefix, false, blob_listing_details::none, 0, blob_request_options, operation_context)) {
     if (iter->is_blob()) {
         auto blob_client = iter->as_blob();
@@ -315,7 +315,7 @@ v12
 
 v12 has explicit methods for listing by hierarchy.
 
-```C++
+```cpp
 for (auto blobPage = containerClient.ListBlobsByHierarchy("/"); blobPage.HasPage(); blobPage.MoveToNextPage()) {
     for (auto& blob : blobPage.Blobs) {
 
@@ -336,7 +336,7 @@ v7.5 samples:
 
 The legacy SDK maintained a metadata cache, allowing you to modify metadata on the `cloud_blob` and invoke `upload_metadata()`. Calling `download_attributes()` beforehand refreshed the metadata cache to avoid undoing recent changes.
 
-```C++
+```cpp
 blob_client.download_attributes();
 blob_client.metadata()["foo"] = "bar";
 blob_client.upload_metadata();
@@ -344,7 +344,7 @@ blob_client.upload_metadata();
 
 The legacy SDK maintained internal state for blob content uploads. Calling `download_attributes()` beforehand refreshed the metadata cache to avoid undoing recent changes.
 
-```C++
+```cpp
 // download blob content. blob metadata is fetched and cached on download
 blob_client.download_to_file(local_file_path);
 
@@ -358,7 +358,7 @@ v12 samples:
 
 The modern SDK requires you to hold onto metadata and update it appropriately before sending off. You cannot just add a new key-value pair; you must update the collection and send the whole collection.
 
-```C++
+```cpp
 auto metadata = blobClient.GetProperties().Value.Metadata;
 metadata["foo"] = "bar";
 blobClient.SetMetadata(metadata);
@@ -366,7 +366,7 @@ blobClient.SetMetadata(metadata);
 
 Additionally, with blob content edits, if your blobs have metadata you need to get the metadata and re-upload with it, telling the service what metadata goes with this new blob state.
 
-```C++
+```cpp
 // download blob content and metadata
 auto response = blobClient.DownloadTo(localFilePath);
 auto metadata = response.Value.Metadata;
@@ -387,7 +387,7 @@ blobClient.UploadFrom(localFilePath, uploadOptions);
 v7.5 calculated blob content MD5 for validation on download by default, assuming there was a stored MD5 in the blob properties. Calculation and storage on upload was opt-in. Note that this value is not generated or validated by the service, and is only retained for the client to validate against.
 
 v7.5
-```C++
+```cpp
 blob_request_options options;
 options.set_store_blob_content_md5(false); // true to calculate content MD5 on upload and store property
 options.set_disable_content_md5_validation(false); // true to disable download content validation
@@ -396,7 +396,7 @@ options.set_disable_content_md5_validation(false); // true to disable download
 v12 does not have an automated mechanism for blob content validation. It must be done per-request by the user.
 
 v12
-```C++
+```cpp
 // upload with blob content hash property
 UploadBlockBlobOptions uploadOptions;
 uploadOptions.HttpHeaders.ContentHash.Algorithm = HashAlgorithm::Md5;
@@ -419,7 +419,7 @@ v7.5 provided transactional hashing on uploads and downloads through opt-in requ
 
 v7.5
 
-```C++
+```cpp
 blob_request_options options;
 options.set_use_transactional_md5(false); // true to use MD5 on all blob content transactions.
 options.set_use_transactional_crc64(false); // true to use CRC64 on all blob content transactions.
@@ -427,7 +427,7 @@ options.set_use_transactional_crc64(false); // true to use CRC64 on all blob co
 
 v12 does not currently provide this functionality. Users who manage their own individual upload and download HTTP requests can provide a precalculated MD5 on upload and access the MD5 in the response object. v12 currently offers no API to request a transactional CRC64.
 
-```C++
+```cpp
 // upload a block with transactional hash calculated by user
 StageBlockOptions stageBlockOptions;
 stageBlockOptions.TransactionalContentHash = ContentHash();
@@ -454,13 +454,13 @@ auto hashValue = response.Value.Details.HttpHeaders.ContentHash.Value;
 #### Retry policy
 
 v7.5
-```C++
+```cpp
 blob_request_options options;
 options.set_retry_policy(exponential_retry_policy(delta_backoff, max_attempts));
 ```
 
 v12
-```C++
+```cpp
 Blobs::BlobClientOptions options;
 // The only supported mode is exponential.
 options.Retry.RetryDelay = std::chrono::milliseconds(delta_backoff);
@@ -472,7 +472,7 @@ options.Retry.MaxRetries = maxAttempts;
 Unfortunately, the v12 SDK doesn't support an asynchronous interface. You could wrap the synchronous functions with an async framework like `std::async`, but note that I/O operations are still performed synchronously under the hood, so there's no performance gain with this approach.
 
 v7.5
-```C++
+```cpp
 auto task = blob_client.download_text_async().then([](utility::string_t blob_content) {
     std::wcout << "blob content:" << blob_content << std::endl;
 });
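Wrapping a synchronous call in `std::async`, as suggested above, can be shown without the SDK; `DownloadText` below is a hypothetical synchronous stand-in for a blocking client call:

```cpp
#include <future>
#include <string>

// Hypothetical synchronous operation standing in for a blocking SDK call.
std::string DownloadText() { return "blob content"; }

// Run the synchronous call on another thread. The I/O itself is still
// performed synchronously under the hood, exactly as noted above.
std::future<std::string> DownloadTextAsync() {
    return std::async(std::launch::async, []() { return DownloadText(); });
}
```

The caller gets a `std::future` to wait on, but no extra throughput: one thread still blocks for the duration of the call.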
@@ -481,7 +481,7 @@ task.wait();
 ```
 
 v12
-```C++
+```cpp
 auto task = std::async([blobClient]() {
     auto response = blobClient.Download();
     std::vector<uint8_t> blobContent = response.Value.BodyStream->ReadToEnd();
@@ -82,7 +82,7 @@ Client Options | [Accessing the response](https://github.com/Azure/azure-sdk-for
 
 ### Uploading a blob
 
-```C++
+```cpp
 const std::string connectionString = "<connection_string>";
 const std::string containerName = "sample-container";
 const std::string blobName = "sample-blob";
@@ -99,7 +99,7 @@ blobClinet.UploadFrom(bufferPtr, bufferLength);
 
 ### Downloading a blob
 
-```C++
+```cpp
 // download to local file
 blobClient.DownloadTo(localFilePath);
 // or download to memory buffer
@@ -108,7 +108,7 @@ blobClinet.DownloadTo(bufferPtr, bufferLength);
 
 ### Enumerating blobs
 
-```C++
+```cpp
 for (auto blobPage = containerClient.ListBlobs(); blobPage.HasPage(); blobPage.MoveToNextPage()) {
     for (auto& blob : blobPage.Blobs) {
         // Below is what you want to do with each blob
@@ -123,7 +123,7 @@ All Blob service operations will throw a [StorageException](https://github.com/A
 on failure with helpful [ErrorCode](https://learn.microsoft.com/rest/api/storageservices/blob-service-error-codes)s.
 Many of these errors are recoverable.
 
-```C++
+```cpp
 try
 {
     containerClient.Delete();
@@ -97,7 +97,7 @@ Client Options | [Accessing the response](https://github.com/Azure/azure-sdk-for
 
 ### Appending Data to a DataLake File
 
-```C++
+```cpp
 const std::string connectionString = "<connection_string>";
 const std::string fileSystemName = "sample-filesystem";
 const std::string directoryName = "sample-directory";
@@ -125,12 +125,12 @@ fileClient.Append(fileStream, 0);
 fileClient.Flush(fileStream.Length());
 ```
 ### Reading Data from a DataLake File
-```C++
+```cpp
 Response<DownloadFileResult> fileContents = fileClient.Download();
 ```
 
 ### Enumerating DataLake Paths
-```C++
+```cpp
 for (auto pathPage = client.ListPaths(false); pathPage.HasPage(); pathPage.MoveToNextPage())
 {
     for (auto& path : pathPage.Paths)
@@ -147,7 +147,7 @@ All File DataLake service operations will throw a [StorageException](https://git
 on failure with helpful [ErrorCode](https://learn.microsoft.com/rest/api/storageservices/blob-service-error-codes)s.
 Many of these errors are recoverable.
 
-```C++
+```cpp
 try
 {
     fileSystemClient.Delete();
@@ -73,7 +73,7 @@ Client Options | [Accessing the response](https://github.com/Azure/azure-sdk-for
 
 ### Create a share and upload a file
 
-```C++
+```cpp
 const std::string shareName = "sample-share";
 const std::string directoryName = "sample-directory";
 const std::string fileName = "sample-file";
@@ -97,7 +97,7 @@ fileClient.UploadFrom(bufferPtr, bufferLength);
 
 ### Download a file
 
-```C++
+```cpp
 // download to local file
 fileClient.DownloadTo(localFilePath);
 // or download to memory buffer
@@ -106,7 +106,7 @@ fileClient.DownloadTo(bufferPtr, bufferLength);
 
 ### Traverse a share
 
-```C++
+```cpp
 std::vector<ShareDirectoryClient> remaining;
 remaining.push_back(shareClient.GetRootDirectoryClient());
 while (remaining.size() > 0)
@@ -135,7 +135,7 @@ All Azure Storage File Shares service operations will throw a [StorageException]
 on failure with helpful [ErrorCode](https://learn.microsoft.com/rest/api/storageservices/file-service-error-codes)s.
 Many of these errors are recoverable.
 
-```C++
+```cpp
 try
 {
     shareClient.Delete();
@@ -72,7 +72,7 @@ Client Options | [Accessing the response](https://github.com/Azure/azure-sdk-for
 
 ### Send messages
 
-```C++
+```cpp
 const std::string connectionString = "<connection_string>";
 const std::string queueName = "sample-queue";
 
@@ -86,7 +86,7 @@ queueClient.EnqueueMessage("Hello, Azure2!");
 queueClient.EnqueueMessage("Hello, Azure3!");
 ```
 ### Receive messages
-```C++
+```cpp
 ReceiveMessagesOptions receiveOptions;
 receiveOptions.MaxMessages = 3;
 auto receiveMessagesResult = queueClient.ReceiveMessages(receiveOptions).Value;
@@ -103,7 +103,7 @@ All Azure Storage Queue service operations will throw a [StorageException](http
 on failure with helpful [ErrorCode](https://learn.microsoft.com/rest/api/storageservices/queue-service-error-codes)s.
 Many of these errors are recoverable.
 
-```C++
+```cpp
 try
 {
     queueClient.Delete();
@@ -43,7 +43,7 @@ The inner loop gets called for every paged result and doesn't do I/O.
 
 Below is an example of listing all blobs in a blob container.
 
-```C++
+```cpp
 for (auto page = blobContainerClient.ListBlobs(); page.HasPage(); page.MoveToNextPage()) {
     for (auto& blob : page.Blobs) {
         std::cout << blob.Name << std::endl;
@@ -53,7 +53,7 @@ for (auto page = blobContainerClient.ListBlobs(); page.HasPage(); page.MoveToNex
 
 Sometimes a paged result may contain multiple collections, and you may want to iterate over all of them.
 
-```C++
+```cpp
 for (auto page = directoryClient.ListFilesAndDirectories(); page.HasPage(); page.MoveToNextPage())
 {
     for (const auto& d : page.Directories)
@@ -72,7 +72,7 @@ for (auto page = directoryClient.ListFilesAndDirectories(); page.HasPage(); page
 Yes, each set of client options takes an `ApiVersion` as an optional parameter, with which you can specify the API version used for all the HTTP requests from this client.
 Clients spawned from another client instance will inherit the settings.
 
-```C++
+```cpp
 // serviceClient sends HTTP requests with default API-version, which will change as version evolves.
 auto serviceClient = BlobServiceClient::CreateFromConnectionString(GetConnectionString());
 
@@ -94,7 +94,7 @@ Furthermore, this scenario is not covered by testing, although most of the APIs
 
 We recommend you set an application ID with the code below, so that the application, SDK, and platform from which a request was sent can be identified. The information could be useful for troubleshooting and telemetry purposes.
 
-```C++
+```cpp
 BlobClientOptions clientOptions;
 clientOptions.Telemetry.ApplicationId = "SomeApplication v1.2.3";
 
@@ -109,7 +109,7 @@ This applies to both input variables and output.
 If your code runs in an environment where the default locale and encoding is not UTF-8, you should encode before passing variables into the SDK and decode variables returned from the SDK.
 
 In the below code snippet, we'd like to create a blob named <code>olá</code>.
-```C++
+```cpp
 // If the blob client is created from a container client, the blob name should be UTF-8 encoded.
 auto blobClient = containerClient.GetBlobClient("ol\xC3\xA1");
 // If the blob client is built from URL, it should be URL-encoded
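The escape sequence `\xC3\xA1` in the snippet above is the two-byte UTF-8 encoding of `á`. The following makes the byte-level view explicit:

```cpp
#include <string>

// "olá" in UTF-8: 'o', 'l', then the two-byte sequence 0xC3 0xA1 for 'á',
// so the string holds 4 bytes even though it shows 3 characters.
const std::string blobName = "ol\xC3\xA1";
```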
@@ -130,7 +130,7 @@ for (auto page = blobContainerClient.ListBlobs(); page.HasPage(); page.MoveToNex
 
 You can check whether a blob exists or not by writing a convenience method on top of getting blob properties, as follows:
 
-```C++
+```cpp
 bool BlobExists(const Azure::Storage::Blobs::BlobClient& client) {
     try {
         client.GetProperties();
@@ -152,7 +152,7 @@ In this case, it's more recommended to:
 
 1. Use `CreateIfNotExists()` and `DeleteIfExists()` functions whenever possible. These functions internally use access conditions and can help you catch unexpected exceptions caused by resending PUT/DELETE requests on network errors.
 1. Use access conditions for other operations. The code below only sends one HTTP request; check-and-write is performed atomically. It only succeeds if the blob doesn't exist.
-```C++
+```cpp
 UploadBlockBlobOptions options;
 options.AccessConditions.IfNoneMatch = Azure::ETag::Any();
 blobClient.Upload(stream, options);
@@ -174,7 +174,7 @@ This one is suitable in most cases. You can expect higher throughput because the
 
 Unfortunately, this SDK doesn't provide a convenient way to upload many blobs or directory contents (files and sub-directories) with just one function call.
 You have to create multiple threads, traverse the directories yourself, and upload blobs one by one in each thread to speed up the transfer. Below is a skeleton example.
-```C++
+```cpp
 const std::vector<std::string> paths; // Files to be uploaded
 std::atomic<size_t> curr{0};
 auto upload_func = [&]() {
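The skeleton above (a shared atomic index claimed by several worker threads) can be completed into a runnable shape. The body of the loop below only simulates the upload; in real code it would construct a blob client for `paths[i]` and call `UploadFrom`:

```cpp
#include <atomic>
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

// Process every path exactly once across threadCount workers, with each
// worker claiming items through a shared atomic counter. Returns the
// number of items processed (a real uploader might collect errors instead).
std::size_t UploadAll(const std::vector<std::string>& paths, unsigned threadCount) {
    std::atomic<std::size_t> curr{0};
    std::atomic<std::size_t> processed{0};
    auto upload_func = [&]() {
        while (true) {
            std::size_t i = curr.fetch_add(1); // claim the next file
            if (i >= paths.size()) break;
            // A real implementation would upload paths[i] here.
            processed.fetch_add(1);
        }
    };
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadCount; ++t) workers.emplace_back(upload_func);
    for (auto& w : workers) w.join();
    return processed.load();
}
```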
@@ -224,7 +224,7 @@ Make sure you calculate the checksum as early as possible so that potential corr
 This functionality also works for download operations.
 Below is a code sample to use this feature.
 
-```C++
+```cpp
 // upload data with pre-calculated checksum
 Blobs::UploadBlockBlobOptions options;
 auto checksum = ContentHash();
@@ -296,7 +296,7 @@ Below is an example of adding a custom header into each HTTP request.
 The header value is static and doesn't change over time, so we make it a per-operation policy.
 If you want to add some time-variant headers like authentication, you should use a per-retry policy.
 
-```C++
+```cpp
 class NewPolicy final : public Azure::Core::Http::Policies::HttpPolicy {
 public:
     ~NewPolicy() override {}
@@ -326,7 +326,7 @@ options.PerRetryPolicies.push_back(std::make_unique<NewPolicy>());
 Requests that fail due to network errors or HTTP status codes 408, 500, 502, 503, or 504 will be retried at most 3 times (4 attempts in total) using exponential backoff with jitter.
 These parameters can be customized with `RetryOptions`. Below is an example.
 
-```C++
+```cpp
 BlobClientOptions options;
 options.Retry.RetryDelay = std::chrono::milliseconds(800);
 options.Retry.MaxRetryDelay = std::chrono::seconds(60);
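The retry schedule described above (exponential backoff with jitter, capped by a maximum delay) can be sketched as a pure function. The doubling-plus-jitter formula below is a common shape for such policies, not the SDK's exact internal algorithm:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdlib>

// Delay before the attempt-th retry: base * 2^(attempt-1), scaled by a
// random jitter factor in [0.8, 1.3), and capped at maxDelay.
std::chrono::milliseconds BackoffDelay(
    int attempt,
    std::chrono::milliseconds base = std::chrono::milliseconds(800),
    std::chrono::milliseconds maxDelay = std::chrono::seconds(60)) {
    double exponential = static_cast<double>(base.count()) * (1LL << (attempt - 1));
    double jitter = 0.8 + 0.5 * (std::rand() / (RAND_MAX + 1.0));
    auto delay = static_cast<long long>(exponential * jitter);
    return std::min(std::chrono::milliseconds(delay), maxDelay);
}
```

Jitter spreads out retries from many clients so they don't all hammer the service at the same instant after a shared failure.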
@@ -346,7 +346,7 @@ Here are a few things you can do to minimize the impact of this kind of error.
 1. Increase the retry count.
 1. Identify the throttling type and reduce the traffic sent from the client side.
 You can check the exception thrown from storage function calls with the code below. The error message will indicate which scalability target was exceeded.
-```C++
+```cpp
 try
 {
     // storage function goes here
@@ -383,7 +383,7 @@ If you're using SAS authentication, you should:
 1. A SAS token has its own scope; for example, it may be scoped to a storage account, a container, or a blob/file. Make sure you don't access a resource outside the SAS token's scope.
 1. Check the message in the exception.
 You can print the information in the exception with the code below.
-```C++
+```cpp
 try
 {
     // storage function goes here