<a id="overview">Azure Storage as a Backup Target Overview</a>
-----------------------------
Cloud adoption has grown in recent years, most notably in enterprises. As enterprises embark upon a cloud adoption journey, the journey typically starts small. Many customers find themselves in an experimentation mode for a set period, where the business tries to understand how a chosen cloud vendor works. During this experimentation phase, it becomes important to build and learn quickly from both successes and failures. Enterprises often ask if Microsoft has a framework to follow as a “getting started with Azure” exercise. Microsoft now offers a meaningful approach to enterprise digital transformation by way of the [Cloud Adoption Framework](https://docs.microsoft.com/en-us/azure/architecture/cloud-adoption/) (CAF). We also provide a step-by-step [Azure Setup Guide](https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-setup-guide/) for those new to Azure, to help you get up and running quickly and securely.
In working with customers as they onboard to Azure, a good first workload is cloud-based backup as a data protection strategy. Even customers with experience operating a cloud deployment will continue to benefit by protecting their remaining on-premises applications with cloud-based copies. Backups provide an extra insurance policy for business continuity and disaster recovery, and lend additional protection against cyber-attacks, human error, natural disasters, and more.

Historically, however, backups required a variety of LTO tapes and tape libraries, large disk arrays for disk-based backup, third-party software and maintenance costs, and possibly an investment in WAN accelerators as backups moved to a separate co-lo. Cloud computing changes all of these traditional approaches to data protection, backup, and archive. Data can be stored in the cloud rather than locally, on a shared drive, or on backup tapes. Additionally, cloud-based backups can augment or replace existing on-premises solutions, and moving backups to the cloud typically reduces overall backup costs.

The key is to build out and document enterprise-wide recovery point objectives (RPOs) and recovery time objectives (RTOs) so that the right backup tool or solution is selected based upon business need. RPO is the age of data that must be recovered from backup storage or a disaster recovery solution for normal operations to resume after a computer, system, or network goes down as a result of a failure or disaster. RTO is the maximum tolerable length of time that systems can be down after a failure or disaster occurs. Even though cloud adoption and migration can simplify certain tasks and responsibilities, the same considerations around durability, redundancy, and resiliency should be brought to the cloud.
This documentation series is aimed at helping customers through the process of assessing infrastructure requirements, choosing the right Azure Storage solution, securing the Azure Storage resources, and integrating common backup and archival solutions with Microsoft Azure.
Content Outline
----------------------------
1 - [Networking](./2-az-stor-networking.md)
2 - [Choosing an Azure Storage tier and creating your first account](./3-az-stor-acct-setup-opts.md)
3 - [Controlling access to your Azure Storage](./4-az-stor-access-mgmt.md)
4 - [Additional Storage Security](./5-az-stor-acct-security.md)
5 - [Protecting against accidental or malicious deletion](./6-az-stor-arch-comp-softdelete.md)
6 - [Managing Costs](./7-az-stor-data-prot-cost-mgmt.md)
7 - [Integrating Backup and Archive products](./8-ptr-integ-first-party.md)
Partner Integration
----------------------------
Know the Azure basics? Want to jump right to the "how-to" of adding Azure Storage resources to your existing or shiny new solution? Then these documents are for you!
* Cohesity Data Platform
* Commvault
* Dell Isilon CloudPools
* Dell CloudBoost
* Dell Data Domain Cloud Tier
* IBM Spectrum Protect
* Nasuni
* NetApp Cloud Tiering
* NetApp Cloud Volumes ONTAP
* NetApp StorageGRID
* Nutanix
* Qumulo
* Rubrik Cloud Data Management
* Scality
* Veeam
* Veritas NetBackup
* Veritas Backup Exec

<a id="network">Network Sizing</a>
-----------------------------
What about the network?
----------------------------
Whether leveraging cloud resources to run production, test, and development workloads, or as a backup target and recovery site, it is important to understand your bandwidth needs for initial backup seeding and for ongoing day-to-day transfers.
You will require ample network capacity to support daily data transfers within the required transfer window without impacting production applications. This section outlines the tools and techniques available to assess your network needs, starting with a quick way to estimate the bandwidth a given transfer window requires.
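As a rough sizing aid, the sketch below estimates the sustained throughput needed to move a given backup set within a window. The data sizes and the 25% protocol-overhead pad are illustrative assumptions, not measurements:

```python
# Back-of-envelope estimate of sustained bandwidth for a backup window.

def required_mbps(data_gb: float, window_hours: float, overhead: float = 1.25) -> float:
    """Sustained Mbps needed to move data_gb within window_hours.

    overhead pads for protocol overhead and retransmits (assumed 25%).
    """
    megabits = data_gb * 8 * 1000  # GB -> megabits (decimal units)
    return megabits * overhead / (window_hours * 3600)

# Hypothetical example: a 2 TB initial seed over a 48-hour weekend versus a
# 500 GB nightly delta inside an 8-hour window.
print(f"Initial seed:  {required_mbps(2000, 48):.0f} Mbps")   # ~116 Mbps
print(f"Nightly delta: {required_mbps(500, 8):.0f} Mbps")     # ~174 Mbps
```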
How can you understand how much bandwidth you will need?
----------------------------
1) Reports from your backup software.
a. In each of the partner-specific pages in the Backup and Archive workload section, we will highlight the reports available to help assess your change rate.
2) Backup-software-independent assessment and reporting tools such as:
a. [MiTrend](https://mitrend.com/)
b. [Aptare](https://www.veritas.com/insights/aptare-it-analytics)
How will I know how much headroom I have with my current Internet connection?
----------------------------
1) Are you an existing Azure ExpressRoute customer? View your [circuit usage](https://docs.microsoft.com/en-us/azure/expressroute/expressroute-monitoring-metrics-alerts#circuits-metrics).
2) Contact your ISP. They should have reports to share with you illustrating your existing daily and monthly utilization.
3) There are several tools that can measure utilization by monitoring your network traffic at the router/switch level, including:
a. [Solarwinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS)
b. [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring)
c. [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/index.html)
d. [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring)

<a id="accountsetup">Account Setup</a>
-----------------------------
All Azure cloud-based backups require a storage account to act as an endpoint in the cloud. Setting up a General-Purpose v2 storage account can be achieved via one of the following methods (a Python sketch follows the list):
1) [Azure Portal](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=azure-portal)
2) [PowerShell](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=azure-powershell)
3) [Azure CLI](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=azure-cli)
4) [Azure Resource Manager Template](https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account?tabs=template)
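For those scripting account creation, here is a minimal sketch using the azure-identity and azure-mgmt-storage Python packages. The subscription, resource group, account name, and region are placeholders, and `begin_create` reflects recent SDK versions (check the version you have installed):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A General-Purpose v2 account suited to backup targets: geo-redundant
# (Standard_GRS) with Cool as the default access tier.
poller = client.storage_accounts.begin_create(
    "backup-rg",          # resource group (placeholder)
    "mybackupaccount",    # must be globally unique
    StorageAccountCreateParameters(
        location="eastus2",
        kind="StorageV2",
        sku=Sku(name="Standard_GRS"),
        access_tier="Cool",
    ),
)
account = poller.result()
print(account.primary_endpoints.blob)
```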
Azure Blob Storage Options
--------------------------
By building out a backup strategy using Azure, customers make use of [Azure Blob Storage](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction). Azure Blob storage is Microsoft's object storage solution. Blob storage is optimized for storing massive amounts of unstructured data, which is data that does not adhere to any particular data model or definition. Additionally, Azure Storage is durable, highly available, secure, and scalable. Microsoft's platform offers the flexibility to select the right storage for the right workload in order to provide resiliency and protection. For a deeper and more comprehensive overview of the platform, please refer to the documentation on data redundancy (which highlights all SKUs inside Azure Blob Storage) and the types of storage accounts (which covers the various supported storage account types).
As enterprises select a strategy for backup storage, there are a few different options related to backup targets in Azure. All data in a storage account is replicated to ensure durability and high availability. Azure Blob Storage copies data based upon a replication scheme to safeguard against planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Customers can choose to replicate data within the same data center, across availability zones, or across geographically separated regions.
As a quick reference, Azure backup storage falls into the following redundancy patterns, with each redundancy option being a separate SKU to select upon storage account creation (the SKU name strings used by ARM and the SDKs are listed after this list):
1) [Locally Redundant Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-lrs) (LRS) – Provides object durability by replicating data to a storage cluster within a single datacenter. LRS is the lowest-cost replication SKU and offers the least durability compared to other SKUs. If a datacenter disaster occurs, all replicas may be lost or unrecoverable.
<br>Backup Storage SKU - Standard Locally Redundant Storage
2) [Zone Redundant Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-zrs) (ZRS) – Replicates data synchronously across three storage clusters in a single region. Each storage cluster is physically separated from the others and sits in its own [Azure Availability Zone](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview). Each availability zone – and the ZRS cluster within it – is autonomous, with separate utilities and networking. A write request to a ZRS storage account returns successfully only after the data is written to all replicas across the three clusters.
<br>Backup Storage SKU - Standard Zone Redundant Storage
3) [Geo Redundant Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-grs) (GRS) – Replicates data asynchronously to a secondary region that is hundreds of miles away from the primary region. Data in the secondary region is only available for reads if Microsoft initiates a failover from the primary to the secondary region.
<br>Backup Storage SKU - Standard Geo Redundant Storage
4) [Geo Zone Redundant Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-gzrs) (GZRS) - Data in a GZRS storage account is replicated across three [Azure Availability Zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview) in the primary region and is also replicated to a secondary geographic region for protection from regional disasters.
<br>Backup Storage SKU - Standard Geo Zone Redundant Storage
5) [Read-Access Geo Redundant Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-grs?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#read-access-geo-redundant-storage) (RA-GRS) - Maximizes availability of a single storage account by providing read-only access to the data in the secondary location, in addition to geo-replication across two regions.
<br>Backup Storage SKU - Standard Read-Access Geo Redundant Storage
6) [Read-Access Geo Zone Redundant Storage](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-gzrs?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#use-ra-gzrs-for-high-availability) (RA-GZRS) - When enabled, data can be read from the secondary endpoint as well as from the primary endpoint for a given storage account. Applications should read from and write to the primary endpoint, but switch to the secondary endpoint in the event the primary region becomes unavailable.
<br>Backup Storage SKU - Standard Read-Access Geo Zone Redundant Storage
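When creating accounts programmatically, the friendly names above map to these SKU name strings, as accepted by ARM templates, the CLI, and the SDKs:

```python
# Redundancy option -> SKU name string used by ARM templates, CLI, and SDKs.
REDUNDANCY_SKUS = {
    "LRS": "Standard_LRS",
    "ZRS": "Standard_ZRS",
    "GRS": "Standard_GRS",
    "GZRS": "Standard_GZRS",
    "RA-GRS": "Standard_RAGRS",
    "RA-GZRS": "Standard_RAGZRS",
}
```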
Lastly, there are two tiers of Azure Blob Storage for backups that provide a more comprehensive data tiering strategy:
1) [Cool Tier](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers#cool-access-tier)
2) [Archive Tier](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers#archive-access-tier)
Most customers look at cool and archive as the cost-efficient tiers for data protection and archive. The tier not covered here is the hot tier, which typically has higher storage costs than cool and archive but the lowest access costs. Please refer to the following [documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers) for a deeper analysis of each Azure Blob Storage tier.
There are a few caveats related to storage tiering for backups in Azure:
1) For tiering, the storage account kind must be GPv2.
2) Only the hot and cool access tiers can be set at the account level, where they are inherited by all objects underneath the storage account.
3) The archive access tier is not available at the account level, only at the individual blob level.
4) Cool and archive tiers can be set at the blob level (see the sketch after this list).
5) Data in the cool access tier can tolerate slightly lower availability, but still requires high durability and retrieval latency and throughput characteristics similar to hot data.
6) For cool data, a slightly lower availability service-level agreement (SLA) and higher access costs compared to hot data are acceptable trade-offs for lower storage costs.
7) Archive storage stores data offline and offers the lowest storage costs but also the highest data retrieval and access costs.
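As an illustration, an individual blob's tier can be set directly with the azure-storage-blob Python package (v12+); the connection string, container, and blob names below are placeholders:

```python
from azure.storage.blob import BlobClient

# Placeholders: supply your own connection string, container, and blob name.
blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="nightly.bak"
)

# Move this backup blob to the Archive tier; "Hot" and "Cool" are the other
# valid values for block blobs.
blob.set_standard_blob_tier("Archive")
```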

<a id="storagesecurity">Azure Storage Account Security</a>
----------------------------------------------------------
Azure Storage provides a comprehensive set of security capabilities that enable customers to secure backup blobs.
<a id="encryption">***Encryption***</a>
Encryption protects data and helps enterprises meet organizational security and/or compliance requirements. Azure Storage automatically encrypts all data being persisted in the cloud. Data in Azure Storage is encrypted and decrypted transparently using [256-bit AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) (one of the strongest block ciphers available) and is also FIPS 140-2 compliant.
<a id="encryptionmethods">***Encryption Methods***</a>
- [Encryption in Transit](https://docs.microsoft.com/en-us/azure/security/fundamentals/storage-overview#encryption-in-transit) – Provides a way of encrypting data while it is being transmitted across networks.
- [Transport-Level Encryption](https://docs.microsoft.com/en-us/azure/storage/common/storage-security-guide#transport-level-encryption--using-https) – Enforcing HTTPS as the preferred protocol for data transfer.
- [Client-Side Encryption (CSE)](https://docs.microsoft.com/en-us/azure/storage/common/storage-security-guide#using-client-side-encryption-to-secure-data-that-you-send-to-storage) – Encrypting data before being transferred into Azure Storage. Whenever data needs retrieval, the data is decrypted after being received on the client side.
- [Encryption at Rest](https://docs.microsoft.com/en-us/azure/security/fundamentals/storage-overview#encryption-at-rest) – Provides a way of encrypting unstructured data stored in storage accounts.
- [Storage Service Encryption (SSE)](https://docs.microsoft.com/en-us/azure/storage/common/storage-security-guide#encryption-at-rest) – Azure Storage encryption is enabled for all new and existing storage accounts and cannot be disabled. Storage accounts are encrypted regardless of performance tier or redundancy option, and all copies of a storage account are fully encrypted. Additionally, all Azure Storage resources are encrypted (blobs, disks, files, queues, and tables), as is all object metadata. Encryption does not affect Azure Storage performance, and there is no additional cost for Azure Storage encryption.
- [Client Side Encryption (CSE)](https://docs.microsoft.com/en-us/azure/storage/common/storage-security-guide#encryption-at-rest) – As mentioned above, client-side encryption protects data both before it is transferred and while it is at rest. CSE involves encrypting data before it is transferred into Azure Storage; whenever data needs retrieval, it is decrypted after being received on the client side.
- [Customer Managed Keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-encryption-keys-portal) – Azure Storage supports encryption at rest with either Microsoft-managed keys or customer-managed keys. Customer-managed keys offer the flexibility to create, rotate, disable, and revoke key access; a configuration sketch follows below. Customers use Azure Key Vault to manage and audit keys, plus monitor usage. The storage account and corresponding Key Vault must be in the same region but can be in different subscriptions.
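As a sketch of pointing a storage account at a customer-managed key with the azure-mgmt-storage Python package (the vault URI, key name, and account names are placeholders, and the account additionally needs a managed identity with access to the vault, which is not shown here):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    Encryption, KeySource, KeyVaultProperties, StorageAccountUpdateParameters,
)

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Switch encryption at rest from Microsoft-managed keys to a key the
# customer controls in Azure Key Vault.
client.storage_accounts.update(
    "backup-rg",
    "mybackupaccount",
    StorageAccountUpdateParameters(
        encryption=Encryption(
            key_source=KeySource.MICROSOFT_KEYVAULT,
            key_vault_properties=KeyVaultProperties(
                key_name="backup-cmk",  # placeholder key name
                key_vault_uri="https://contoso-vault.vault.azure.net",
            ),
        )
    ),
)
```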
<a id="azstorfirewall">***Azure Storage Firewalls***</a>
Azure Storage provides a layered security model. By using this model, customers can secure storage accounts, so access is limited to a specific subset of networks. When network rules are configured, only backup applications requesting data over the specified set of networks can access a storage account. Customers can limit access to storage accounts by requests originating from specified IP addresses, IP ranges, or from a list of subnets in Azure Virtual Networks. To get started, please refer to the following [documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-network-security#grant-access-from-a-virtual-network).
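A sketch of locking an account down to a single subnet and an on-premises IP range with azure-mgmt-storage; all resource names are placeholders, and the subnet must already have the Microsoft.Storage service endpoint enabled:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    DefaultAction, IPRule, NetworkRuleSet, StorageAccountUpdateParameters,
    VirtualNetworkRule,
)

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/backup-rg"
    "/providers/Microsoft.Network/virtualNetworks/backup-vnet"
    "/subnets/backup-subnet"
)

# Deny everything by default, then allow the backup subnet and an
# on-premises IP range (203.0.113.0/24 is a documentation range).
rules = NetworkRuleSet(
    default_action=DefaultAction.DENY,
    virtual_network_rules=[
        VirtualNetworkRule(virtual_network_resource_id=subnet_id)
    ],
    ip_rules=[IPRule(ip_address_or_range="203.0.113.0/24")],
)
client.storage_accounts.update(
    "backup-rg",
    "mybackupaccount",
    StorageAccountUpdateParameters(network_rule_set=rules),
)
```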
<a id="serviceendpoints">***Service Endpoints***</a>
By using [Service Endpoints](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview), customers can extend their virtual network private address space in Azure (along with the identity of the VNet) to Azure services over a direct connection. Endpoints allow an ability to secure critical Azure service resources to only the virtual networks in Azure.
*Key Benefits:*
1) Improved Security for Azure Resources - Service endpoints provide the ability to secure Azure service resources to a specific virtual network by extending the VNet identity in Azure to the service. Once service endpoints are enabled within the virtual network, customers can secure Azure resources to a given VNet by adding a virtual network rule to deployed resources. Implementing the rule removes public internet access to resources, allowing traffic only from the VNet.
2) Optimal Routing - Service Endpoints always take service traffic directly from a VNet to the service on the Microsoft Azure backbone network, removing the need for a [User Defined Route (UDR)](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview). Keeping traffic on the Azure backbone network allows customers to audit and monitor outbound internet traffic originating from a VNet in Azure without impacting service traffic.
3) Simple Set-Up - Customers do not need reserved public IP addresses in VNets to secure Azure resources through an IP firewall. There are no NAT or gateway devices required to set up Service Endpoints. Service endpoints are configured through a simple click on a subnet, and there is no additional overhead in maintaining the endpoints.
Service Endpoints have some [limitations](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview#limitations) to take into consideration for deployment. There is no additional charge for using Service Endpoints, and no limit on the total number of Service Endpoints in a virtual network.
<a id="serviceendpointpolicy">***Virtual Network Service Endpoint Policies***</a>
Virtual Network Service Endpoint policies provide customers the ability to filter virtual network traffic to Azure services, allowing only specific Azure service resources over Service Endpoints. The policies provide granular access control for virtual network traffic to Azure services (including Azure Storage).
*Key Benefits:*
1) Improved Security for Azure Resources - [Azure service tags for network security groups](https://aka.ms/servicetags) allow customers to restrict virtual network outbound traffic to specific Azure services. With Service Endpoint Policies, customers can now restrict virtual network outbound access to only specific Azure resources. This capability provides even more granular security control for protecting data accessed within any given VNet.
2) Scalable, Highly Available Policies to Filter Azure Service Traffic - Endpoint policies provide a horizontally scalable, highly available solution to filter Azure service traffic from virtual networks over Service Endpoints. No additional overhead is required to maintain central network appliances for this type of traffic within a VNet.
<a id="privatelink">***Azure Private Link***</a>
[Azure Private Link](https://docs.microsoft.com/en-us/azure/private-link/private-link-overview) enables access to Azure PaaS services, including Azure Storage, over a [private endpoint](https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview) inside a virtual network. Traffic between a VNet and the Azure service traverses over the Microsoft backbone network, eliminating exposure from the public internet.
*Key Benefits:*
1) Privately Access Services on the Azure Platform - Customers can connect a virtual network to services running in Azure privately, without needing a public IP address at the source or destination. The Private Link platform will handle connectivity between any consumer and services over the Azure backbone network.
2) On-Premises and Peered Networks - Access services running in Azure from on-premises over ExpressRoute private peering or VPN tunnels and peered virtual networks using Private Endpoints. There is no need to set up [Microsoft Peering](https://docs.microsoft.com/en-us/azure/expressroute/expressroute-circuit-peerings#microsoftpeering) or traverse the internet to reach an Azure service. This ability provides a secure way to migrate workloads to Azure.
3) Protection Against Data Exfiltration - With Azure Private Link, the private endpoint in the VNet is mapped to a specific instance of the PaaS resource as opposed to the entire service. By using the private endpoint, consumers can only connect to the specific resource and not to any other resource in the workload or service.
4) Global Reach - Connect privately to services running in other regions. The consumer's virtual network could be in region A and connect to services behind Private Link in region B.
<a id="generalsecurity">***General Storage Account Security***</a>
To read more about Storage Account security in Azure, please refer to the following [documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-security-guide). Reading through this documentation will provide a broader and deeper understanding of how to implement security for an entire data estate in Azure.

Azure Storage Access Management
-------------------------------
<a id="perms">***Permissions, Access, Authorization***</a>
Support for different authentication flows depends upon each third-party backup partner's implementation. If selecting a third party for Azure-based backups, please consult the documentation referenced in this article or contact the vendor for detailed configuration and buildout.
Permissions, access, and authorization in Azure allow for granular control over deployed resources. The recommended strategy for authorization to Azure Blob Storage accounts is to lean on [Azure Active Directory](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad). Related to access, the recommended approach is to grant [RBAC access using Azure Active Directory](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal). Azure Blob Storage offers a few built-in roles that work well for the different types of access permissions related to backup workloads. Enterprises can adopt these roles by examining and following the links below:
<a id="builtin">***Built-In Roles***</a>
1) [Storage Account Contributor](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-account-contributor)
2) [Storage Account Key Operator Service Role](https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#storage-account-key-operator-service-role)
<a id="howtoassign">***How-To Assign***</a>
1) [Azure Portal](https://docs.microsoft.com/en-us/azure/role-based-access-control/quickstart-assign-role-user-portal)
2) [ARM Template](https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-role-assignments-user-template)
3) [PowerShell - Group](https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-role-assignments-group-powershell)
4) [PowerShell - User](https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-role-assignments-user-powershell)
<a id="custom">***Custom RBAC Roles***</a>
If these roles do not provide the right levels of access, or enterprises want to mix a number of different built-in roles together for a more comprehensive set of access permissions, customers can lean on custom RBAC roles for user or group assignment. Custom RBAC roles typically mean an enterprise takes a built-in role and tweaks it to match the needs of business units, groups, or individual users.
Examples of how to configure can be found in the following documentation:
1) [PowerShell](https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-powershell)
2) [Azure CLI](https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-cli)
<a id="accesskeys">***Access Keys***</a>
Upon storage account creation, Azure generates two 512-bit storage account [access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-manage#access-keys). The keys can be used to authorize access to the newly created storage account via Shared Key, and they can be rotated regularly without interruption to the applications that use them; a rotation sketch follows below. The storage account key is like a root password for any given storage account, so Microsoft recommends avoiding distributing the key or saving it in plaintext anywhere accessible to others.
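Key rotation can be scripted; a minimal sketch with azure-mgmt-storage (names are placeholders) regenerates one key while applications authenticate with the other, then the roles are swapped:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountRegenerateKeyParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Regenerate key1 while clients authenticate with key2; once clients have
# been repointed at the fresh key1, repeat the process for key2.
client.storage_accounts.regenerate_key(
    "backup-rg",
    "mybackupaccount",
    StorageAccountRegenerateKeyParameters(key_name="key1"),
)
keys = client.storage_accounts.list_keys("backup-rg", "mybackupaccount")
print([k.key_name for k in keys.keys])
```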
<a id="sas">***Shared Access Signature***</a>
A shared access signature (SAS) provides secure delegated access to resources in a given storage account without compromising the security of stored data. With a SAS, customers hold granular control over how any given user or client can access data. Enterprises control what resources the user or client may access, what permissions are allowed on each resource, how long the SAS is valid, etc.
A shared access signature is a signed URI that points to one or more storage resources and includes a token that contains a special set of query parameters. The token indicates how resources may be accessed by the user. One of the query parameters (the signature) is constructed from the SAS parameters and signed with the key that was used to create the SAS. This signature is used by Azure Storage to authorize access to the storage resource.
<a id="sassignature">***SAS Signature***</a>
Signing a SAS can be done in one of two ways:
1) With a user delegation key that was created using Azure Active Directory (Azure AD) credentials. A user delegation SAS is signed with the user delegation key. To obtain the user delegation key and create the SAS, an Azure AD security principal must be assigned a role-based access control (RBAC) role that includes the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action.
2) With the storage account key. Both a service SAS and an account SAS are signed with the storage account key. To create a SAS that is signed with the account key, an application must have access to the account key.
<a id="sastoken">***SAS Token***</a>
The SAS token is a string that is generated on the client side, for example by using one of the Azure Storage client libraries. The SAS token is not tracked by Azure Storage in any way. Customers can create an unlimited number of SAS tokens on the client side. After a SAS is created, customers can distribute the token to client applications that require access to resources in a storage account.
When a client application provides a SAS URI to Azure Storage as part of a request, the service checks the SAS parameters and signature to verify if it is valid for authorizing the request. If the service verifies the signature is valid, then the request is authorized. Otherwise, the request is declined with error code 403 (Forbidden).
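A sketch of generating a short-lived, read-only container SAS on the client side with the azure-storage-blob package; the account name and key are placeholders:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# The token is computed entirely client-side; Azure Storage never stores it.
sas_token = generate_container_sas(
    account_name="mybackupaccount",
    container_name="backups",
    account_key="<account-key>",  # signs the token; keep this secret
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=2),
)
sas_url = f"https://mybackupaccount.blob.core.windows.net/backups?{sas_token}"
print(sas_url)
```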
<a id="serviceprincipal">***Service Principals***</a>
As enterprises select the right backup solution, there may be a need to create a [Service Principal](https://docs.microsoft.com/en-us/powershell/azure/create-azure-service-principal-azureps?view=azps-2.7.0) in Azure Active Directory. An Azure service principal is an identity created for use with applications, hosted services, and automated tools that access Azure resources. Access is restricted by assigning roles to the service principal, giving customers control over which resources can be accessed and at what level. For security purposes, Microsoft recommends using service principals with automated tooling rather than user identities. Please refer to the following [documentation](https://docs.microsoft.com/en-us/azure-stack/operator/azure-stack-create-service-principals?view=azs-1908#manage-an-azure-ad-service-principal) on how to set up a service principal.
<a id="managedid">***Managed Identities***</a>
Much like service principals, certain backup solutions may allow the use of a [Managed Identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). This feature provides Azure resources with an automatically managed identity in Azure AD for a more secure way of accessing workloads. By design, no credentials are known to any Azure administrator. For a tutorial on how to set this up, please refer to the following [documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm). Note that this only works for resources running in Azure, such as a VM with a system-assigned identity; a sketch of data access with a managed identity follows.
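From a resource with a managed identity, blob access needs no stored secret at all. A sketch with the azure-identity and azure-storage-blob packages; the account URL is a placeholder, and the identity must hold an RBAC data role such as Storage Blob Data Reader:

```python
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# On an Azure VM with a system-assigned identity, the token comes from the
# instance metadata endpoint; no key or password is stored anywhere.
service = BlobServiceClient(
    "https://mybackupaccount.blob.core.windows.net",
    credential=ManagedIdentityCredential(),
)
for container in service.list_containers():
    print(container.name)
```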

Azure Storage Archive and Compliance
------------------------------------
At the intersection of data protection and archive comes a topic that more and more customers must account for when evaluating a cloud vendor: compliance. Immutable storage for Azure Blob storage enables customers to store critical data objects in a WORM (Write Once, Read Many) state. Using WORM, data remains in an unmodifiable state for a specified interval of time, usually as required by governmental regulations, secure document retention, or a legal hold. Blob objects can be created and read, but not modified or deleted, for the duration of the specified amount of time.
Immutable storage supports [time-based retention](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#time-based-retention) ([time-based retention supported values](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#time-based-retention-1)), [legal holds](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#legal-holds) ([legal hold supported values](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#legal-hold)), all blob tiers, container level configuration, and audit logging. Time-based retention and legal holds are the actual policies related to immutable storage. When a policy is applied to a storage container, all existing blobs move into an immutable WORM state in less than 30 seconds. All new blobs uploaded to that container will also move into an immutable state. After all blobs move into an immutable state, the immutable policy is confirmed, and all overwrite or delete operations for existing and new objects are not allowed.
Container and Account deletion are also not allowed if there are any blobs protected by an immutable policy. The Delete Container operation will fail if at least one blob exists with a locked time-based retention policy or a legal hold. The storage account deletion will fail if there is at least one WORM container with a legal hold or a blob with an active retention interval.
Customers can set up immutable storage by using one of the following methods (a Python sketch follows the list):
1) [Azure Portal](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#azure-portal)
2) [Azure CLI](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#azure-cli)
3) [PowerShell](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage#sample-powershell-code)
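As a sketch, a time-based retention policy can be applied at the container level with azure-mgmt-storage. The names are placeholders, and the method and parameter shapes follow recent SDK versions, so treat this as an assumption and check it against your installed version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import ImmutabilityPolicy

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Hold every blob in the container in a WORM state for 7 years (2,555 days).
# The policy starts unlocked (testable and removable); locking it later
# makes the retention interval irreversible.
client.blob_containers.create_or_update_immutability_policy(
    "backup-rg",
    "mybackupaccount",
    "backups",
    parameters=ImmutabilityPolicy(
        immutability_period_since_creation_in_days=2555
    ),
)
```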
Soft Delete
-----------
[Soft delete](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-soft-delete) for Azure Storage blobs gives customers the ability to save and recover data when blobs or blob snapshots have been deleted. Turning this feature on allows customers to protect data that is erroneously modified or deleted by an application or another storage account user. With soft delete, if data is overwritten, a soft-deleted snapshot is generated to save the state of the overwritten data. Soft-deleted objects are invisible unless explicitly listed. Enterprises can specify how long soft-deleted data remains recoverable before it is permanently deleted; a sketch of enabling the feature follows.
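A sketch of turning on blob soft delete with a 14-day recovery window using the data-plane SDK (azure-storage-blob); the connection string is a placeholder:

```python
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient.from_connection_string("<connection-string>")

# Keep deleted blobs and snapshots recoverable for 14 days.
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=True, days=14)
)
```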

Data Protection and Cost Management Strategies Using Azure
----------------------------------------------------------
Typically, enterprises select cloud-based backup because the cloud is an inexpensive alternative in terms of total cost of ownership (TCO). Firms are no longer spending money on capex for duplicative infrastructure at a co-lo, or on multiple tape libraries and tape types. In most backup-to-cloud scenarios, the backup data is already online and does not require a lengthy amount of time to restore. Using Azure, enterprises will select certain Azure Storage SKUs for data protection based upon cost and need.
<a id="augment">***Augment Existing or Replace On-Premises Backup Solution***</a>
Within the traditional backup space, customers want to be protected in the event of any system failure or catastrophic event. Prior to Azure, this would mean maintaining disk-based backups on-site, off-site, and placing backups on tape. By introducing Azure into the mix, a customer could think through a scenario where they maintain a production backup copy on-site, a tape-based backup on-site, a production copy off-site, and an Azure-based backup in a GRS storage account, sitting in the Cool or Archive tier. Depending upon the scenario, customers could even have the off-site backup reside in Azure rather than at a separate site.
<a id="archiverehydration">***Archive Storage Rehydration***</a>
By leaning on [Archive Storage](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers), customers can use an inexpensive SKU to help alleviate the duplicative costs of maintaining additional hardware or corresponding infrastructure. Additionally, archive is the lowest-cost storage option in Azure.
While a blob is in the archive access tier, it is considered offline and cannot be read or modified; its metadata, however, remains available. The only way to restore data from Archive Blob Storage is to change the tier to hot or cool. This process is known as [rehydration](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration) and can take hours to complete. Larger blob sizes typically rehydrate more efficiently, while rehydrating several small blobs concurrently may add additional time.
There are [two priorities](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration#rehydrate-an-archived-blob-to-an-online-tier) for retrieval from Archive Blob Storage: Standard and High. Standard priority is the default option for archive; each request is processed in the order it was received and may take up to 15 hours. High priority requests are guaranteed to be prioritized over Standard priority requests and may finish in under 1 hour, though they may take longer depending upon blob size and current demand. High priority also increases the retrieval cost in Azure.
Another way to retrieve blobs from archive is to [copy an archived blob to an online tier](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration#rehydrate-an-archived-blob-to-an-online-tier). Enterprises can choose a [Copy Blob](https://docs.microsoft.com/rest/api/storageservices/copy-blob) operation, which leaves the original blob in an unmodified state in archive while the new blob becomes available to work on in the online tier. Archive blobs can only be copied to online destination tiers. Copying a blob from archive takes time, but the rehydrate priority property can be switched from Standard to High when using the copy process. Behind the scenes, the Copy Blob operation temporarily rehydrates the archive source blob to create a new online blob in the destination tier. This new blob is not available until the temporary rehydration from archive is complete and the data is written to the new blob. A sketch of both retrieval paths follows.
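A sketch of both rehydration paths with the azure-storage-blob package (v12.4+ for the rehydrate_priority argument); the connection string and names are placeholders:

```python
from azure.storage.blob import BlobClient

archived = BlobClient.from_connection_string(
    "<connection-string>", container_name="backups", blob_name="2019.bak"
)

# Path 1: rehydrate in place. The blob stays unreadable until rehydration
# finishes (poll archive_status on get_blob_properties()).
archived.set_standard_blob_tier("Hot", rehydrate_priority="High")

# Path 2: copy to a new online blob, leaving the archived original intact.
restored = BlobClient.from_connection_string(
    "<connection-string>", container_name="restores", blob_name="2019.bak"
)
restored.start_copy_from_url(
    archived.url, standard_blob_tier="Hot", rehydrate_priority="High"
)
```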
<a id="lifecycle">***Azure Blob Storage Lifecycle Management***</a>
Another strategy related to data protection surrounds using [Azure Blob Storage lifecycle management](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts). By selecting a GPv2 storage account, a Blob storage account, or a Premium Block Blob storage account, customers can delve deep into building out a lifecycle management policy that fits the needs of each specific business (and sometimes business unit). Customers can transition blobs to cooler tiers (for example, cool to archive) as a cost-savings benefit. Leaning on a lifecycle management policy, customers are also able to delete blobs at the end of a retention period or lifecycle. Rules run once a day at the storage account level to allow for automatic tiering based upon specific rulesets. Azure's archive storage stores data offline and offers low storage costs, but the highest data rehydration and access costs.
When a blob is moved to a cooler tier, the operation is billed as a write operation to the destination tier, where the write operation charges of the destination tier apply. When a blob is moved to a warmer tier, the operation is billed as a read from the source tier, where the read operation and data retrieval charges of the source tier apply.
For a comparison and breakdown of block blob storage options, please refer to the following chart:
| | Cool Tier | Archive Tier |
| ----------- | ----------- | ----------- |
| Availability | 99% | Offline |
| Usage Charges | Lower storage costs, higher access and transaction costs | Lowest storage costs, highest access and transaction costs |
| Minimum Object Size | N/A | N/A |
| Minimum Storage Duration | 30 days | 180 days |
| Latency (Time to First Byte) | Milliseconds | Hours |
This chart is adapted from the following [link](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers#comparing-block-blob-storage-options). Microsoft maintains a more detailed comparison of block blob storage options, so customers have a better understanding of storage performance as it relates to each storage account tier.
Setting up Azure Blob tiering can be completed using any of the following methods (a sketch of a lifecycle policy follows the list):
* [Azure Portal](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal#add-or-remove-a-policy)
* [Azure PowerShell](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-powershell#add-or-remove-a-policy)
* [Azure CLI](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-cli#add-or-remove-a-policy)
* [REST APIs](https://docs.microsoft.com/en-us/rest/api/storagerp/managementpolicies)
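A sketch of a lifecycle rule that tiers backup blobs to cool after 30 days, to archive after 90, and deletes them after roughly 7 years. The rule JSON follows the lifecycle management schema; applying it as a plain dict through azure-mgmt-storage is an assumption based on recent SDK versions, and the names and thresholds are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One rule covering block blobs under the backups/ prefix: cool at 30 days,
# archive at 90 days, delete at ~7 years (2,555 days) after last modification.
policy = {
    "rules": [
        {
            "name": "age-out-backups",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["backups/"],
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                        "delete": {"daysAfterModificationGreaterThan": 2555},
                    }
                },
            },
        }
    ]
}

# The management policy name must be "default".
client.management_policies.create_or_update(
    "backup-rg", "mybackupaccount", "default", {"policy": policy}
)
```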
As a best practice, companies can make use of Azure Storage to augment an existing on-premises backup solution or replace it with an Azure-based solution. The flexibility customers hold related to backup buildout, configuration, blob tiering, and cost optimization should make Azure an easy landing ground for future workloads. Making use of Azure Archive Storage offers customers an inexpensive alternative to maintaining duplicate infrastructure, co-lo space, backup tapes, and WAN optimizers. Paying extra for priority retrieval can be done on an as-needed basis; if high priority turns out not to be necessary, standard priority will get backups restored within approximately 15 hours.

Partner Integration and First Party
-----------------------------------
Microsoft works closely with backup partners to broaden the backup narrative for enterprises and mutually deliver solutions on Azure. In addition, Microsoft offers a first-party native solution that customers can choose to deploy as well.
<a id="azurebkup">***Azure Backup***</a>
[Azure Backup](https://azure.microsoft.com/en-us/services/backup/) is Microsoft's native, first-party backup service. It covers [several different backup scenarios](https://docs.microsoft.com/en-us/azure/backup/backup-overview#what-backup-scenarios-are-supported) and offers the flexibility of backing up Azure VMs, on-premises servers, files, and various Microsoft-specific workloads like SharePoint or SQL. Across the supported backup scenarios, Azure Backup coordinates with the Volume Shadow Copy Service (VSS) to take app-consistent snapshots on Windows servers, and uses a VSS-like generic framework to ensure application-consistent VM backups for Linux applications running on any Linux distribution.
<a id="cohesity">***Cohesity Cloud Edition***</a>
Cohesity delivers a web-scale platform that consolidates all secondary storage and data services onto one unified, efficient solution, extending data to Azure to take advantage of its scalability and cost-effectiveness. Cohesity supports long-term retention in Azure by archiving backup data directly. Backup data is deduplicated, compressed, and indexed for fast retrieval and search, both back to on-premises and from the cloud. Cohesity also supports storage tiering in Azure by utilizing policy-based thresholds to move cold data. Customers can leverage Azure Blob Storage as another tier of data and then tier data back to an on-premises cluster. Cohesity CloudEdition can also be deployed in Azure to back up applications running on customer premises. This deployment method eliminates the need to deploy backup software and target storage on-premises, and instead sends all backup data straight to Azure. For more information, head [here](https://www.cohesity.com/).
<a id="commvault">***Commvault***</a>
Customers can take advantage of the deep partnership between Commvault and Microsoft by mitigating risks and realizing improved control over data using native backup support with Azure. Through Commvault's end-to-end approach, customers can extend on-premises backup solutions to Azure for a hybrid configuration and/or lean on cloud-only backups for a wide-ranging solution that will meet any needs driven by business units or decision makers. The management plane organizes any given backup estate into a single platform that employs centralized policies to assure data governance. Customers also have comprehensive data backup, recovery, management, and eDiscovery capabilities to maximize the use of Azure. Commvault integrates with Azure Virtual Machines, Azure SQL Databases, and Azure Blob Storage (Hot and Cool). Additionally, data management can be enabled for Exchange, SharePoint, SQL, Active Directory, and Office 365. Instructions for setup are in the following locations:
* [Virtual Machines](http://documentation.commvault.com/commvault/v11/article?p=31252.htm)
* [Azure SQL Databases](https://documentation.commvault.com/commvault/v11_sp8/article?p=products/sql/c_sql_azr_bckp.htm)
* [Azure Blob Storage](http://documentation.commvault.com/commvault/v11/article?p=30063.htm)
<a id="ibm">***IBM Spectrum Plus***</a>
IBM Spectrum Plus is a modern data protection solution that provides near-instant recovery, replication, reuse, and self-service for virtual machines, databases, and applications in hybrid or multicloud environments. The agentless architecture, combined with the virtual appliance setup in Azure, make the solution easy to maintain. IBM Spectrum Plus integrates with object storage APIs on Azure to provide low-cost, long-term data retention and disaster recovery for all supported workloads. The policies support data offloading, therefore, a single data protection policy can govern data backup, replication, and offload. Secure long-term data storage can be achieved by using IBM Cloud Object Storage immutable object storage features, such as retention enabled buckets (WORM). Instructions for setup are located [here](https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.2/srv.common/r_techchg_srv_azure_812.html).
<a id="rubrik">***Rubrik Cloud Data Management***</a>
Protect cloud-native applications by writing to Azure's Blob Storage service. Customers can take application-consistent snapshots of cloud instances of Windows or Linux VMs, Microsoft Exchange, SharePoint, SQL Server, Active Directory, and Oracle RDBMS. Enterprises can maximize storage efficiency with global deduplication that scales inline with the cloud-based cluster. Rubrik supports SLA policies for VMs, applications, and databases by selecting the desired snapshot capture frequency, retention duration, and desired location. Automate custom lifecycle management workflows with a rich suite of RESTful APIs to quickly move local data to the cloud and intelligently manage cloud data according to business needs. Instantly search for files across all snapshots with suggested search results, and deliver rapid recoveries for files, folders, file sets, applications, and database instances in the cloud. Additionally, Rubrik delivers real-time platform insights on data management, compliance, and capacity planning across the entire cloud environment, with actionable insights from data visualization. For more information, head [here](https://www.rubrik.com/).
<a id="veeam">***Veeam Cloud Connect***</a>
Veeam enables businesses to act as a disaster recovery (DR) service provider by delivering off-site backup and replication services. With Veeam Cloud Connect, moving backups off-site to Azure is simple and seamless. Microsoft Azure customers can instantly provision enterprise cloud backup repositories and automatically move on-premises Veeam-powered backup archives to Azure. This powerful combination of Microsoft and Veeam provides enterprise customers with cost-effective Azure storage plus the granular file-level recovery capabilities of Veeam.
Key Highlights:
1) Easily deploy Veeam-powered backups to Azure. Customers can provision a trial version of Veeam Cloud Connect directly from the Microsoft Azure Marketplace within a short period of time.
2) Quickly copy off-site backups to Azure. Veeam's modern backup architecture makes it easy to transfer enterprise backups to Azure with forever-forward incremental backups, grandfather-father-son (GFS) retention policies, and built-in job copy functions.
Instructions for setup are located [here](https://helpcenter.veeam.com/docs/backup/hyperv/adding_azure_object_storage.html?ver=95u4).
<a id="veritas">***Veritas Cloud Point***</a>
Veritas helps organizations accelerate data center transformation to Azure by providing tooling to ensure control over a corporations backup estate. Veritas provides a single management console across private clouds, Azure, and other public cloud vendors. The product also ensures automated discovery and backup with no need to install agents. Veritas delivers application-consistent snapshots for a wide range of applications and offers granular search + recovery down to the individual file level. For regulated firms, customers can maintain and control Personally Identifiable Information. As an extra level of protection, enterprises can enable replication to other regions for added disaster recovery readiness. Instructions for setup are located [here](https://www.veritas.com/content/support/en_US/doc/58500769-127471507-0/v118626392-127471507).