Shawn Weisfeld Build 2022-11-09 01:23:52 +00:00
Parent bae8cc4586
Commit 125bb3454e
58 changed files with 1334 additions and 732 deletions

View File

@@ -14,7 +14,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book

View File

@@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
@@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/categories/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -255,10 +255,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -1 +1,10 @@
<!DOCTYPE html><html><head><title>https://azure.github.io/Storage/categories/</title><link rel="canonical" href="https://azure.github.io/Storage/categories/"/><meta name="robots" content="noindex"><meta charset="utf-8" /><meta http-equiv="refresh" content="0; url=https://azure.github.io/Storage/categories/" /></head></html>
<!DOCTYPE html>
<html lang="en-us">
<head>
<title>https://azure.github.io/Storage/categories/</title>
<link rel="canonical" href="https://azure.github.io/Storage/categories/">
<meta name="robots" content="noindex">
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url=https://azure.github.io/Storage/categories/">
</head>
</html>

View File

@@ -2,13 +2,13 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
<meta name="description" content="Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 combines features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and low cost scalability, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Gen1 and Gen2 ACL Behavior Analysis" />
<meta property="og:description" content="Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
<meta property="og:description" content="Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 combines features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and low cost scalability, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/adls-gen1-and-gen2-acl-behavior/" /><meta property="article:section" content="docs" />
@@ -19,7 +19,7 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -37,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -180,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -373,10 +373,10 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -2,22 +2,22 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
<meta name="description" content="Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
This can be applicable for below migration patterns:
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount.">
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Application and Workload Update" />
<meta property="og:description" content="Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
<meta property="og:description" content="Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
This can be applicable for below migration patterns:
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount." />
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/application-update/" /><meta property="article:section" content="docs" />
@@ -27,7 +27,7 @@ This can be applicable for below migration patterns:
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -45,7 +45,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -188,7 +188,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -416,10 +416,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -2,12 +2,13 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Bi-directional sync pattern Guide: A quick start template # Overview # This manual will introduce WANdisco as a recommended tool to set up bi-directional sync between ADLS Gen1 and Gen2 using the Replication feature.
<meta name="description" content="Bi-directional sync pattern Guide: A quick start template # Overview # This manual will introduce WANdisco as a recommended tool to set up bi-directional sync between ADLS Gen1 and Gen2 using the Replication feature.
Below will be covered as part of this guide:
Data Migration from Gen1 to Gen2 Data Consistency Check Application update for ADF, ADB and SQL DWH workloads Considerations for using the bi-directional sync pattern:">
Data Migration from Gen1 to Gen2 Data Consistency Check Application update for ADF, ADB and SQL DWH workloads Considerations for using the bi-directional sync pattern:
Ideal for complex scenarios that involve a large number of pipelines and dependencies where a phased approach might make more sense.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Bi-directional sync pattern Guide: A quick start template" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@@ -17,7 +18,7 @@ Below will be covered as part of this guide:
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/bi-directional/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@@ -36,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -179,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -294,12 +295,12 @@ https://github.com/alex-shpak/hugo-book
<li>
<p><strong>Start the Fusion</strong></p>
<p>Go to <strong>SSH Client</strong> <a href="./wandisco-set-up-and-installation/#connect-to-vm">Connect</a> and run below commands:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-scala" data-lang="scala">cd fusion<span style="color:#f92672">-</span>docker<span style="color:#f92672">-</span>compose <span style="color:#75715e">// Change to the repository directory
</span><span style="color:#75715e"></span>
<span style="color:#f92672">./</span>setup<span style="color:#f92672">-</span>env<span style="color:#f92672">.</span>sh <span style="color:#75715e">// set up script
</span><span style="color:#75715e"></span>
docker<span style="color:#f92672">-</span>compose up <span style="color:#f92672">-</span>d <span style="color:#75715e">// start the fusion
</span></code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-scala" data-lang="scala"><span style="display:flex;"><span>cd fusion<span style="color:#f92672">-</span>docker<span style="color:#f92672">-</span>compose <span style="color:#75715e">// Change to the repository directory
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">./</span>setup<span style="color:#f92672">-</span>env<span style="color:#f92672">.</span>sh <span style="color:#75715e">// set up script
</span></span></span><span style="display:flex;"><span><span style="color:#75715e"></span>
</span></span><span style="display:flex;"><span>docker<span style="color:#f92672">-</span>compose up <span style="color:#f92672">-</span>d <span style="color:#75715e">// start the fusion
</span></span></span></code></pre></div></li>
<li>
<p><strong>Login to Fusion UI</strong>. Open the web browser and give the path as below</p>
<p>URL &ndash;&gt; http://{dnsname}:8081</p>
@@ -516,10 +517,10 @@ After all the applications and workloads are stable on Gen2, Turn off any remain
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -12,11 +12,12 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/bi-directional/wandisco-set-up-and-installation/</guid>
<description>WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1</description>
<description>WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details please refer to create azure storage account</description>
</item>
</channel>

View File

@@ -2,20 +2,22 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1">
<meta name="description" content="WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details please refer to create azure storage account">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="WANdisco Fusion Set up and Installation Guide" />
<meta property="og:description" content="WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1" />
<meta property="og:description" content="WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details please refer to create azure storage account" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/bi-directional/wandisco-set-up-and-installation/" /><meta property="article:section" content="docs" />
@@ -25,7 +27,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -43,7 +45,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -186,7 +188,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -386,16 +388,16 @@ https://github.com/alex-shpak/hugo-book
<ol>
<li>
<p>Clone the Fusion docker repository using below command in SSH Client:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-scala" data-lang="scala">git clone https<span style="color:#66d9ef">:</span><span style="color:#75715e">//github.com/WANdisco/fusion-docker-compose.git
</span></code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-scala" data-lang="scala"><span style="display:flex;"><span>git clone https<span style="color:#66d9ef">:</span><span style="color:#75715e">//github.com/WANdisco/fusion-docker-compose.git
</span></span></span></code></pre></div></li>
<li>
<p>Change to the repository directory:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-scala" data-lang="scala">cd fusion<span style="color:#f92672">-</span>docker<span style="color:#f92672">-</span>compose
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-scala" data-lang="scala"><span style="display:flex;"><span>cd fusion<span style="color:#f92672">-</span>docker<span style="color:#f92672">-</span>compose
</span></span></code></pre></div></li>
<li>
<p>Run the setup script:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-scala" data-lang="scala"><span style="color:#f92672">./</span>setup<span style="color:#f92672">-</span>env<span style="color:#f92672">.</span>sh
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-scala" data-lang="scala"><span style="display:flex;"><span><span style="color:#f92672">./</span>setup<span style="color:#f92672">-</span>env<span style="color:#f92672">.</span>sh
</span></span></code></pre></div></li>
<li>
<p>Enter the option <strong>4</strong> for <strong>Custom deployment</strong></p>
<p><img src="../../images/80396711-e6012600-8869-11ea-8161-cfbcc25c3170.png" alt="image" /></p>
@@ -430,8 +432,8 @@ https://github.com/alex-shpak/hugo-book
</li>
<li>
<p>To start the Fusion run the below command:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-scala" data-lang="scala">docker<span style="color:#f92672">-</span>compose up <span style="color:#f92672">-</span>d
</code></pre></div><p><img src="../../images/80407953-4ba9de00-887b-11ea-97a5-2baa8683943d.png" alt="image" /></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-scala" data-lang="scala"><span style="display:flex;"><span>docker<span style="color:#f92672">-</span>compose up <span style="color:#f92672">-</span>d
</span></span></code></pre></div><p><img src="../../images/80407953-4ba9de00-887b-11ea-97a5-2baa8683943d.png" alt="image" /></p>
</li>
</ol>
<h2 id="adls-gen1-and-gen2-configuration">
@@ -541,10 +543,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint.">
<meta name="description" content="Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Dual Pipeline Pattern Guide: A quick start template" />
<meta property="og:description" content="Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint." />
<meta property="og:description" content="Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/dual-pipeline/" /><meta property="article:section" content="docs" />
@@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -493,10 +493,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -2,10 +2,10 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Incremental Copy Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the Incremental copy pattern from Azure Data Lake Storage 1 (Gen1) to Azure Data Lake Storage 2 (Gen2) using Azure Data Factory and PowerShell. As such it provides the directions, references, sample code examples of the PowerShell functions been used. It is intended to be used in form of steps to follow to implement the solution from local machine.">
<meta name="description" content="Incremental Copy Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the Incremental copy pattern from Azure Data Lake Storage 1 (Gen1) to Azure Data Lake Storage 2 (Gen2) using Azure Data Factory and PowerShell. As such it provides the directions, references, sample code examples of the PowerShell functions been used. It is intended to be used in form of steps to follow to implement the solution from local machine.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Incremental Copy Pattern Guide: A quick start template" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/incremental/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -274,19 +274,19 @@ This guide covers the following tasks:</p>
<blockquote>
<p>Note: Run as administrator</p>
</blockquote>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">// Run below code to enable running PS files
Set-ExecutionPolicy Unrestricted
// Check <span style="color:#66d9ef">for</span> the below modules <span style="color:#66d9ef">in</span> PowerShell . <span style="color:#66d9ef">If</span> not existing, install one by one<span style="color:#960050;background-color:#1e0010">:</span>
Install-Module Az.Accounts -AllowClobber -Force
Install-Module Az.DataFactory -AllowClobber -Force
Install-Module Az.KeyVault -AllowClobber -Force
Install-Module Az.DataLakeStore -AllowClobber -Force
Install-Module PowerShellGet <span style="color:#960050;background-color:#1e0010"></span>Repository PSGallery <span style="color:#960050;background-color:#1e0010"></span>Force
// Close the PowerShell ISE and Reopen as administrator. Run the below module
Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>// Run below code to enable running PS files
</span></span><span style="display:flex;"><span>Set-ExecutionPolicy Unrestricted
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>// Check <span style="color:#66d9ef">for</span> the below modules <span style="color:#66d9ef">in</span> PowerShell . <span style="color:#66d9ef">If</span> not existing, install one by one<span style="color:#960050;background-color:#1e0010">:</span>
</span></span><span style="display:flex;"><span>Install-Module Az.Accounts -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.DataFactory -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.KeyVault -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.DataLakeStore -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module PowerShellGet <span style="color:#960050;background-color:#1e0010"></span>Repository PSGallery <span style="color:#960050;background-color:#1e0010"></span>Force
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>// Close the PowerShell ISE and Reopen as administrator. Run the below module
</span></span><span style="display:flex;"><span>Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
</span></span></code></pre></div></li>
</ul>
<h2 id="limitations">
Limitations
@@ -342,43 +342,43 @@ Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery
<li>Make an entry of Gen2 connection string in the key vault as shown below :</li>
</ul>
<p><img src="../images/78953831-f1dda180-7a8e-11ea-82e9-07aa66fd2856.png" alt="image" /></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">// Below is the code snapshot <span style="color:#66d9ef">for</span> setting the configuration file
// to connect to azure data factory<span style="color:#960050;background-color:#1e0010">:</span>
<span style="color:#e6db74">&#34;gen1SourceRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen1 source root path&gt;&gt;.azuredatalakestore.net/webhdfs/v1&#34;</span>,
<span style="color:#e6db74">&#34;gen2DestinationRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen2 destination root path&gt;&gt;.dfs.core.windows.net&#34;</span>,
<span style="color:#e6db74">&#34;tenantId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the tenantId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;subscriptionId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the subscriptionId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;servicePrincipleId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;servicePrincipleSecret&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleSecret Key&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;keyVaultName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the keyVaultName&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;factoryName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the factoryName&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;resourceGroupName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the resourceGroupName under which the azure data factory pipeline will be created&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;location&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the location&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;overwrite&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the value&#34;</span> // True = It will overwrite the existing data factory ,False = It will skip creating data factory
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>// Below is the code snapshot <span style="color:#66d9ef">for</span> setting the configuration file
</span></span><span style="display:flex;"><span>// to connect to azure data factory<span style="color:#960050;background-color:#1e0010">:</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;gen1SourceRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen1 source root path&gt;&gt;.azuredatalakestore.net/webhdfs/v1&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;gen2DestinationRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen2 destination root path&gt;&gt;.dfs.core.windows.net&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;tenantId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the tenantId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;subscriptionId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the subscriptionId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;servicePrincipleId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;servicePrincipleSecret&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleSecret Key&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;keyVaultName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the keyVaultName&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;factoryName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the factoryName&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;resourceGroupName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the resourceGroupName under which the azure data factory pipeline will be created&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;location&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the location&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;overwrite&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the value&#34;</span> // True = It will overwrite the existing data factory ,False = It will skip creating data factory
</span></span></code></pre></div></li>
<li>
<p><strong>Scheduling the factory pipeline for incremental copy pattern</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"><span style="color:#e6db74">&#34;pipelineId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter distinct pipeline id eg 1,2,3,..40&#34;</span>,
<span style="color:#e6db74">&#34;isChurningOrIsIncremental&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;true&#34;</span>,
<span style="color:#e6db74">&#34;triggerFrequency&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Provide the frequency in Minute or Hour&#34;</span>,
<span style="color:#e6db74">&#34;triggerInterval&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the time interval for scheduling (Minimum trigger interval time = 15 minute)&#34;</span>,
<span style="color:#e6db74">&#34;triggerUTCStartTime&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter UTC time to start the factory for Incremental copy pattern .Eg 2020-04-09T18:00:00Z&#34;</span>,
<span style="color:#e6db74">&#34;triggerUTCEndTime&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the UTC time to end the factory for Incremental copy pattern. Eg 2020-04-10T13:00:00Z&#34;</span>,
<span style="color:#e6db74">&#34;pipelineDetails&#34;</span><span style="color:#960050;background-color:#1e0010">:</span>[
// Activity 1 //
<span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. Eg: /path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path.Eg: path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
// Activity 2 //
<span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. Eg: /path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path.Eg: path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
// Note <span style="color:#960050;background-color:#1e0010">:</span> Maximum activities per pipeline is 40
</code></pre></div><blockquote>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span><span style="color:#e6db74">&#34;pipelineId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter distinct pipeline id eg 1,2,3,..40&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;isChurningOrIsIncremental&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;true&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;triggerFrequency&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Provide the frequency in Minute or Hour&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;triggerInterval&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the time interval for scheduling (Minimum trigger interval time = 15 minute)&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;triggerUTCStartTime&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter UTC time to start the factory for Incremental copy pattern .Eg 2020-04-09T18:00:00Z&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;triggerUTCEndTime&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the UTC time to end the factory for Incremental copy pattern. Eg 2020-04-10T13:00:00Z&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;pipelineDetails&#34;</span><span style="color:#960050;background-color:#1e0010">:</span>[
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> // Activity 1 //
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. Eg: /path-name&#34;</span>,
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path.Eg: path-name&#34;</span>,
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span> // Activity 2 //
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. Eg: /path-name&#34;</span>,
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path.Eg: path-name&#34;</span>,
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>// Note <span style="color:#960050;background-color:#1e0010">:</span> Maximum activities per pipeline is 40
</span></span></code></pre></div><blockquote>
<p>Note: Please note the <strong>destinationPath</strong> string will not be having Gen2 container name. It will have the file path same as Gen1. Review the <code>Configuration/IncrementalLoadConfig.json</code> script for more reference.</p>
</blockquote>
</li>
@@ -451,10 +451,10 @@ Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View File

@@ -2,11 +2,11 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure Data Lake Storage Gen1 to Gen2 Migration Sample # Welcome to the documentation on migration from Gen1 to Gen2. Please review the Gen1-Gen2 Migration Approach guide to understand the patterns and approach. You can choose one of these patterns, combine them together, or design a custom pattern of your own.
NOTE: On July 14 2021 we released a Limited preview of a feature to Migrate your Azure Data Lake Storage from Gen1 to Gen2 using the Azure Portal.">
<meta name="description" content="Azure Data Lake Storage Gen1 to Gen2 Migration Sample # Welcome to the documentation on migration from Gen1 to Gen2. Please review the Gen1-Gen2 Migration Approach guide to understand the patterns and approach. You can choose one of these patterns, combine them together, or design a custom pattern of your own.
NOTE: On July 14 2021 we released a Limited preview of a feature to Migrate your Azure Data Lake Storage from Gen1 to Gen2 using the Azure Portal.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure Data Lake Storage Gen1 to Gen2 Migration Sample" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@@ -16,7 +16,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -326,10 +326,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -12,12 +12,12 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/application-update/</guid>
<description>Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
<description>Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
This is applicable to the following migration patterns:
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount.</description>
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount.</description>
</item>
<item>
@ -26,7 +26,7 @@ This can be applicable for below migration patterns:
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/dual-pipeline/</guid>
<description>Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint.</description>
<description>Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint.</description>
</item>
<item>
@ -35,7 +35,7 @@ This can be applicable for below migration patterns:
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/adls-gen1-and-gen2-acl-behavior/</guid>
<description>Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&amp;rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
<description>Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&amp;rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 combines features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and low cost scalability, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage.</description>
</item>

View file

@ -2,10 +2,10 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Lift and Shift Copy Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual in form of step by step guide for the lift and shift copy pattern from Gen1 to Gen2 storage using Azure Data Factory and PowerShell. As such it provides the directions, references, sample code examples of the PowerShell functions been used.
<meta name="description" content="Lift and Shift Copy Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual in form of step by step guide for the lift and shift copy pattern from Gen1 to Gen2 storage using Azure Data Factory and PowerShell. As such it provides the directions, references, sample code examples of the PowerShell functions been used.
This guide covers the following tasks:">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Lift and Shift Copy Pattern Guide: A quick start template" />
<meta property="og:description" content="" />
@ -16,7 +16,7 @@ This guide covers the following tasks:">
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/lift-and-shift/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -283,19 +283,19 @@ https://github.com/alex-shpak/hugo-book
<li>
<p><strong>Windows PowerShell ISE</strong>.</p>
<p><strong>Note</strong>: Run as administrator. A short verification sketch follows this list.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">//Run below code to enable running PS files
Set-ExecutionPolicy Unrestricted
# Check for the below modules in PowerShell. If not present, install them one by one:
Install-Module Az.Accounts -AllowClobber -Force
Install-Module Az.DataFactory -AllowClobber -Force
Install-Module Az.KeyVault -AllowClobber -Force
Install-Module Az.DataLakeStore -AllowClobber -Force
Install-Module PowerShellGet -Repository PSGallery -Force
# Close the PowerShell ISE and reopen as administrator, then run the module below
Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>//Run below code to enable running PS files
</span></span><span style="display:flex;"><span>Set-ExecutionPolicy Unrestricted
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span># Check for the below modules in PowerShell. If not present, install them one by one:
</span></span><span style="display:flex;"><span>Install-Module Az.Accounts -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.DataFactory -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.KeyVault -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.DataLakeStore -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module PowerShellGet -Repository PSGallery -Force
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span># Close the PowerShell ISE and reopen as administrator, then run the module below
</span></span><span style="display:flex;"><span>Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
</span></span></code></pre></div></li>
</ul>
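<p>A minimal verification sketch, assuming the module names used in this guide (adjust the list to your own setup):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"># Sketch only: list the required modules and flag any that are missing
$required = 'Az.Accounts','Az.DataFactory','Az.KeyVault','Az.DataLakeStore','Az.Storage'
foreach ($name in $required) {
    if (Get-Module -ListAvailable -Name $name) {
        Write-Output ('{0} is installed' -f $name)
    } else {
        Write-Warning ('{0} is missing; install it with Install-Module {0} -AllowClobber -Force' -f $name)
    }
}
</code></pre></div>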
<h2 id="limitations">
Limitations
@ -352,31 +352,31 @@ Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery
<p>Add the Gen2 access key to the key vault as shown below:</p>
<p><img src="../images/78953831-f1dda180-7a8e-11ea-82e9-07aa66fd2856.png" alt="image" /></p>
<p><strong>Below is the code snapshot for setting up the configuration file to connect to Azure Data Factory</strong> (a loading sketch follows the note below):</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"><span style="color:#e6db74">&#34;gen1SourceRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen1 source root path&gt;&gt;.azuredatalakestore.net/webhdfs/v1&#34;</span>,
<span style="color:#e6db74">&#34;gen2DestinationRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen2 destination root path&gt;&gt;.dfs.core.windows.net&#34;</span>,
<span style="color:#e6db74">&#34;tenantId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the tenantId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;subscriptionId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the subscriptionId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;servicePrincipleId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;servicePrincipleSecret&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleSecret Key&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;keyVaultName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the keyVaultName&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;factoryName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the factoryName&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;resourceGroupName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the resourceGroupName under which the azure data factory pipeline will be created&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;location&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the location&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;overwrite&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the value&#34;</span> // True = It will overwrite the existing data factory ,False = It will skip creating data factory
</code></pre></div><p><strong>Setting up the factory pipeline for lift and shift copy pattern</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"><span style="color:#e6db74">&#34;pipelineId&#34;</span><span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the pipeline number. For example: 1,2&#34;</span>
<span style="color:#e6db74">&#34;fullLoad&#34;</span><span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;true&#34;</span>
// Activity 1
<span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. For example: /path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path. For example: path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
// Activity 2
<span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. For example: /path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path. For example: path-name&#34;</span>,
<span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span><span style="color:#e6db74">&#34;gen1SourceRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen1 source root path&gt;&gt;.azuredatalakestore.net/webhdfs/v1&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;gen2DestinationRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;https://&lt;&lt;Enter the Gen2 destination root path&gt;&gt;.dfs.core.windows.net&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;tenantId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the tenantId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;subscriptionId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the subscriptionId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;servicePrincipleId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;servicePrincipleSecret&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleSecret Key&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;keyVaultName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the keyVaultName&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;factoryName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the factoryName&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;resourceGroupName&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the resourceGroupName under which the azure data factory pipeline will be created&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;location&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the location&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;overwrite&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the value&#34;</span> // True = overwrite the existing data factory; False = skip creating the data factory
</span></span></code></pre></div><p><strong>Setting up the factory pipeline for lift and shift copy pattern</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span><span style="color:#e6db74">&#34;pipelineId&#34;</span><span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the pipeline number. For example: 1,2&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;fullLoad&#34;</span><span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;true&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>// Activity 1
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. For example: /path-name&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path. For example: path-name&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>// Activity 2
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;sourcePath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen1 full path. For example: /path-name&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;destinationPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 full path. For example: path-name&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;destinationContainer&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;Enter the Gen2 container name&#34;</span>
</span></span></code></pre></div></li>
</ul>
<blockquote>
<p>NOTE: The <strong>destinationPath</strong> string does not include the Gen2 container name; it uses the same file path as Gen1. See the <code>FullLoadConfig.json</code> script for reference.</p>
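<p>As a sketch of how this configuration might be consumed, the JSON file can be loaded in PowerShell before the pipeline is created. The file path and property names below follow the sample above and are placeholders:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"># Sketch only: load the configuration file and echo a few values
# Path and property names follow the sample above; adjust to your layout
$config = Get-Content -Path '.\Configuration\FullLoadConfig.json' -Raw | ConvertFrom-Json
Write-Output ('Factory {0} in resource group {1}' -f $config.factoryName, $config.resourceGroupName)
Write-Output ('Copy from {0} to {1}' -f $config.gen1SourceRootPath, $config.gen2DestinationRootPath)
</code></pre></div>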
@ -441,10 +441,10 @@ Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,10 +2,10 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Ageing Analysis Guide: A quick start template # Overview # The inventory Ageing analysis for any application determines the storage duration of a file, folder or data inside that. The main purpose is to find out which files, folders stay in inventory for a long time or are perhaps becoming obsolete. This also identifies the active and inactive folders in the applications from Gen1 Data Lake using directory details such as recent child modification date and size.">
<meta name="description" content="Ageing Analysis Guide: A quick start template # Overview # The inventory Ageing analysis for any application determines the storage duration of a file, folder or data inside that. The main purpose is to find out which files, folders stay in inventory for a long time or are perhaps becoming obsolete. This also identifies the active and inactive folders in the applications from Gen1 Data Lake using directory details such as recent child modification date and size.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Ageing Analysis Guide: A quick start template" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/utilities/ageing-analysis/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -281,19 +281,19 @@ https://github.com/alex-shpak/hugo-book
<blockquote>
<p>Note: Run as administrator</p>
</blockquote>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">//Run below code to enable running PS files
Set-ExecutionPolicy Unrestricted
# Check for the below modules in PowerShell. If not present, install them one by one:
Install-Module Az.Accounts -AllowClobber -Force
Install-Module Az.DataFactory -AllowClobber -Force
Install-Module Az.KeyVault -AllowClobber -Force
Install-Module Az.DataLakeStore -AllowClobber -Force
Install-Module PowerShellGet -Repository PSGallery -Force
# Close the PowerShell ISE and reopen as administrator, then run the module below
Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>//Run below code to enable running PS files
</span></span><span style="display:flex;"><span>Set-ExecutionPolicy Unrestricted
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span># Check for the below modules in PowerShell. If not present, install them one by one:
</span></span><span style="display:flex;"><span>Install-Module Az.Accounts -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.DataFactory -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.KeyVault -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module Az.DataLakeStore -AllowClobber -Force
</span></span><span style="display:flex;"><span>Install-Module PowerShellGet -Repository PSGallery -Force
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span># Close the PowerShell ISE and reopen as administrator, then run the module below
</span></span><span style="display:flex;"><span>Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
</span></span></code></pre></div></li>
</ul>
<h2 id="limitations">
Limitations
@ -331,18 +331,18 @@ Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery
</h3>
<p><strong>Important Prerequisite</strong>:</p>
<p><strong>Below is the code snapshot of the ADLS connection</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"><span style="color:#e6db74">&#34;gen1SourceRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the Gen1 source root path&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;outPutPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the path where the results needs to store&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;tenantId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the tenantId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;subscriptionId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the subscriptionId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;servicePrincipleId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleId&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;servicePrincipleSecret&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleSecret Key&gt;&gt;&#34;</span>,
<span style="color:#e6db74">&#34;dataLakeStore&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the dataLakeStore name&gt;&gt;&#34;</span>
</code></pre></div><p><strong>Setting up the connection to Azure for inventory collection</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">$SecurePassword = ConvertTo-SecureString $ServicePrincipalKey -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ( $ServicePrincipalId, $SecurePassword)
Login-AzAccount -ServicePrincipal -TenantId $TenantId -Credential $Credential
</code></pre></div><h2 id="inventory-collection-using-powershell">
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span><span style="color:#e6db74">&#34;gen1SourceRootPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the Gen1 source root path&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;outPutPath&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the path where the results need to be stored&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;tenantId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the tenantId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;subscriptionId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the subscriptionId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;servicePrincipleId&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleId&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;servicePrincipleSecret&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the servicePrincipleSecret Key&gt;&gt;&#34;</span>,
</span></span><span style="display:flex;"><span><span style="color:#e6db74">&#34;dataLakeStore&#34;</span> <span style="color:#960050;background-color:#1e0010">:</span> <span style="color:#e6db74">&#34;&lt;&lt;Enter the dataLakeStore name&gt;&gt;&#34;</span>
</span></span></code></pre></div><p><strong>Setting up the connection to Azure for inventory collection</strong>:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>$SecurePassword = ConvertTo-SecureString $ServicePrincipalKey -AsPlainText -Force
</span></span><span style="display:flex;"><span>$Credential = New-Object System.Management.Automation.PSCredential ( $ServicePrincipalId, $SecurePassword)
</span></span><span style="display:flex;"><span>Login-AzAccount -ServicePrincipal -TenantId $TenantId -Credential $Credential
</span></span></code></pre></div><h2 id="inventory-collection-using-powershell">
Inventory Collection using PowerShell
<a class="anchor" href="#inventory-collection-using-powershell">#</a>
</h2>
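<p>As a rough sketch (not part of the guide itself), the authenticated session above can be used with the Az.DataLakeStore cmdlets to enumerate a path; the account variable and output path below are placeholders:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"># Sketch only: list the children of the Gen1 root and export basic inventory fields
# $DataLakeStore and the CSV path are placeholders; property names may vary by module version
$items = Get-AzDataLakeStoreChildItem -Account $DataLakeStore -Path '/'
$items | Select-Object Name, Type, Length, LastWriteTime |
    Export-Csv -Path '.\InventoryRoot.csv' -NoTypeInformation
</code></pre></div>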
@ -414,10 +414,10 @@ Login-AzAccount -ServicePrincipal -TenantId $TenantId -Credential $Credential
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,15 +2,15 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
<meta name="description" content="Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
Flat namespace (FNS): A mode of organization in a storage account on Azure where objects are organized using a flat structure - aka a flat list of objects.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure Data Lake Storage Gen2 Billing FAQs" />
<meta property="og:description" content="Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
<meta property="og:description" content="Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
Flat namespace (FNS): A mode of organization in a storage account on Azure where objects are organized using a flat structure - aka a flat list of objects." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/analytics/azure-storage-data-lake-gen2-billing-faq/" /><meta property="article:section" content="docs" />
@ -21,7 +21,7 @@ Flat namespace (FNS): A mode of organization in a storage account on Azure where
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -39,7 +39,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -182,7 +182,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -329,10 +329,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,16 +2,16 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?">
<meta name="description" content="The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="The Hitchhiker&#39;s Guide to the Data Lake" />
<meta property="og:description" content="The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?" />
<meta property="og:description" content="The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/analytics/hitchhikers-guide-to-the-datalake/" /><meta property="article:section" content="docs" />
@ -21,7 +21,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -39,7 +39,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -182,7 +182,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -657,26 +657,26 @@ Hadoop has a set of file formats it supports for optimized storage and processin
<ul>
<li>
<p><strong>Frequent operations</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sql" data-lang="sql">StorageBlobLogs
<span style="color:#f92672">|</span> <span style="color:#66d9ef">where</span> TimeGenerated <span style="color:#f92672">&gt;</span> ago(<span style="color:#ae81ff">3</span>d)
<span style="color:#f92672">|</span> summarize <span style="color:#66d9ef">count</span>() <span style="color:#66d9ef">by</span> OperationName
<span style="color:#f92672">|</span> sort <span style="color:#66d9ef">by</span> count_ <span style="color:#66d9ef">desc</span>
<span style="color:#f92672">|</span> render piechart
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span>StorageBlobLogs
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> <span style="color:#66d9ef">where</span> TimeGenerated <span style="color:#f92672">&gt;</span> ago(<span style="color:#ae81ff">3</span>d)
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> summarize <span style="color:#66d9ef">count</span>() <span style="color:#66d9ef">by</span> OperationName
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> sort <span style="color:#66d9ef">by</span> count_ <span style="color:#66d9ef">desc</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> render piechart
</span></span></code></pre></div></li>
<li>
<p><strong>High latency operations</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sql" data-lang="sql">StorageBlobLogs
<span style="color:#f92672">|</span> <span style="color:#66d9ef">where</span> TimeGenerated <span style="color:#f92672">&gt;</span> ago(<span style="color:#ae81ff">3</span>d)
<span style="color:#f92672">|</span> top <span style="color:#ae81ff">10</span> <span style="color:#66d9ef">by</span> DurationMs <span style="color:#66d9ef">desc</span>
<span style="color:#f92672">|</span> project TimeGenerated, OperationName, DurationMs, ServerLatencyMs, ClientLatencyMs <span style="color:#f92672">=</span> DurationMs <span style="color:#f92672">-</span> ServerLatencyMs
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span>StorageBlobLogs
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> <span style="color:#66d9ef">where</span> TimeGenerated <span style="color:#f92672">&gt;</span> ago(<span style="color:#ae81ff">3</span>d)
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> top <span style="color:#ae81ff">10</span> <span style="color:#66d9ef">by</span> DurationMs <span style="color:#66d9ef">desc</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> project TimeGenerated, OperationName, DurationMs, ServerLatencyMs, ClientLatencyMs <span style="color:#f92672">=</span> DurationMs <span style="color:#f92672">-</span> ServerLatencyMs
</span></span></code></pre></div></li>
<li>
<p><strong>Operations causing the most errors</strong></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-sql" data-lang="sql"> StorageBlobLogs
<span style="color:#f92672">|</span> <span style="color:#66d9ef">where</span> TimeGenerated <span style="color:#f92672">&gt;</span> ago(<span style="color:#ae81ff">3</span>d) <span style="color:#66d9ef">and</span> StatusText <span style="color:#f92672">!</span><span style="color:#66d9ef">contains</span> <span style="color:#e6db74">&#34;Success&#34;</span>
<span style="color:#f92672">|</span> summarize <span style="color:#66d9ef">count</span>() <span style="color:#66d9ef">by</span> OperationName
<span style="color:#f92672">|</span> top <span style="color:#ae81ff">10</span> <span style="color:#66d9ef">by</span> count_ <span style="color:#66d9ef">desc</span>
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-sql" data-lang="sql"><span style="display:flex;"><span> StorageBlobLogs
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> <span style="color:#66d9ef">where</span> TimeGenerated <span style="color:#f92672">&gt;</span> ago(<span style="color:#ae81ff">3</span>d) <span style="color:#66d9ef">and</span> StatusText <span style="color:#f92672">!</span><span style="color:#66d9ef">contains</span> <span style="color:#e6db74">&#34;Success&#34;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> summarize <span style="color:#66d9ef">count</span>() <span style="color:#66d9ef">by</span> OperationName
</span></span><span style="display:flex;"><span><span style="color:#f92672">|</span> top <span style="color:#ae81ff">10</span> <span style="color:#66d9ef">by</span> count_ <span style="color:#66d9ef">desc</span>
</span></span></code></pre></div></li>
</ul>
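<p>These queries can also be run outside the portal. Below is a sketch using the Az.OperationalInsights PowerShell module, where the workspace ID is a placeholder:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell"># Sketch only: run the frequent-operations query above against a Log Analytics workspace
# $workspaceId is a placeholder for the customer ID (GUID) of your workspace
$query = 'StorageBlobLogs | where TimeGenerated &gt; ago(3d) | summarize count() by OperationName | sort by count_ desc'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table
</code></pre></div>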
<p>A list of all of the built-in queries for Azure Storage logs in Azure Monitor is available in the <a href="https://github.com/microsoft/AzureMonitorCommunity">Azure Monitor Community</a> on GitHub in the <a href="https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Storage%20accounts/Queries">Azure Services/Storage accounts/Queries</a> folder.</p>
<h2 id="optimizing-your-data-lake-for-better-scale-and-performance">
@ -759,10 +759,10 @@ If instead your high priority scenario is to understand the weather patterns in
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,10 +2,10 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Analytics # The Hitchhiker&rsquo;s Guide to the Data Lake - As part of helping our customers build their analytics solutions on ADLS Gen2, we have a collection of considerations and key learnings that have been effective in building highly scalable and performant data lakes on Azure. We have distilled these learnings in our guidance document Azure Data Lake Storage Gen1 to Gen2 Migration Sample Azure Data Lake Storage Gen2 Billing FAQs ">
<meta name="description" content=" Analytics # The Hitchhiker&rsquo;s Guide to the Data Lake - As part of helping our customers build their analytics solutions on ADLS Gen2, we have a collection of considerations and key learnings that have been effective in building highly scalable and performant data lakes on Azure. We have distilled these learnings in our guidance document Azure Data Lake Storage Gen1 to Gen2 Migration Sample Azure Data Lake Storage Gen2 Billing FAQs ">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Analytics" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/analytics/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -242,10 +242,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -12,8 +12,8 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/azure-storage-data-lake-gen2-billing-faq/</guid>
<description>Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
<description>Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
Flat namespace (FNS): A mode of organization in a storage account on Azure where objects are organized using a flat structure - aka a flat list of objects.</description>
</item>
@ -23,9 +23,9 @@ Flat namespace (FNS): A mode of organization in a storage account on Azure where
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/hitchhikers-guide-to-the-datalake/</guid>
<description>The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?</description>
<description>The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?</description>
</item>
</channel>

View file

@ -2,14 +2,14 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp; Security Azure Defender for Storage Immutable Blob storage ">
<meta name="description" content=" Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp; Security Azure Defender for Storage Immutable Blob storage ">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure Blob Storage data protection features" />
<meta property="og:description" content="Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp; Security Azure Defender for Storage Immutable Blob storage " />
<meta property="og:description" content=" Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp; Security Azure Defender for Storage Immutable Blob storage " />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-data-protection-features/" /><meta property="article:section" content="docs" />
@ -19,7 +19,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -37,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -180,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -247,10 +247,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,14 +2,14 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
<meta name="description" content="Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
You can find a good overview of the service here, and instructions on how to deploy it via the portal here.
Here we are going to focus on deploying Object Replication with ARM. You will see we are doing this in three steps with three templates orchestrated with some CLI code.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure Blob Storage - Setup Object Replication with ARM Templates" />
<meta property="og:description" content="Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
<meta property="og:description" content="Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
You can find a good overview of the service here, and instructions on how to deploy it via the portal here.
Here we are going to focus on deploying Object Replication with ARM. You will see we are doing this in three steps with three templates orchestrated with some CLI code." />
<meta property="og:type" content="article" />
@ -21,7 +21,7 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -39,7 +39,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -182,7 +182,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -245,14 +245,14 @@ https://github.com/alex-shpak/hugo-book
<li>In this sample we are using the same region for source and destination; this is not required</li>
<li>In this sample we are using the same durability (i.e. LRS) for source and destination; this is not required</li>
</ul>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">RG<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;resource group name&gt;&#34;</span>
LOCATION<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;region name i.e. westus&gt;&#34;</span>
SRCACCT<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;name of source storage account&gt;&#34;</span>
DESTACCT<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;name of destination storage account&gt;&#34;</span>
CONTAINER<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;name of container&gt;&#34;</span>
az group create --name $RG --location $LOCATION
</code></pre></div><h2 id="create-the-source--destination-storage-accounts">
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>RG<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;resource group name&gt;&#34;</span>
</span></span><span style="display:flex;"><span>LOCATION<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;region name i.e. westus&gt;&#34;</span>
</span></span><span style="display:flex;"><span>SRCACCT<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;name of source storage account&gt;&#34;</span>
</span></span><span style="display:flex;"><span>DESTACCT<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;name of destination storage account&gt;&#34;</span>
</span></span><span style="display:flex;"><span>CONTAINER<span style="color:#f92672">=</span><span style="color:#e6db74">&#34;&lt;name of container&gt;&#34;</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>az group create --name $RG --location $LOCATION
</span></span></code></pre></div><h2 id="create-the-source--destination-storage-accounts">
Create the source &amp; destination storage accounts
<a class="anchor" href="#create-the-source--destination-storage-accounts">#</a>
</h2>
@ -260,14 +260,14 @@ az group create --name $RG --location $LOCATION
<ul>
<li>Make sure that your accounts have Change Feed and Versioning features enabled (a quick CLI check is sketched below)</li>
</ul>
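<p>Not part of the original walkthrough, but one quick way to confirm (and, if needed, enable) those features is the Azure CLI. This is a sketch that assumes a CLI version where <code>az storage account blob-service-properties</code> is available:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"># Sketch: confirm Change Feed and Versioning are enabled on both accounts
for ACCT in $SRCACCT $DESTACCT; do
  az storage account blob-service-properties show \
    --account-name $ACCT \
    --resource-group $RG \
    --query "{versioning:isVersioningEnabled, changeFeed:changeFeed.enabled}"
done

# If either feature is off, it can be enabled in place, e.g.:
# az storage account blob-service-properties update \
#   --account-name $SRCACCT --resource-group $RG \
#   --enable-versioning true --enable-change-feed true
</code></pre></div>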
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">az deployment group create <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --name TestDeployment <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --resource-group $RG <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --template-file step01.json <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --parameters <span style="color:#e6db74">&#34;storageNameSrc=</span>$SRCACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;storageNameDest=</span>$DESTACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;containerName=</span>$CONTAINER<span style="color:#e6db74">&#34;</span>
</code></pre></div><h2 id="create-the-destination-object-replication-endpoint">
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>az deployment group create <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --name TestDeployment <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --resource-group $RG <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --template-file step01.json <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --parameters <span style="color:#e6db74">&#34;storageNameSrc=</span>$SRCACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;storageNameDest=</span>$DESTACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;containerName=</span>$CONTAINER<span style="color:#e6db74">&#34;</span>
</span></span></code></pre></div><h2 id="create-the-destination-object-replication-endpoint">
Create the destination Object Replication endpoint
<a class="anchor" href="#create-the-destination-object-replication-endpoint">#</a>
</h2>
@ -275,14 +275,14 @@ az group create --name $RG --location $LOCATION
<ul>
<li>You might need to wait a bit for the features you enabled in the last step to turn on before doing this (one way to script that wait is sketched below)</li>
</ul>
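<p>One way to script that wait instead of guessing is to poll until the feature reports enabled. A sketch, assuming the same <code>blob-service-properties</code> command as above (note the CLI may print the boolean as <code>True</code>, hence the lowercasing):</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"># Sketch: poll the source account until Versioning reports enabled
until [ "$(az storage account blob-service-properties show \
             --account-name $SRCACCT --resource-group $RG \
             --query isVersioningEnabled --output tsv | tr '[:upper:]' '[:lower:]')" = "true" ]; do
  echo "waiting for features to become active..."
  sleep 10
done
</code></pre></div>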
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">az deployment group create <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --name TestDeployment <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --resource-group $RG <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --template-file step02.json <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --parameters <span style="color:#e6db74">&#34;storageNameSrc=</span>$SRCACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;storageNameDest=</span>$DESTACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;containerName=</span>$CONTAINER<span style="color:#e6db74">&#34;</span>
</code></pre></div><h2 id="create-the-source-object-replication-endpoint">
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>az deployment group create <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --name TestDeployment <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --resource-group $RG <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --template-file step02.json <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --parameters <span style="color:#e6db74">&#34;storageNameSrc=</span>$SRCACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;storageNameDest=</span>$DESTACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;containerName=</span>$CONTAINER<span style="color:#e6db74">&#34;</span>
</span></span></code></pre></div><h2 id="create-the-source-object-replication-endpoint">
Create the source Object Replication endpoint
<a class="anchor" href="#create-the-source-object-replication-endpoint">#</a>
</h2>
@ -290,19 +290,19 @@ az group create --name $RG --location $LOCATION
<blockquote>
<p>NOTE: Here I am just pulling the first policy and rule, since I only have 1; if you have more than 1 you will need to change the <code>--query</code></p>
</blockquote>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-bash" data-lang="bash">POLICY<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>az storage account or-policy list --account-name $DESTACCT --query <span style="color:#e6db74">&#39;[0].policyId&#39;</span> --output tsv<span style="color:#66d9ef">)</span>
RULE<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>az storage account or-policy list --account-name $DESTACCT --query <span style="color:#e6db74">&#39;[0].rules[0].ruleId&#39;</span> --output tsv<span style="color:#66d9ef">)</span>
az deployment group create <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --name TestDeployment <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --resource-group $RG <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --template-file step03.json <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> --parameters <span style="color:#e6db74">&#34;storageNameSrc=</span>$SRCACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;storageNameDest=</span>$DESTACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;containerName=</span>$CONTAINER<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;policyId=</span>$POLICY<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;ruleId=</span>$RULE<span style="color:#e6db74">&#34;</span>
</code></pre></div></article>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>POLICY<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>az storage account or-policy list --account-name $DESTACCT --query <span style="color:#e6db74">&#39;[0].policyId&#39;</span> --output tsv<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>RULE<span style="color:#f92672">=</span><span style="color:#66d9ef">$(</span>az storage account or-policy list --account-name $DESTACCT --query <span style="color:#e6db74">&#39;[0].rules[0].ruleId&#39;</span> --output tsv<span style="color:#66d9ef">)</span>
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>az deployment group create <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --name TestDeployment <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --resource-group $RG <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --template-file step03.json <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> --parameters <span style="color:#e6db74">&#34;storageNameSrc=</span>$SRCACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;storageNameDest=</span>$DESTACCT<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;containerName=</span>$CONTAINER<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;policyId=</span>$POLICY<span style="color:#e6db74">&#34;</span> <span style="color:#ae81ff">\
</span></span></span><span style="display:flex;"><span><span style="color:#ae81ff"></span> <span style="color:#e6db74">&#34;ruleId=</span>$RULE<span style="color:#e6db74">&#34;</span>
</span></span></code></pre></div></article>
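<p>As a final sanity check (not part of the original three templates), you can read the replication policy back from both accounts; once the third template has deployed, the source account should report the same policy and rule IDs as the destination:</p>
<div class="highlight"><pre><code class="language-bash" data-lang="bash"># Sketch: the policy should now be visible on both accounts
az storage account or-policy show \
    --resource-group $RG \
    --account-name $SRCACCT \
    --policy-id $POLICY \
    --output table

az storage account or-policy list \
    --account-name $DESTACCT \
    --output table
</code></pre></div>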
@ -318,10 +318,10 @@ az deployment group create <span style="color:#ae81ff">\
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure Blob Storage Upload API&rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&rsquo;s. However, a good understanding of the API&rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&rsquo;s to help you compare the differences between them.">
<meta name="description" content="Azure Blob Storage Upload API&rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&rsquo;s. However, a good understanding of the API&rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&rsquo;s to help you compare the differences between them.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure Blob Storage Upload API&#39;s" />
<meta property="og:description" content="Azure Blob Storage Upload API&rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&rsquo;s. However, a good understanding of the API&rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&rsquo;s to help you compare the differences between them." />
<meta property="og:description" content="Azure Blob Storage Upload API&rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&rsquo;s. However, a good understanding of the API&rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&rsquo;s to help you compare the differences between them." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/" /><meta property="article:section" content="docs" />
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -330,10 +330,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,16 +2,16 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don&rsquo;t read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account.">
<meta name="description" content="Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don&rsquo;t read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer" />
<meta property="og:description" content="Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don&rsquo;t read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account." />
<meta property="og:description" content="Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don&rsquo;t read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-storage-classic-logs-to-data-explorer/" /><meta property="article:section" content="docs" />
@ -21,7 +21,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -39,7 +39,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -182,7 +182,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -256,38 +256,38 @@ https://github.com/alex-shpak/hugo-book
</li>
<li>
<p>You can now create a table to store the logs; this is the script that I used.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-text" data-lang="text">.create table storagelogs (
VersionNumber: string,
RequestStartTime: datetime,
OperationType: string,
RequestStatus: string,
HttpStatusCode: string,
EndToEndLatencyInMS: long,
ServerLatencyInMs: long,
AuthenticationType: string,
RequesterAcountName: string,
OwnerAccountName: string,
ServiceType: string,
RequestUrl: string,
RequestedObjectKey: string,
RequestIdHeader: guid,
OperationCount: int,
RequesterIpAddress: string,
RequestVersionHeader: string,
RequestHeaderSize: long,
RequestPacketSize: long,
ResponseHeaderSize: long,
ResponsePacketSize: long,
RequestContentLength: long,
RequestMd5: string,
ServerMd5: string,
EtagIdentifier: string,
LastModifiedTime: datetime,
ConditionsUsed: string,
UserAgentHeader: string,
ReferrerHeader: string,
LogSource: string)
</code></pre></div><p>See log format details <a href="https://docs.microsoft.com/rest/api/storageservices/storage-analytics-log-format">here</a></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-text" data-lang="text"><span style="display:flex;"><span>.create table storagelogs (
</span></span><span style="display:flex;"><span> VersionNumber: string,
</span></span><span style="display:flex;"><span> RequestStartTime: datetime,
</span></span><span style="display:flex;"><span> OperationType: string,
</span></span><span style="display:flex;"><span> RequestStatus: string,
</span></span><span style="display:flex;"><span> HttpStatusCode: string,
</span></span><span style="display:flex;"><span> EndToEndLatencyInMS: long,
</span></span><span style="display:flex;"><span> ServerLatencyInMs: long,
</span></span><span style="display:flex;"><span> AuthenticationType: string,
</span></span><span style="display:flex;"><span> RequesterAcountName: string,
</span></span><span style="display:flex;"><span> OwnerAccountName: string,
</span></span><span style="display:flex;"><span> ServiceType: string,
</span></span><span style="display:flex;"><span> RequestUrl: string,
</span></span><span style="display:flex;"><span> RequestedObjectKey: string,
</span></span><span style="display:flex;"><span> RequestIdHeader: guid,
</span></span><span style="display:flex;"><span> OperationCount: int,
</span></span><span style="display:flex;"><span> RequesterIpAddress: string,
</span></span><span style="display:flex;"><span> RequestVersionHeader: string,
</span></span><span style="display:flex;"><span> RequestHeaderSize: long,
</span></span><span style="display:flex;"><span> RequestPacketSize: long,
</span></span><span style="display:flex;"><span> ResponseHeaderSize: long,
</span></span><span style="display:flex;"><span> ResponsePacketSize: long,
</span></span><span style="display:flex;"><span> RequestContentLength: long,
</span></span><span style="display:flex;"><span> RequestMd5: string,
</span></span><span style="display:flex;"><span> ServerMd5: string,
</span></span><span style="display:flex;"><span> EtagIdentifier: string,
</span></span><span style="display:flex;"><span> LastModifiedTime: datetime,
</span></span><span style="display:flex;"><span> ConditionsUsed: string,
</span></span><span style="display:flex;"><span> UserAgentHeader: string,
</span></span><span style="display:flex;"><span> ReferrerHeader: string,
</span></span><span style="display:flex;"><span> LogSource: string)
</span></span></code></pre></div><p>See log format details <a href="https://docs.microsoft.com/rest/api/storageservices/storage-analytics-log-format">here</a></p>
<p><img src="pic01.png" alt="Pic 01" /></p>
</li>
<li>
@ -362,10 +362,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,14 +2,14 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com ">
<meta name="description" content=" NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com ">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="NFS 3.0 support for Azure Blob Storage" />
<meta property="og:description" content="NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com " />
<meta property="og:description" content=" NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com " />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/basics/nfs-3-support-for-azure-blob-storage/" /><meta property="article:section" content="docs" />
@ -19,7 +19,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -37,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -180,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -248,10 +248,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,14 +2,14 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference ">
<meta name="description" content=" Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference ">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Optimize your costs with Azure Blob Storage" />
<meta property="og:description" content="Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference " />
<meta property="og:description" content=" Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference " />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/basics/optimize-your-costs-with-azure-blob-storage/" /><meta property="article:section" content="docs" />
@ -19,7 +19,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -37,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -180,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -248,10 +248,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,14 +2,14 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data.">
<meta name="description" content="Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Managing concurrent uploads in Azure blob storage with blob versioning" />
<meta property="og:description" content="Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data." />
<meta property="og:description" content="Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/code-samples/concurrent-uploads-with-versioning/" /><meta property="article:section" content="docs" />
@ -19,7 +19,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -37,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -180,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -245,94 +245,94 @@ https://github.com/alex-shpak/hugo-book
<ol>
<li>
<p>Client 1 (BlockId QUFB) and Client 2 (BlockId QkJC) upload <code>awesomestmemeever.gif</code> simultaneously.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">PUT https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=block&amp;blockid=QkJC&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">PUT https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=block&amp;blockid=QUFB&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>PUT https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=block&amp;blockid=QkJC&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>PUT https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=block&amp;blockid=QUFB&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div></li>
<li>
<p>Client 2&rsquo;s upload finishes with a successful call to Put Block List before Client 1. <code>awesomestmemeever.gif</code> is saved with VersionId 2021-01-08T20:38:09.3842765Z. The committed block list can be retrieved.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">GET https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?comp=blocklist&amp;blocklisttype=all&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX <span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-xml" data-lang="xml"><span style="color:#75715e">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;</span>
<span style="color:#f92672">&lt;BlockList&gt;</span>
<span style="color:#f92672">&lt;CommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;Block&gt;</span>
<span style="color:#f92672">&lt;Name&gt;</span>QkJC<span style="color:#f92672">&lt;/Name&gt;</span>
<span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
<span style="color:#f92672">&lt;/Block&gt;</span>
<span style="color:#f92672">&lt;/CommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;UncommittedBlocks</span> <span style="color:#f92672">/&gt;</span>
<span style="color:#f92672">&lt;/BlockList&gt;</span>
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>GET https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=blocklist&amp;blocklisttype=all&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX <span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span><span style="color:#75715e">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&lt;BlockList&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;CommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Name&gt;</span>QkJC<span style="color:#f92672">&lt;/Name&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/CommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;UncommittedBlocks</span> <span style="color:#f92672">/&gt;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&lt;/BlockList&gt;</span>
</span></span></code></pre></div></li>
<li>
<p>Client 1&rsquo;s upload finishes but cannot be committed, as the block list was purged when Client 2 saved. Client 1 will receive an <code>HTTP 400 InvalidBlockList</code> exception. Client 1 issues a <code>HEAD</code> request to see if the file exists, as it may have been uploaded by another client.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">HEAD https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><p>If the blob has been successfully committed by another client, Client 1 can disregard the error; or, if the blob was not present for any other reason, the upload can be repeated.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>HEAD https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><p>If the blob has been successfully committed by another client, Client 1 can disregard the error; or, if the blob was not present for any other reason, the upload can be repeated.</p>
</li>
<li>
<p>Client 3 attempts to upload the same file but experiences a transient network error, leaving uncommitted blocks as Put Block List is not called successfully due to missing blocks in the uncommitted block list. The uncommitted blocks are retained in the current version (last successful upload from Client 2).</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">GET https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=blocklist&amp;blocklisttype=all&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX <span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-xml" data-lang="xml"><span style="color:#75715e">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;</span>
<span style="color:#f92672">&lt;BlockList&gt;</span>
<span style="color:#f92672">&lt;CommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;Block&gt;</span>
<span style="color:#f92672">&lt;Name&gt;</span>QkJC<span style="color:#f92672">&lt;/Name&gt;</span>
<span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
<span style="color:#f92672">&lt;/Block&gt;</span>
<span style="color:#f92672">&lt;/CommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;UncommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;Block&gt;</span>
<span style="color:#f92672">&lt;Name&gt;</span>Q0ND<span style="color:#f92672">&lt;/Name&gt;</span>
<span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
<span style="color:#f92672">&lt;/Block&gt;</span>
<span style="color:#f92672">&lt;/UncommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;/BlockList&gt;</span>
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>GET https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=blocklist&amp;blocklisttype=all&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX <span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span><span style="color:#75715e">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&lt;BlockList&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;CommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Name&gt;</span>QkJC<span style="color:#f92672">&lt;/Name&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/CommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;UncommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Name&gt;</span>Q0ND<span style="color:#f92672">&lt;/Name&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/UncommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&lt;/BlockList&gt;</span>
</span></span></code></pre></div></li>
<li>
<p>Client 4 uploads the same file successfully. The uncommitted blocks from Client 3&rsquo;s request are purged, and a new version is created, VersionId 2021-01-08T20:54:36.7150246Z.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">GET https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=blocklist&amp;blocklisttype=all&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX <span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-xml" data-lang="xml"><span style="color:#75715e">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;</span>
<span style="color:#f92672">&lt;BlockList&gt;</span>
<span style="color:#f92672">&lt;CommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;Block&gt;</span>
<span style="color:#f92672">&lt;Name&gt;</span>RERE<span style="color:#f92672">&lt;/Name&gt;</span>
<span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
<span style="color:#f92672">&lt;/Block&gt;</span>
<span style="color:#f92672">&lt;/CommittedBlocks&gt;</span>
<span style="color:#f92672">&lt;UncommittedBlocks</span> <span style="color:#f92672">/&gt;</span>
<span style="color:#f92672">&lt;/BlockList&gt;</span>
</code></pre></div></li>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>GET https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?comp=blocklist&amp;blocklisttype=all&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX <span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-xml" data-lang="xml"><span style="display:flex;"><span><span style="color:#75715e">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&lt;BlockList&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;CommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Name&gt;</span>RERE<span style="color:#f92672">&lt;/Name&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;Size&gt;</span>2495317<span style="color:#f92672">&lt;/Size&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/Block&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;/CommittedBlocks&gt;</span>
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&lt;UncommittedBlocks</span> <span style="color:#f92672">/&gt;</span>
</span></span><span style="display:flex;"><span><span style="color:#f92672">&lt;/BlockList&gt;</span>
</span></span></code></pre></div></li>
<li>
<p>After one day, the versions from the previous day are deleted and only the base blob remains.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">HEAD https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><p><img src="blobproperties.png" alt="A screenshot of blob properties from the Azure portal." /></p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>HEAD https://prosewarememestorage.blob.core.windows.netbackups/awesomestmemeever.gif?sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX<span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><p><img src="blobproperties.png" alt="A screenshot of blob properties from the Azure portal." /></p>
</li>
</ol>
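<p>The following is a minimal Python sketch of the client-side flow in the steps above, assuming the <code>azure-storage-blob</code> SDK (12.4 or later); the connection string, chunk size, and file path are illustrative placeholders. It stages and commits blocks, handles the <code>InvalidBlockList</code> error the way Client 1 does, and inspects committed and uncommitted blocks as in Client 3&rsquo;s case.</p>
<div class="highlight"><pre><code class="language-python">import base64
import os
import uuid

from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobBlock, BlobClient

CHUNK_SIZE = 4 * 1024 * 1024  # assumed 4 MiB blocks; any size within service limits works

blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],  # placeholder credential
    container_name="backups",
    blob_name="awesomestmemeever.gif",
)

def upload_blocks(path: str) -> None:
    """Stage each chunk with Put Block, then commit them all with Put Block List."""
    block_ids = []
    with open(path, "rb") as source:
        while chunk := source.read(CHUNK_SIZE):
            block_id = base64.b64encode(uuid.uuid4().hex.encode()).decode()
            blob.stage_block(block_id=block_id, data=chunk)  # Put Block
            block_ids.append(block_id)
    blob.commit_block_list([BlobBlock(block_id=b) for b in block_ids])  # Put Block List

try:
    upload_blocks("awesomestmemeever.gif")
except HttpResponseError as err:
    # Client 1's case: another client committed first, our staged blocks were
    # purged, and Put Block List failed with InvalidBlockList.
    if err.error_code == "InvalidBlockList":
        if blob.exists():  # equivalent of the HEAD request in the step above
            pass  # the file was committed by another client; disregard the error
        else:
            upload_blocks("awesomestmemeever.gif")  # blob absent for another reason; retry
    else:
        raise

# Client 3's case: a failed upload leaves uncommitted blocks behind, visible
# via Get Block List (the blocklisttype=all requests above).
committed, uncommitted = blob.get_block_list("all")
print([b.id for b in committed], [b.id for b in uncommitted])
</code></pre></div>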
<p>Note that in this approach there is no need for the <code>If-None-Match:*</code> conditional header. Clients can simultaneously upload to the same blob, and a new version is created for each successful call to Put Block List or Put Blob. For Get Blob requests, if a <code>versionid</code> is not specified in the parameters, the latest version of the blob is retrieved; alternatively, the calling application can provide a valid <code>versionid</code> to retrieve a previous version before it is deleted by the lifecycle management rule. If needed, the current versions can be enumerated using List Blobs.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-rest" data-lang="rest">GET https://prosewarememestorage.blob.core.windows.net/test?restype=container&amp;comp=list&amp;include=versions&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX&amp;prefix=awesomestmemeever.gif <span style="color:#960050;background-color:#1e0010">
</span></code></pre></div><p>The following is a sample lifecycle management rule that is filtered to blob versions only and deletes versions older than one day:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-json" data-lang="json">{
<span style="color:#f92672">&#34;rules&#34;</span>: [
{
<span style="color:#f92672">&#34;enabled&#34;</span>: <span style="color:#66d9ef">true</span>,
<span style="color:#f92672">&#34;name&#34;</span>: <span style="color:#e6db74">&#34;DeleteVersionsOlderThan1Day&#34;</span>,
<span style="color:#f92672">&#34;type&#34;</span>: <span style="color:#e6db74">&#34;Lifecycle&#34;</span>,
<span style="color:#f92672">&#34;definition&#34;</span>: {
<span style="color:#f92672">&#34;actions&#34;</span>: {
<span style="color:#f92672">&#34;version&#34;</span>: {
<span style="color:#f92672">&#34;delete&#34;</span>: {
<span style="color:#f92672">&#34;daysAfterCreationGreaterThan&#34;</span>: <span style="color:#ae81ff">1</span>
}
}
},
<span style="color:#f92672">&#34;filters&#34;</span>: {
<span style="color:#f92672">&#34;blobTypes&#34;</span>: [
<span style="color:#e6db74">&#34;blockBlob&#34;</span>
]
}
}
}
]
}
</code></pre></div><p>In conclusion, blob versioning allows both concurrent uploads from multiple clients and automated deletion of data that is no longer required, while retaining the base blob. Only committed data is retained, and there is no need for conditional headers.</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-rest" data-lang="rest"><span style="display:flex;"><span>GET https://prosewarememestorage.blob.core.windows.net/test?restype=container&amp;comp=list&amp;include=versions&amp;sv=2019-12-12&amp;ss=bfqt&amp;srt=sco&amp;sp=rwdlacuptfx&amp;se=2021-01-16T04:18:47Z&amp;st=2021-01-08T20:18:47Z&amp;spr=https&amp;sig=XXXXXXXXXXXX&amp;prefix=awesomestmemeever.gif <span style="color:#960050;background-color:#1e0010">
</span></span></span></code></pre></div><p>The following is a sample lifecycle management rule that is filtered to blob versions only and deletes versions older than one day:</p>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-json" data-lang="json"><span style="display:flex;"><span>{
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;rules&#34;</span>: [
</span></span><span style="display:flex;"><span> {
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;enabled&#34;</span>: <span style="color:#66d9ef">true</span>,
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;name&#34;</span>: <span style="color:#e6db74">&#34;DeleteVersionsOlderThan1Day&#34;</span>,
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;type&#34;</span>: <span style="color:#e6db74">&#34;Lifecycle&#34;</span>,
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;definition&#34;</span>: {
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;actions&#34;</span>: {
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;version&#34;</span>: {
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;delete&#34;</span>: {
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;daysAfterCreationGreaterThan&#34;</span>: <span style="color:#ae81ff">1</span>
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> },
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;filters&#34;</span>: {
</span></span><span style="display:flex;"><span> <span style="color:#f92672">&#34;blobTypes&#34;</span>: [
</span></span><span style="display:flex;"><span> <span style="color:#e6db74">&#34;blockBlob&#34;</span>
</span></span><span style="display:flex;"><span> ]
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> }
</span></span><span style="display:flex;"><span> ]
</span></span><span style="display:flex;"><span>}
</span></span></code></pre></div><p>In conclusion, blob versioning allows both concurrent uploads from multiple clients and automated deletion of data that is no longer required, while retaining the base blob. Only committed data is retained, and there is no need for conditional headers.</p>
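<p>As a concrete sketch of the retrieval flow described above, the equivalent of the List Blobs and versioned Get Blob calls in the Python SDK (assuming versioning is enabled on the account; the connection string and names are placeholders from this walkthrough) is:</p>
<div class="highlight"><pre><code class="language-python">import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # placeholder credential
)
container = service.get_container_client("backups")

# List Blobs with include=versions, filtered by prefix as in the REST call above.
for props in container.list_blobs(name_starts_with="awesomestmemeever.gif",
                                  include=["versions"]):
    print(props.name, props.version_id, props.is_current_version)

# Get Blob with an explicit versionid: download a previous version before the
# lifecycle management rule deletes it.
blob = container.get_blob_client("awesomestmemeever.gif")
data = blob.download_blob(version_id="2021-01-08T20:54:36.7150246Z").readall()
</code></pre></div>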
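<p>A hedged sketch of applying the sample policy with the management-plane SDK (<code>azure-mgmt-storage</code>); the subscription and resource group names are placeholders, and the rule body mirrors the JSON above using the SDK&rsquo;s snake_case field names:</p>
<div class="highlight"><pre><code class="language-python">from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "subscription-id-placeholder")

client.management_policies.create_or_update(
    "resource-group-placeholder",
    "prosewarememestorage",
    "default",  # the management policy name is always "default"
    {
        "policy": {
            "rules": [
                {
                    "enabled": True,
                    "name": "DeleteVersionsOlderThan1Day",
                    "type": "Lifecycle",
                    "definition": {
                        "actions": {
                            "version": {"delete": {"days_after_creation_greater_than": 1}}
                        },
                        "filters": {"blob_types": ["blockBlob"]},
                    },
                }
            ]
        }
    },
)
</code></pre></div>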
<h2 id="references">
References
<a class="anchor" href="#references">#</a>
@@ -366,10 +366,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.">
<meta name="description" content="Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure blob storage data management and retention" />
<meta property="og:description" content="Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it." />
<meta property="og:description" content="Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/code-samples/data-retention/" /><meta property="article:section" content="docs" />
@@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -298,10 +298,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.">
<meta name="description" content="Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Estimating Pricing for Azure Block Blob Deployments" />
<meta property="og:description" content="Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate." />
<meta property="og:description" content="Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/" /><meta property="article:section" content="docs" />
@@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -302,10 +302,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
@@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/application-and-user-data/code-samples/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -233,10 +233,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -12,7 +12,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/data-retention/</guid>
<description>Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.</description>
<description>Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.</description>
</item>
<item>
@@ -21,7 +21,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/supported-character-scrubber/</guid>
<description>Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
<description>Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turnkey solution to discovering unsupported characters in your file names with a simple CSV input.</description>
</item>
@@ -31,7 +31,7 @@ The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turn
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/</guid>
<description>Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.</description>
<description>Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.</description>
</item>
<item>
@@ -40,8 +40,8 @@ The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turn
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/concurrent-uploads-with-versioning/</guid>
<description>Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data.</description>
<description>Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data.</description>
</item>
</channel>

View file

@@ -2,13 +2,13 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
<meta name="description" content="Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turnkey solution to discovering unsupported characters in your file names with a simple CSV input.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Azure Storage Supported Character Scrubber" />
<meta property="og:description" content="Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
<meta property="og:description" content="Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turnkey solution to discovering unsupported characters in your file names with a simple CSV input." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/application-and-user-data/code-samples/supported-character-scrubber/" /><meta property="article:section" content="docs" />
@@ -19,7 +19,7 @@ The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turn
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@@ -37,7 +37,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -180,7 +180,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -244,8 +244,8 @@ https://github.com/alex-shpak/hugo-book
Usage
<a class="anchor" href="#usage">#</a>
</h2>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">.\AzureStorageSupportedCharacterScrubber.ps1 -CsvInputPath .\SourceFileNames.csv -RenameItems
</code></pre></div><h2 id="sample-input">
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>.\AzureStorageSupportedCharacterScrubber.ps1 -CsvInputPath .\SourceFileNames.csv -RenameItems
</span></span></code></pre></div><h2 id="sample-input">
Sample input
<a class="anchor" href="#sample-input">#</a>
</h2>
@@ -269,41 +269,41 @@ https://github.com/alex-shpak/hugo-book
Shell
<a class="anchor" href="#shell">#</a>
</h3>
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4"><code class="language-powershell" data-lang="powershell">ReplacementString not provided, using <span style="color:#66d9ef">default</span> as <span style="color:#e6db74">&#39;&#39;</span>
Testing character B (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 66)
Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
Testing character d (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 100)
Testing character C (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 67)
Testing character h (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 104)
Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
Testing character Ä (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 196)
Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 141)
Unsupported char code point<span style="color:#960050;background-color:#1e0010">:</span> 141
Testing character r (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 114)
Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
Testing character c (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 99)
Testing character t (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 116)
Testing character e (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 101)
Testing character r (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 114)
Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 32)
Testing character i (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 105)
Testing character n (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 110)
Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 32)
Testing character t (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 116)
Testing character h (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 104)
Testing character e (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 101)
Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 32)
Testing character n (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 110)
Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
Testing character m (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 109)
Testing character e (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 101)
Testing character . (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 46)
Testing character p (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 112)
Testing character d (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 100)
Testing character f (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 102)
Source name<span style="color:#960050;background-color:#1e0010">:</span> BadChaÄracter <span style="color:#66d9ef">in</span> the name.pdf
Destination name<span style="color:#960050;background-color:#1e0010">:</span> BadCharacter <span style="color:#66d9ef">in</span> the name.pdf
</code></pre></div><h3 id="csv">
<div class="highlight"><pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-powershell" data-lang="powershell"><span style="display:flex;"><span>ReplacementString not provided, using <span style="color:#66d9ef">default</span> as <span style="color:#e6db74">&#39;&#39;</span>
</span></span><span style="display:flex;"><span>Testing character B (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 66)
</span></span><span style="display:flex;"><span>Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
</span></span><span style="display:flex;"><span>Testing character d (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 100)
</span></span><span style="display:flex;"><span>Testing character C (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 67)
</span></span><span style="display:flex;"><span>Testing character h (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 104)
</span></span><span style="display:flex;"><span>Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
</span></span><span style="display:flex;"><span>Testing character Ä (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 196)
</span></span><span style="display:flex;"><span>Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 141)
</span></span><span style="display:flex;"><span>Unsupported char code point<span style="color:#960050;background-color:#1e0010">:</span> 141
</span></span><span style="display:flex;"><span>Testing character r (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 114)
</span></span><span style="display:flex;"><span>Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
</span></span><span style="display:flex;"><span>Testing character c (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 99)
</span></span><span style="display:flex;"><span>Testing character t (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 116)
</span></span><span style="display:flex;"><span>Testing character e (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 101)
</span></span><span style="display:flex;"><span>Testing character r (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 114)
</span></span><span style="display:flex;"><span>Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 32)
</span></span><span style="display:flex;"><span>Testing character i (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 105)
</span></span><span style="display:flex;"><span>Testing character n (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 110)
</span></span><span style="display:flex;"><span>Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 32)
</span></span><span style="display:flex;"><span>Testing character t (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 116)
</span></span><span style="display:flex;"><span>Testing character h (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 104)
</span></span><span style="display:flex;"><span>Testing character e (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 101)
</span></span><span style="display:flex;"><span>Testing character (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 32)
</span></span><span style="display:flex;"><span>Testing character n (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 110)
</span></span><span style="display:flex;"><span>Testing character a (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 97)
</span></span><span style="display:flex;"><span>Testing character m (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 109)
</span></span><span style="display:flex;"><span>Testing character e (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 101)
</span></span><span style="display:flex;"><span>Testing character . (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 46)
</span></span><span style="display:flex;"><span>Testing character p (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 112)
</span></span><span style="display:flex;"><span>Testing character d (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 100)
</span></span><span style="display:flex;"><span>Testing character f (CodePoint<span style="color:#960050;background-color:#1e0010">:</span> 102)
</span></span><span style="display:flex;"><span>Source name<span style="color:#960050;background-color:#1e0010">:</span> BadChaÄracter <span style="color:#66d9ef">in</span> the name.pdf
</span></span><span style="display:flex;"><span>Destination name<span style="color:#960050;background-color:#1e0010">:</span> BadCharacter <span style="color:#66d9ef">in</span> the name.pdf
</span></span></code></pre></div><h3 id="csv">
CSV
<a class="anchor" href="#csv">#</a>
</h3>
@@ -438,10 +438,10 @@ Destination name<span style="color:#960050;background-color:#1e0010">:</span> Ba
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Application and User Data # Case Studies # Coming soon . . .
Code Samples # Managing concurrent uploads in Azure blob storage with blob versioning Azure blob storage data management and retention Estimating Pricing for Azure Block Blob Deployments Azure Storage Supported Character Scrubber Performance Testing # Coming soon . . .
Basics # Azure Blob Storage data protection features Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage.">
<meta name="description" content="Application and User Data # Case Studies # Coming soon . . .
Code Samples # Managing concurrent uploads in Azure blob storage with blob versioning Azure blob storage data management and retention Estimating Pricing for Azure Block Blob Deployments Azure Storage Supported Character Scrubber Performance Testing # Coming soon . . .
Basics # Azure Blob Storage data protection features Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Application and User Data" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@@ -17,7 +17,7 @@ Basics # Azure Blob Storage data protection features Enterprises, partners, an
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/application-and-user-data/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@@ -36,7 +36,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -179,7 +179,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -278,10 +278,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -12,7 +12,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-object-replication-arm/</guid>
<description>Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
<description>Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
You can find a good overview of the service here, and instructions on how to deploy it via the portal here.
Here we are going to focus on deploying Object Replication with ARM. You will see we are doing this in 3 steps with three templates orchestrated with some CLI code.</description>
</item>
@@ -23,8 +23,8 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-data-protection-features/</guid>
<description>Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp;amp; Security Azure Defender for Storage Immutable Blob storage </description>
<description> Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp;amp; Security Azure Defender for Storage Immutable Blob storage </description>
</item>
<item>
@@ -33,7 +33,7 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/</guid>
<description>Azure Blob Storage Upload API&amp;rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&amp;rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&amp;rsquo;s. However, a good understanding of the API&amp;rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&amp;rsquo;s to help you compare the differences between them.</description>
<description>Azure Blob Storage Upload API&amp;rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&amp;rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&amp;rsquo;s. However, a good understanding of the API&amp;rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&amp;rsquo;s to help you compare the differences between them.</description>
</item>
<item>
@@ -42,9 +42,9 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-storage-classic-logs-to-data-explorer/</guid>
<description>Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don&amp;rsquo;t read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account.</description>
<description>Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don't read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account.</description>
</item>
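As a rough sketch of the first step of that workflow (pulling the classic log files out of the $logs container before parsing and loading them into Azure Data Explorer), assuming the Az.Storage module; the account name and destination folder are hypothetical.

# Download every classic analytics log blob from the $logs container (hypothetical names)
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -UseConnectedAccount
Get-AzStorageBlob -Container '$logs' -Context $ctx |
    Get-AzStorageBlobContent -Destination "C:\logs\"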
<item>
@ -53,8 +53,8 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/nfs-3-support-for-azure-blob-storage/</guid>
<description>NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on-premises datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com </description>
<description> NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on-premises datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com </description>
</item>
<item>
@ -63,8 +63,8 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/optimize-your-costs-with-azure-blob-storage/</guid>
<description>Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference </description>
<description> Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference </description>
</item>
</channel>

Binary file not shown.

View file

@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Partner Documentation for Commvault">
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -239,10 +239,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Backup and Archive # Backup and Archive Partners
Commvault Rubrik Veeam Veritas Sample Scripts
Blob Tiering - Creates action and filter objects to apply blob tiering to block blobs matching certain criteria. Create Storage Account - Creates a brand new resource group and storage account, based upon input variables. ">
<meta name="description" content=" Backup and Archive # Backup and Archive Partners
Commvault Rubrik Veeam Veritas Sample Scripts
Blob Tiering - Creates action and filter objects to apply blob tiering to block blobs matching certain criteria. Create Storage Account - Creates a brand new resource group and storage account, based upon input variables. ">
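For the "Blob Tiering" sample called out in this description, the action/filter pattern can be sketched with the Az.Storage management-policy cmdlets. This is a hedged illustration, not the sample's actual script; the prefix, the 30-day threshold, and the resource names are assumptions.

# Action object: tier block blobs to cool 30 days after last modification (illustrative threshold)
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool -DaysAfterModificationGreaterThan 30
# Filter object: restrict the rule to block blobs under a hypothetical prefix
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "logs/" -BlobType blockBlob
$rule = New-AzStorageAccountManagementPolicyRule -Name "tier-to-cool" -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "demo-rg" -StorageAccountName "mystorageacct" -Rule $rule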
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Backup and Archive" />
<meta property="og:description" content="" />
<meta property="og:type" content="website" />
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/backup-and-archive/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@ -36,7 +36,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -179,7 +179,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -251,10 +251,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -12,7 +12,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/commvault/</guid>
<description>Microsoft Partner Documentation for Commvault for Azure # https://documentation.commvault.com/commvault/v11/article?p=31252.htm</description>
<description>Microsoft Partner Documentation for Commvault for Azure # https://documentation.commvault.com/commvault/v11/article?p=31252.htm</description>
</item>
<item>
@ -21,8 +21,8 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/veritas/</guid>
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
Storage Cool
Tier Archive
Tier WORM
@ -32,7 +32,8 @@ on-
premises Backup
Azure VM&amp;rsquo;s Backup
Azure Files Backup
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.</description>
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.
Link 1 Link 2 Links to relevant documentation # Information related to the partner docs goes here.</description>
</item>
<item>
@ -41,8 +42,8 @@ Azure Blob X X X X X X X X Links to Marketplace Offerings # Information
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/rubrik/</guid>
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
Storage Cool
Tier Archive
Tier WORM
@ -52,7 +53,8 @@ on-
premises Backup
Azure VM&amp;rsquo;s Backup
Azure Files Backup
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.</description>
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.
Link 1 Link 2 Links to relevant documentation # Information related to the partner docs goes here.</description>
</item>
<item>
@ -61,7 +63,7 @@ Azure Blob X X X X X X X X Links to Marketplace Offerings # Information
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/veeam/</guid>
<description>Links to relevant documentation # https://www.veeam.com/documentation-guides-datasheets.html </description>
<description> Links to relevant documentation # https://www.veeam.com/documentation-guides-datasheets.html </description>
</item>
</channel>

View file

@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Description Here">
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -341,10 +341,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Description Here">
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -245,10 +245,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Veritas Backup Doc - Backup Exec">
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -341,10 +341,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="HPC IoT and AI # Coming Soon. . .">
<meta name="description" content="HPC IoT and AI # Coming Soon. . .">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="HPC IoT and AI" />
<meta property="og:description" content="HPC IoT and AI # Coming Soon. . ." />
<meta property="og:description" content="HPC IoT and AI # Coming Soon. . ." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/hpc-iot-and-ai/" /><meta property="article:section" content="docs" />
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -239,10 +239,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/docs/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -233,10 +233,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -12,7 +12,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/hpc-iot-and-ai/</guid>
<description>HPC IoT and AI # Coming Soon. . .</description>
<description>HPC IoT and AI # Coming Soon. . .</description>
</item>
<item>
@ -21,7 +21,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/storage-partners/</guid>
<description>Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp;amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data </description>
<description> Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp;amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data </description>
</item>
<item>
@ -30,7 +30,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/tools-and-utilities/</guid>
<description>Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.</description>
<description>Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.</description>
</item>
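For reference, a typical AzCopy invocation of the kind this description refers to, run from PowerShell; the local path, account, container, and SAS token are hypothetical placeholders.

# Recursively copy a local folder into a blob container; <SAS> stands in for a real SAS token
azcopy copy "C:\local\data" "https://mystorageacct.blob.core.windows.net/uploads?<SAS>" --recursive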
<item>
@ -39,7 +39,7 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/whats-new/</guid>
<description>What&amp;rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&amp;rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) </description>
<description> What&amp;rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&amp;rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) </description>
</item>
</channel>

View file

@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data ">
<meta name="description" content=" Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data ">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Storage Partners" />
<meta property="og:description" content="Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data " />
<meta property="og:description" content=" Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data " />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/storage-partners/" /><meta property="article:section" content="docs" />
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -374,10 +374,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.">
<meta name="description" content="Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="Tools and Utilities" />
<meta property="og:description" content="Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage." />
<meta property="og:description" content="Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/tools-and-utilities/" /><meta property="article:section" content="docs" />
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -243,10 +243,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -2,12 +2,12 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="What&rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) ">
<meta name="description" content=" What&rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) ">
<meta name="theme-color" content="#FFFFFF"><meta property="og:title" content="What&#39;s New" />
<meta property="og:description" content="What&rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) " />
<meta property="og:description" content=" What&rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) " />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://azure.github.io/Storage/docs/whats-new/" /><meta property="article:section" content="docs" />
@ -17,7 +17,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<!--
Made with Book Theme
https://github.com/alex-shpak/hugo-book
@ -35,7 +35,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@ -178,7 +178,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@ -256,10 +256,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@ -0,0 +1,574 @@
"use strict";(function(){const t={cache:!0};t.doc={id:"id",field:["title","content"],store:["title","href","section"]};const e=FlexSearch.create("balance",t);window.bookSearchIndex=e,e.add({id:0,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/utilities/ageing-analysis/",title:"Ageing Analysis Guide: A quick start template",section:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",content:` Ageing Analysis Guide: A quick start template # Overview # The inventory Ageing analysis for any application determines the storage duration of a file, folder or data inside that. The main purpose is to find out which files, folders stay in inventory for a long time or are perhaps becoming obsolete. This also identifies the active and inactive folders in the applications from Gen1 Data Lake using directory details such as recent child modification date and size. The purpose of this document is to provide a manual in the form of step by step guide for the Ageing analysis which can be done before the actual data migration during the Assessment phase. As such it provides the directions, references, sample code examples of the PowerShell functions and python code snippets been used.
This guide covers the following tasks:
Inventory collection of application folders; an insight into ageing analysis using the inventory list; creation of the ageing analysis as a single pivot sheet using a Python snippet. Considerations for using the ageing analysis approach:
Planning cutover from Gen1 to Gen2 for all workloads at the same time. Determining hot and cold tiers of applications. Refer here for more details. Ideal when all applications are to be migrated from Gen1 (Blob Storage), and also for critical applications where the migration needs to be managed. Purging can be done as part of cost reduction. Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Key Vault. Required keys and secrets to be configured here.
Service principal with read, write and execute permission to the resource group, key vault, data lake store Gen1 and data lake store Gen2. To learn more, see create service principal account and to provide SPN access to Gen1 refer to SPN access to Gen1
Windows PowerShell ISE.
Python IDE.
Note: Run as administrator
# Run the below to enable running PS1 files
Set-ExecutionPolicy Unrestricted
# Check for the below modules in PowerShell. If any are missing, install them one by one:
Install-Module Az.Accounts -AllowClobber -Force
Install-Module Az.DataFactory -AllowClobber -Force
Install-Module Az.KeyVault -AllowClobber -Force
Install-Module Az.DataLakeStore -AllowClobber -Force
Install-Module PowerShellGet -Repository PSGallery -Force
# Close the PowerShell ISE and reopen as administrator, then run the below module:
Install-Module Az.Storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force
Limitations # This version of the code has the below limitations:
Supports only Gen1 locations. Inventory code is developed and supported only in Windows PowerShell ISE. Pivot code is developed and supported only in Python. Manual intervention is required to analyze application folder patterns. Ageing Analysis Setup # This section will help you with the steps needed to set up the framework and get started with the ageing analysis process.
Get Started # Download the migration source code from here to local machine:
Note: To avoid security warning error \u0026ndash;\u0026gt; Right click on the zip folder downloaded \u0026ndash;\u0026gt; Go to \u0026ndash;\u0026gt; Properties \u0026ndash;\u0026gt; General \u0026ndash;\u0026gt; Check unblock option under security section. Unzip and extract the folder.
The folder will contain below listed contents:
Inventory: This folder will have PowerShell code for inventory analysis of Applications Pivot: This Folder contains the python code snippet for pivot sheet generation from PowerShell output Sample Pivot: This Folder contains sample pivot data sheet How to Set up Configuration file # Important Prerequisite:
Below is the code snapshot of ADLS connection:
\u0026#34;gen1SourceRootPath\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the Gen1 source root path\u0026gt;\u0026gt;\u0026#34;, \u0026#34;outPutPath\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the path where the results needs to store\u0026gt;\u0026gt;\u0026#34;, \u0026#34;tenantId\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the tenantId\u0026gt;\u0026gt;\u0026#34;, \u0026#34;subscriptionId\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the subscriptionId\u0026gt;\u0026gt;\u0026#34;, \u0026#34;servicePrincipleId\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the servicePrincipleId\u0026gt;\u0026gt;\u0026#34;, \u0026#34;servicePrincipleSecret\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the servicePrincipleSecret Key\u0026gt;\u0026gt;\u0026#34;, \u0026#34;dataLakeStore\u0026#34; : \u0026#34;\u0026lt;\u0026lt;Enter the dataLakeStore name\u0026gt;\u0026gt;\u0026#34; Setting up the connection to azure for inventory collection:
$SecurePassword = ConvertTo-SecureString $ServicePrincipalKey -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ($ServicePrincipalId, $SecurePassword)
Login-AzAccount -ServicePrincipal -TenantId $TenantId -Credential $Credential
Inventory Collection using PowerShell # Run the script Inventory.ps1, which will trigger the inventory collection process.
The Inventory PowerShell script collects inventory details of a given application folder. The PowerShell code is run with a minimum folder depth, especially for large applications. The code exports a .txt file with inventory details, including size, file count, directory count, and last modification time within the given depth level. The generated result is a txt file saved into the Output folder. The output file is further analyzed to determine the ageing analysis approach for identifying active and inactive folders. Ageing Analysis Approach # Below is the approach to ageing analysis using this PowerShell script:
The objective of ageing analysis is to find active and inactive folders in an application. The ageing analysis approach mainly considers the size of the folder and the most recent child modification time. The analysis is done on the inventory data output file extracted from the PowerShell code. The sub folders in the application are identified based on activity, storage strategy, or user requirements. The sub folder paths are given as input to the PowerShell inventory code, which exports the datasheet csv file. Ageing Analysis Datasheet # The datasheet is the output of the inventory PowerShell code. The sub folder or application path is derived from analysis based on storage, activity, or user requirements. The datasheets are given as input to the Python snippet and the final pivot table is created. The datasheet differs by analysis approach for a single application. Pivot Sheet using python snippet # Run the python script PivotSheetGeneration.py for pivot sheet generation. Below are the steps for how this script works:
The Python script is used to generate the pivot table as an .xlsx document. The datasheets from multiple applications are placed in the output folder; the Python snippet takes the csv files as input and creates the data pivot sheets respectively. The input and output paths are provided by the user. The Python snippet reads all the files present in the input folder and calls the create pivot table function. The code snippet to generate the pivot table
The Final pivot Datasheet is created and saved in the same output folder.
References # Microsoft Azure PowerShell `}),e.add({id:1,href:"/Storage/docs/analytics/",title:"Analytics",section:"Docs",content:" Analytics # The Hitchhiker\u0026rsquo;s Guide to the Data Lake - As part of helping our customers build their analytics solutions on ADLS Gen2, we have a collection of considerations and key learnings that have been effective in building highly scalable and performant data lakes on Azure. We have distilled these learnings in our guidance document Azure Data Lake Storage Gen1 to Gen2 Migration Sample Azure Data Lake Storage Gen2 Billing FAQs "}),e.add({id:2,href:"/Storage/docs/application-and-user-data/",title:"Application and User Data",section:"Docs",content:` Application and User Data # Case Studies # Coming soon . . .
Code Samples # Managing concurrent uploads in Azure blob storage with blob versioning Azure blob storage data management and retention Estimating Pricing for Azure Block Blob Deployments Azure Storage Supported Character Scrubber Performance Testing # Coming soon . . .
Basics # Azure Blob Storage data protection features Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features. Optimize your costs with Azure Blob Storage In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low. NFS 3.0 support for Azure Blob Storage In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure. Azure Blob Storage Upload API\u0026rsquo;s Azure Blob Storage - Setup Object Replication with ARM Templates Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer `}),e.add({id:3,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/application-update/",title:"Application and Workload Update",section:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",content:` Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
This can be applicable for below migration patterns:
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure the services used in the workloads and update the applications to point to the Gen2 mount.
NOTE: We will be covering below azure services
Azure Data Factory Load data into Azure Data Lake Storage Gen2 with Azure Data Factory Azure Databricks Use with Azure Databricks Quickstart: Analyze data in Azure Data Lake Storage Gen2 by using Azure Databricks Tutorial: Extract, transform, and load data by using Azure Databricks SQL Data Warehouse Use with Azure SQL Data Warehouse HDInsight Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters Tutorial: Extract, transform, and load data by using Azure HDInsight Prerequisites # The migration of data from Gen1 to Gen2 should be completed
How to Configure and Update Azure Databricks # Applies where Databricks is used for data ingestion to ADLS Gen1.
Before the migration:
Mount configured to Gen1 path
Sample code showing the mount path configured for ADLS Gen1 using a service principal, as sketched below:
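The following is an illustrative sketch only (not the referenced application scripts); the account, tenant, secret scope, and mount names are placeholders.
# Mount ADLS Gen1 in Databricks with a service principal (OAuth client credentials).
configs = {
    "fs.adl.oauth2.access.token.provider.type": "ClientCredential",
    "fs.adl.oauth2.client.id": "<service-principal-id>",
    "fs.adl.oauth2.credential": dbutils.secrets.get(scope="kv-scope", key="spn-secret"),
    "fs.adl.oauth2.refresh.url": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}
dbutils.fs.mount(
    source="adl://<gen1-account>.azuredatalakestore.net/",
    mount_point="/mnt/data",    # keep this mount name; only the source changes after migration
    extra_configs=configs)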
Set up DataBricks cluster for scheduled job run
Sample snapshot of working code:
Note: Refer to Application\\IncrementalSampleLoad.py script for more details.
After the migration:
Change the mount configuration to the Gen2 container (see the sketch after the notes below)
Note: Stop the job scheduler and change the mount configuration to point to Gen2 with the same mount name.
Note: Refer to Application\\MountConfiguration.py script for more details.
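A matching sketch for the Gen2 side (again illustrative, not the MountConfiguration.py source; the ABFS OAuth keys differ from the Gen1 ones, and all names are placeholders):
# Re-point the same mount name at the ADLS Gen2 container.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<service-principal-id>",
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="kv-scope", key="spn-secret"),
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}
dbutils.fs.unmount("/mnt/data")   # with the job scheduler already stopped
dbutils.fs.mount(
    source="abfss://<container>@<gen2-account>.dfs.core.windows.net/",
    mount_point="/mnt/data",      # same mount name, now backed by Gen2
    extra_configs=configs)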
Reschedule the job scheduler
Check for the new files getting generated at the Gen2 root folder path
How to Configure and Update Azure Data Factory # Once the data migration from ADLS Gen1 to Gen2 using ADF is completed, follow the below steps:
Stop the trigger to Gen1 used as part of Incremental copy pattern.
Modify the existing factory by creating a new linked service to point to Gen2 storage (a scripted sketch follows at the end of these steps).
Go to Azure Data Factory -> Click on Author -> Connections -> Linked Service -> click on New -> Choose Azure Data Lake Storage Gen2 -> Click on the Continue button
Provide the details to create new Linked service to point to Gen2 storage account.
Modify the existing factory by creating new dataset in Gen2 storage.
Go to Azure Data Factory -> Click on Author -> Click on Pipelines -> Select the pipeline -> Click on the Activity -> Click on the sink tab -> Choose the dataset to point to Gen2
Click on Publish all
Go to Triggers and activate it.
Check for the new files getting generated at the Gen2 root folder path
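For reference, the linked-service step above can also be scripted. The sketch below uses the azure-mgmt-datafactory Python SDK under the assumption that a service principal is used for authentication; all resource names are placeholders, and this is illustrative rather than a required part of the migration.
from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureBlobFSLinkedService, LinkedServiceResource, SecureString)

credential = ClientSecretCredential(
    tenant_id="<tenant-id>", client_id="<spn-id>", client_secret="<spn-secret>")
adf = DataFactoryManagementClient(credential, "<subscription-id>")

# Linked service pointing at the ADLS Gen2 (dfs) endpoint.
gen2 = AzureBlobFSLinkedService(
    url="https://<gen2-account>.dfs.core.windows.net",
    service_principal_id="<spn-id>",
    service_principal_key=SecureString(value="<spn-secret>"),
    tenant="<tenant-id>")

adf.linked_services.create_or_update(
    "<resource-group>", "<factory-name>", "ADLSGen2LinkedService",
    LinkedServiceResource(properties=gen2))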
How to Configure and Update HDInsight # Applies where HDInsight is used as the workload to process the raw data and execute the transformations. Below is the step-by-step process used as part of the Dual pipeline pattern.
Prerequisite
Two HDInsight clusters to be created for each Gen1 and Gen2 storage.
Before Migration
The Hive script is mounted to Gen1 endpoint as shown below:
After Migration
The Hive script is mounted to Gen2 endpoint as shown below:
Once all the existing data is moved from Gen1 to Gen2, start running the workloads at the Gen2 endpoint.
How to Configure and Update Azure Synapse Analytics # Applies to data pipelines that have Azure Synapse Analytics (formerly Azure SQL DW) as one of the workloads. Below is the step-by-step process used as part of the Dual pipeline pattern.
Before Migration
The stored procedure activity is pointed to Gen1 mount path.
After Migration
The stored procedure activity is pointed to Gen2 endpoint.
Run the trigger
Check the SQL table in the Data warehouse for new data load.
Cutover from Gen1 to Gen2 # After you\u0026rsquo;re confident that your applications and workloads are stable on Gen2, you can begin using Gen2 to satisfy your business scenarios. Turn off any remaining pipelines that are running on Gen1 and decommission your Gen1 account.
`}),e.add({id:4,href:"/Storage/docs/application-and-user-data/basics/azure-blob-storage-object-replication-arm/",title:"Azure Blob Storage - Setup Object Replication with ARM Templates",section:"Application and User Data",content:" Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.\nYou can find a good overview of the service here, and instructions on how to deploy it via the portal here.\nHere we are going to focus on deploying Object Replication with ARM. You will see we are doing this in 3 steps with three templates orchestrated with some CLI code. This needs to be done in separate steps to 1) allow time for the Change Feed and Versioning features to be provisioned before creating the destination object replication endpoint and 2) to allow us to query the policy and rule information from the destination endpoint to pass into the creation of the source object replication endpoint.\nVariable and Resource Group Setup # In this sample we are using the same container name for source and destination this is not required In this sample we are using the same region for source and destination this is not required In this sample we are using the same durability (i.e. LRS) for source and destination this is not required RG=\u0026#34;\u0026lt;resource group name\u0026gt;\u0026#34; LOCATION=\u0026#34;\u0026lt;region name i.e. westus\u0026gt;\u0026#34; SRCACCT=\u0026#34;\u0026lt;name of source storage account\u0026gt;\u0026#34; DESTACCT=\u0026#34;\u0026lt;name of destination storage account\u0026gt;\u0026#34; CONTAINER=\u0026#34;\u0026lt;name of container\u0026gt;\u0026#34; az group create --name $RG --location $LOCATION Create the source \u0026amp; destination storage accounts # Get the ARM template here\nMake sure that your accounts have Change Feed and Versioning features enabled az deployment group create \\ --name TestDeployment \\ --resource-group $RG \\ --template-file step01.json \\ --parameters \u0026#34;storageNameSrc=$SRCACCT\u0026#34; \\ \u0026#34;storageNameDest=$DESTACCT\u0026#34; \\ \u0026#34;containerName=$CONTAINER\u0026#34; Create the destination Object Replication endpoint # Get the ARM template here\nYou might need to wait a bit for the features you enabled in the last step to turn on before doing this az deployment group create \\ --name TestDeployment \\ --resource-group $RG \\ --template-file step02.json \\ --parameters \u0026#34;storageNameSrc=$SRCACCT\u0026#34; \\ \u0026#34;storageNameDest=$DESTACCT\u0026#34; \\ \u0026#34;containerName=$CONTAINER\u0026#34; Create the source Object Replication endpoint # Get the ARM template here\nNOTE: Here I am just pulling the first policy and rule, since I only have 1, if you have more than 1 you will need to change the \u0026ndash;query\nPOLICY=$(az storage account or-policy list --account-name $DESTACCT --query \u0026#39;[0].policyId\u0026#39; --output tsv) RULE=$(az storage account or-policy list --account-name $DESTACCT --query \u0026#39;[0].rules[0].ruleId\u0026#39; --output tsv) az deployment group create \\ --name TestDeployment \\ --resource-group $RG \\ --template-file step03.json \\ --parameters \u0026#34;storageNameSrc=$SRCACCT\u0026#34; \\ \u0026#34;storageNameDest=$DESTACCT\u0026#34; \\ \u0026#34;containerName=$CONTAINER\u0026#34; \\ \u0026#34;policyId=$POLICY\u0026#34; \\ \u0026#34;ruleId=$RULE\u0026#34; 
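If you prefer the SDK over the CLI for the policy and rule lookup, a rough Python equivalent (azure-mgmt-storage) is sketched below; the resource names are placeholders, and like the --query above it assumes a single policy with a single rule.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
# List object replication policies on the destination account.
policies = list(client.object_replication_policies.list("<resource-group>", "<destination-account>"))
policy_id = policies[0].policy_id          # pass as the policyId template parameter
rule_id = policies[0].rules[0].rule_id     # pass as the ruleId template parameter
print(policy_id, rule_id)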
"}),e.add({id:5,href:"/Storage/docs/application-and-user-data/code-samples/data-retention/",title:"Azure blob storage data management and retention",section:"Application and User Data",content:` Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.
A good place to start with understanding how your data is protected in Azure is Azure customer data protection. This article provides details on data protection, customer data ownership, records management, and electronic discovery (e-discovery).
The definitive source for understanding how your data is managed and retained in Azure and other Microsoft® services is the Online Service Terms (OST). When you subscribe to an Online Service through a Microsoft Volume Licensing program, the terms for how you can use the service are defined in the Volume Licensing Online Services Terms (OST) document and program agreement. There are also additional data processing and security terms which you should become familiar with that are defined in the Microsoft Online Services Data Protection Addendum (DPA). The DPA is an addendum to the OST. Links to the current OST and DPA in multiple languages are available on the Licensing Terms page. You can also find links to archived editions of the OST and archived editions of the DPA if you would like to understand how these terms have evolved over time.
Quick links:
Microsoft Online Services Terms (English) Online Services DPA (English) The whitepaper Protecting Data in Microsoft Azure includes several statements which can help you understand what happens when you issue a delete request through the storage APIs (for example Delete Blob):
Where appropriate, confidentiality should persist beyond the useful lifecycle of data. The Azure Storage subsystem makes customer data unavailable once delete operations are performed. All storage operations including delete are designed to be instantly consistent. Successful execution of a delete operation removes all references to the associated data item and it cannot be accessed via the Azure storage APIs. Also, Azure Storage interfaces do not permit the reading of uninitialized data, thus mitigating the same or another customer from reading deleted data before it is overwritten. All copies of deleted data are then garbage-collected. The physical bits are overwritten when the associated storage block is reused for storing other data, as is typical with standard computer hard drives.
If you are interested in a deeper dive on the Azure storage subsystem, the whitepaper Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency details the architecture of the system and includes details on garbage collection. There is also a video and presentation which provide a condensed view of the whitepaper.
The destruction of physical media is addressed in the Microsoft Trust Center:
If a disk drive used for storage suffers a hardware failure, it is securely erased or destroyed before Microsoft returns it to the manufacturer for replacement or repair. The data on the drive is completely overwritten to ensure the data cannot be recovered by any means.
When such devices are decommissioned, they are purged or destroyed according to NIST 800-88 Guidelines for Media Sanitation.
Encryption # Encryption of customer data is also addressed in public documentation, including additional controls that you can implement such as customer-managed encryption keys. Azure Storage encryption for data at rest states:
Data in Azure Storage is encrypted and decrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Azure Storage encryption is similar to BitLocker encryption on Windows.
Azure Storage encryption is enabled for all storage accounts, including both Resource Manager and classic storage accounts. Azure Storage encryption cannot be disabled. Because your data is secured by default, you don\u0026rsquo;t need to modify your code or applications to take advantage of Azure Storage encryption.
Data in a storage account is encrypted regardless of performance tier (standard or premium), access tier (hot or cool), or deployment model (Azure Resource Manager or classic). All blobs in the archive tier are also encrypted. All Azure Storage redundancy options support encryption, and all data in both the primary and secondary regions is encrypted when geo-replication is enabled. All Azure Storage resources are encrypted, including blobs, disks, files, queues, and tables. All object metadata is also encrypted. There is no additional cost for Azure Storage encryption.
Data can also be encrypted with a customer-managed key when used in combination with Azure Key Vault. Revocation of a customer-managed key can be performed at any time; upon revocation, client calls to the Storage APIs will fail for operations including the retrieval of blobs and updates to existing blobs.
Prevent accidental deletion # There are also customer controls available which allow you to manage the lifecycle of your data and prevent the accidental deletion of critical information, including soft delete for blobs and containers. It is also possible to implement resource locks to prevent the accidental deletion of your storage account and the resource group it resides in. Finally, in some cases you can recover a deleted storage account within 14 days by utilizing storage account recovery. A short recovery sketch follows below.
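As an illustration of the soft delete control mentioned above, here is a minimal sketch using the Python SDK; it assumes blob soft delete is already enabled on the account, and the connection string and container name are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("critical-data")   # hypothetical container
# List blobs including soft-deleted ones, then restore any that were deleted.
for blob in container.list_blobs(include=["deleted"]):
    if blob.deleted:
        container.get_blob_client(blob.name).undelete_blob()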
References # Azure customer data protection Microsoft Licensing Terms Microsoft Online Services Terms (English) Online Services DPA (English) Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency Microsoft Trust Center NIST 800-88 Guidelines for Media Sanitation Azure Storage encryption for data at rest Customer-managed keys for Azure Storage encryption Microsoft Azure Data Security (Data Cleansing and Leakage) Soft delete for blobs Soft delete for containers Lock resources to prevent unexpected changes Recover a deleted storage account `}),e.add({id:6,href:"/Storage/docs/application-and-user-data/basics/azure-blob-storage-data-protection-features/",title:"Azure Blob Storage data protection features",section:"Application and User Data",content:` Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection \u0026amp; Security Azure Defender for Storage Immutable Blob storage `}),e.add({id:7,href:"/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/",title:"Azure Blob Storage Upload API's",section:"Application and User Data",content:` Azure Blob Storage Upload APIs # Customers typically use existing applications such as AzCopy or Azure Storage Explorer, or the Azure Storage SDKs (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps, to access the Azure Storage APIs. However, a good understanding of the APIs is critical when tuning your uploads for high performance. This document provides an overview of the different upload APIs to help you compare the differences between them.
NOTE: There are some nuances with how the APIs are used based on what version of the API you are using. See the API documentation for full details.
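To make the comparison concrete, here is a minimal sketch using the Python SDK (azure-storage-blob); the connection string, container, and file names are placeholders, and the size thresholds shown are client-side knobs you can tune. The client issues a single Put Blob for small uploads and switches to Put Block / Put Block List for larger ones.
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>",                  # placeholder
    container_name="uploads",
    blob_name="report.pdf",
    max_single_put_size=64 * 1024 * 1024,   # uploads at or below this size use one Put Blob
    max_block_size=8 * 1024 * 1024)         # larger uploads are staged as 8 MiB Put Blocks

with open("report.pdf", "rb") as data:
    # When chunking is used, upload_blob commits the staged blocks with Put Block List.
    blob.upload_blob(data, overwrite=True)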
Put Blob # Source: You provide the bytes Size: Blob must be smaller than 256 MiB. (Limit increasing to 5 GiB, currently in preview) Official Docs: here .NET SDK Methods: Upload \u0026amp; UploadAsync. NOTE: Upload/UploadAsync will default to using PutBlob if the file to upload is small, however will use PutBlock/PutBlockList for larger uploads. Put Blob From URL # Source: Any object retrievable via a standard GET HTTP request on the given URL (e.g. public access or pre-signed URL) can be used. This includes any accessible object, inside or outside of Azure. Destination: A block blob Size: Blob must be smaller than 256 MiB. (Limit increasing to 5 GiB, currently in preview) Performance: Completes synchronously. Prefer this API when ingesting Block Blobs to a storage account from any external source. Choose this over Copy Blob from URL when you have larger objects, when you don\u0026rsquo;t care about preserving the block list if the source is an Azure Blob, want to set HTTP Content Properties, or want to take advantage of advance features like Encryption Scopes, or Put to Tier, etc. Official Docs: here .NET SDK Methods: SyncUploadFromUri \u0026amp; SyncUploadFromUriAsync Copy Blob # Source: The source blob for a copy operation may be a block blob, an append blob, or a page blob, a snapshot, or a file in the Azure File service. Destination: The same object type as the source Size: Each blob must be smaller than 4.75 TiB. (Limit increasing to 190.7 TiB, currently in preview). More Info: Maximum size of a block blob Performance: Completes asynchronously. Official Docs: here .NET SDK Methods: StartCopyFromUri \u0026amp; StartCopyFromUriAsync Copy Blob From URL # Source: Any object retrievable via a standard GET HTTP request on the given URL (e.g. public access or pre-signed URL) can be used. This includes any accessible object, inside or outside of Azure. If the object is an Azure source, it must be a block blob (i.e. Page Blobs are not supported). Destination: A block blob Size: Blob must be smaller than 256 MiB. Performance: Completes synchronously. Choose this over Put Blob from URL when compatibility with the Copy Blob API is required or when you want the committed block list to be preserved during the copy. Official Docs: here .NET SDK Methods: SyncCopyFromUri \u0026amp; SyncCopyFromUriAsync Put Block # Source: You provide the bytes Size: Each block must be smaller than 100 MiB. (Limit increasing to 4 GiB, currently in preview). More Info: Maximum size of a block in a block blob Official Docs: here .NET SDK Methods: StageBlock \u0026amp; StageBlockAsync Put Block From URL # Source: Any object range retrievable via a standard GET HTTP request on the given URL (e.g. public access or pre-signed URL) can be used. This includes any accessible object, inside or outside of Azure. Size: Each block must be smaller than 100 MiB. (Limit increasing to 4 GiB, currently in preview). More Info: Maximum size of a block in a block blob Performance: Completes synchronously. Official Docs: here .NET SDK Methods: StageBlockFromUri \u0026amp; StageBlockFromUriAsync Put Block List # Called after all the blocks are written. This API commits a blob by specifying the list of block IDs that make up the blob. 
Official Docs: here .NET SDK Methods: CommitBlockList \u0026amp; CommitBlockListAsync `}),e.add({id:8,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/",title:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",section:"Analytics",content:` Azure Data Lake Storage Gen1 to Gen2 Migration Sample # Welcome to the documentation on migration from Gen1 to Gen2. Please review the Gen1-Gen2 Migration Approach guide to understand the patterns and approach. You can choose one of these patterns, combine them together, or design a custom pattern of your own.
NOTE: On July 14, 2021, we released a limited preview of a feature to migrate your Azure Data Lake Storage from Gen1 to Gen2 using the Azure Portal. Check it out here
Migration Patterns # Here you will find the resources to help with the patterns below:
Incremental copy pattern using Azure Data Factory # Refer to the Incremental copy pattern guide to learn more and get started.
Bi-directional sync pattern using WANdisco Fusion # Refer to the Bi-directional sync pattern guide to learn more and get started.
Lift and Shift pattern using Azure Data Factory # Refer to the Lift and Shift pattern guide to learn more and get started.
Dual Pipeline pattern # Refer to the Dual pipeline pattern guide to learn more and get started.
How to migrate the workloads and applications post data migration # Refer here for more details on the steps to update the workloads and applications post-migration.
Security # Gen1 and Gen2 ACL behavior and differences # Gen1 and Gen2 ACL behavior and differences - This article summarizes the behavioral differences of the access control models for Data Lake Storage Gen1 and Gen2.
Azure Data Lake Storage Gen1 implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. Azure Data Lake Storage Gen2 implements an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). Utilities # Utilities that can be used during the Gen1 to Gen2 Migration process.
Ageing Analysis # Refer Ageing Analysis to know more and get started.
References # Azure Data Lake Storage migration from Gen1 to Gen2 Why WANdisco fusion `}),e.add({id:9,href:"/Storage/docs/analytics/azure-storage-data-lake-gen2-billing-faq/",title:"Azure Data Lake Storage Gen2 Billing FAQs",section:"Analytics",content:` Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
Flat namespace (FNS): A mode of organization in a storage account on Azure where objects are organized using a flat structure - aka a flat list of objects. This is the default configuration for a storage account.
Hierarchical namespace (HNS): With hierarchical namespaces, you can organize data into structured folders and directories. A hierarchical namespace allows operations like folder renames and deletes to be performed in a single atomic operation, which with a flat namespace requires a number of operations proportionate to the number of objects in the structure. Hierarchical namespaces store additional metadata for your directory and folder structure and allow filesystem ACLs. However, as your data volume grows, a hierarchical namespace keeps your data organized and, more importantly, yields better storage performance on your analytic jobs, thus lowering your overall TCO to run analytic jobs.
HNS enabled account: A Storage Account with the Hierarchical Namespace enabled.
Query Acceleration: Query acceleration enables applications and analytics frameworks to dramatically optimize data processing by retrieving only the data that they require to perform a given operation. This reduces the time and processing power that is required to gain critical insights into stored data.
Data: Data is the content and is the stored information. Example: Files in a folder.
Metadata: Metadata is the context for the data and consists of one or more name-value pairs that you specify for a Blob storage resource. You can use metadata to store additional values with the resource. Metadata values are for your own purposes only, and don't affect how the resource behaves. Metadata also includes the size used by the path/name of the object.
FAQs # How do calls to certain APIs translate into the number of 4MB transactions that will be billed? # When uploading or appending data to existing files, or when reading data from a file, the operation gets chunked into 4MB pieces. You will then be billed for each 4MB chunk. The Copy File, Rename, Set Properties, etc. operations would not be charged using the per 4 MB rule. They are not free operations but would be charged as a single transaction. For files smaller than 4MB, a full transaction will be charged for each file. It is recommended to write larger files as they will yield better analytics performance and are more cost effective. How can I figure out the metadata size for my account? # The metadata size is calculated for every file by the following: 512 bytes + size of file name + size of file properties What APIs are considered Iterative read operations? # Iterative read operations are operations performed on a folder that require the system to iterate through all the subfolders and files in that folder to complete. Examples: ListFileSystem, ListFileSystemDir, ListPath, and all List* operations If I were to store parquet files into ADLS, would that be a write transaction based on size or just a storage cost? # Write transactions apply whenever you ingest or update any type of file. However, you do pay for the data at rest. What is the difference between how transactions are billed in a flat namespace account (FNS) and a Hierarchical Namespace account? # Customers can access the storage account using either an FNS account or an HNS account. The APIs can also be regular APIs or ADLS APIs. The following table shows you when data operations get split into 4MB chunks for billing and when the 30% uplift is applied.
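As a worked example of the 4 MB chunking and metadata-size rules above (a sketch; the file sizes and property byte counts are arbitrary):
import math

FOUR_MB = 4 * 1024 * 1024

def billed_write_transactions(file_size_bytes):
    # Uploads and appends are chunked into 4 MB pieces for billing;
    # a file under 4 MB is still billed as one full transaction.
    return max(1, math.ceil(file_size_bytes / FOUR_MB))

def metadata_size_bytes(file_name, properties_bytes):
    # Per the formula above: 512 bytes + size of file name + size of file properties.
    return 512 + len(file_name.encode("utf-8")) + properties_bytes

print(billed_write_transactions(100 * 1024 * 1024))      # 100 MB file -> 25 write transactions
print(metadata_size_bytes("sales/2022/data.parquet", 64))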
Renames/moves as part of analytics job commit activities will end up with a lot of operations (proportional to object count) with FNS while this would be a single transaction in HNS. How are iterative operations billed? # An example would be a rename of a folder containing 10K files. The rename would be charged as a single metadata operation (iterative writes). Recommended reading # Overview for Azure Data Lake Storage Gen2 `}),e.add({id:10,href:"/Storage/docs/backup-and-archive/commvault/",title:"Azure Partner Backup Documentation",section:"Backup and Archive",content:` Microsoft Partner Documentation for Commvault for Azure # https://documentation.commvault.com/commvault/v11/article?p=31252.htm
`}),e.add({id:11,href:"/Storage/docs/backup-and-archive/veritas/",title:"Azure Partner Backup Documentation",section:"Backup and Archive",content:` Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2 Storage | Cool Tier | Archive Tier | WORM Support | Required Azure Resources | Restore on-premises | Backup Azure VM's | Backup Azure Files | Backup Azure Blob
X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.
Link 1 Link 2 Links to relevant documentation # Information related to the partner docs goes here.
Link 1 Link 2 Partner Reference Architectures # Implementation Guide # Description of the implementation Links Policy Configuration Guidance Network Bandwidth and storage account guidance Screenshots Monitoring the Deployment going forward # Azure Performance Monitoring Where can the customer go to view performance reports, job completion and begin basic troubleshooting Links to user guide content on partner support site Partner Videos and Links # How to Open Support Case # Next steps # Review Link Title
`}),e.add({id:12,href:"/Storage/docs/backup-and-archive/rubrik/",title:"Azure Partner Backup Documentation - title",section:"Backup and Archive",content:` Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2 Storage | Cool Tier | Archive Tier | WORM Support | Required Azure Resources | Restore on-premises | Backup Azure VM's | Backup Azure Files | Backup Azure Blob
X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.
Link 1 Link 2 Links to relevant documentation # Information related to the partner docs goes here.
Link 1 Link 2 Partner Reference Architectures # Implementation Guide # Description of the implementation Links Policy Configuration Guidance Network Bandwidth and storage account guidance Screenshots Monitoring the Deployment going forward # Azure Performance Monitoring Where can the customer go to view performance reports, job completion and begin basic troubleshooting Links to user guide content on partner support site Partner Videos and Links # How to Open Support Case # Next steps # Review Link Title
`}),e.add({id:13,href:"/Storage/docs/backup-and-archive/veeam/",title:"Azure Partner Backup Documentation - title",section:"Backup and Archive",content:" Links to relevant documentation # https://www.veeam.com/documentation-guides-datasheets.html "}),e.add({id:14,href:"/Storage/docs/application-and-user-data/code-samples/supported-character-scrubber/",title:"Azure Storage Supported Character Scrubber",section:"Application and User Data",content:` Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turnkey solution to discovering unsupported characters in your file names with a simple CSV input. If you choose to rename your files to conform to Azure blob storage, you can also choose to create a mapping CSV output which can be used to create your objects with a new destination file name (if required).
To leverage the script, you can download the sample input CSV (SourceFileNames.csv). This file contains a single column, SourceFileName. The PowerShell script will evaluate each row in the CSV and optionally create a new mapping file (FixedFileNames.csv) which provides alternative names by replacing unsupported characters with a valid character of your choosing.
Usage # .\\AzureStorageSupportedCharacterScrubber.ps1 -CsvInputPath .\\SourceFileNames.csv -RenameItems Sample input # SourceFileName BadChačracter in the name.pdf Sample output # Shell # ReplacementString not provided, using default as \u0026#39;\u0026#39; Testing character B (CodePoint: 66) Testing character a (CodePoint: 97) Testing character d (CodePoint: 100) Testing character C (CodePoint: 67) Testing character h (CodePoint: 104) Testing character a (CodePoint: 97) Testing character Ä (CodePoint: 196) Testing character (CodePoint: 141) Unsupported char code point: 141 Testing character r (CodePoint: 114) Testing character a (CodePoint: 97) Testing character c (CodePoint: 99) Testing character t (CodePoint: 116) Testing character e (CodePoint: 101) Testing character r (CodePoint: 114) Testing character (CodePoint: 32) Testing character i (CodePoint: 105) Testing character n (CodePoint: 110) Testing character (CodePoint: 32) Testing character t (CodePoint: 116) Testing character h (CodePoint: 104) Testing character e (CodePoint: 101) Testing character (CodePoint: 32) Testing character n (CodePoint: 110) Testing character a (CodePoint: 97) Testing character m (CodePoint: 109) Testing character e (CodePoint: 101) Testing character . (CodePoint: 46) Testing character p (CodePoint: 112) Testing character d (CodePoint: 100) Testing character f (CodePoint: 102) Source name: BadChaÄracter in the name.pdf Destination name: BadCharacter in the name.pdf CSV # SourceFileName DestinationFileName BadChačracter in the name.pdf BadCharacter in the name.pdf Disallowed Characters # The following is a quick list of illegal characters. Note this is not an exhaustive list which the script provides.
Character Code Point Description * 0x0000002A \u0026quot; 0x00000022 Quotation mark ? 0x0000003F Question mark \u0026gt; 0x0000003E Greater than \u0026lt; 0x0000003C Less than : 0x0000003A Colon | 0x0000007C / 0x0000002F Forward slash \\ 0x0000005C Backslash del 0x0000007F Delete 0x00000081 High octet preset 0x0000008D Ri reverse line feed 0x0000008F ss3 single shift three 0x00000090 dcs device control string 0x0000009D osc operating system command Resources # Naming and Referencing Containers, Blobs, and Metadata RFC 2616 RFC 3987 Unicode characters `}),e.add({id:15,href:"/Storage/docs/backup-and-archive/",title:"Backup and Archive",section:"Docs",content:` Backup and Archive # Backup and Archive Partners
Commvault Rubrik Veeam Veritas Sample Scripts
Blob Tiering - Creates action and filter objects to apply blob tiering to block blobs matching a certain criteria. Create Storage Account - Creates a brand new resource group and storage account, based upon input variables. `}),e.add({id:16,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/bi-directional/",title:"Bi-directional sync pattern Guide: A quick start template",section:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",content:` Bi-directional sync pattern Guide: A quick start template # Overview # This manual will introduce WANdisco as a recommended tool to set up bi-directional sync between ADLS Gen1 and Gen2 using the Replication feature.
Below will be covered as part of this guide:
Data Migration from Gen1 to Gen2 Data Consistency Check Application update for ADF, ADB and SQL DWH workloads Considerations for using the bi-directional sync pattern:
Ideal for complex scenarios that involve a large number of pipelines and dependencies where a phased approach might make more sense. Migration effort is high, but it provides side-by-side support for Gen1 and Gen2. Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2 For more details please refer to create azure storage account
Licenses for WANdisco Fusion that accommodate the volume of data that you want to make available to ADLS Gen2
Azure Linux Virtual Machine Please refer here to know How to create Azure VM
Windows SSH client like Putty, Git for Windows, Cygwin, MobaXterm
Login to Fusion UI # Start the VM in the Azure portal if it is not in Running status.
Start Fusion
Go to the SSH client, connect, and run the below commands:
cd fusion-docker-compose   # Change to the repository directory
./setup-env.sh             # Run the setup script
docker-compose up -d       # Start Fusion
Login to the Fusion UI. Open the web browser and give the path as below:
URL -> http://{dnsname}:8081
Note: The DNS name can be taken from VM Overview details.
Set up ADLS Gen1 and Gen2 storage. Click here to know more.
Create Replication Rule # File system content is replicated selectively by defining Replication Rules. These specify the directory in the file system that will be replicated and the Zones that will participate in that replication.
Without any Replication Rules defined, each Zone's file system operates independently of the others. With the combination of Zones and Replication Rules, WANdisco Fusion gives you complete control over how data is replicated between your file systems and/or object stores.
On the dashboard, create a HCFS rule with the following parameters:
Rule Name = migration (Give any unique name)
Path for all storages = /
Default exclusions
Preserve HCFS Block Size = False
To know more, click how to create rule
Click Finish and wait for the rule to appear on the dashboard.
Consistency Check # Once you have created a replication rule as per the above-mentioned steps, run a consistency check to compare the contents between both zones.
On the Rules table, click to View rule.
On the rule page, start consistency check and wait for the Consistency status to update.
The Consistency Status will determine the next steps:
Consistent - no action needed
Inconsistent - migration required
Consistency check before migration:
To know more refer to Consistency Check using WANdisco fusion
Note: START CONSISTENCY CHECK is recommended for small data volumes only.
Migration using LiveMigrator # Once the HCFS replication rule is created, the migration activity can be started using the LiveMigrator. This allows migration of data in a single pass while keeping up with all changes to the source storage (ADLS Gen1). As an outcome, data consistency is guaranteed between source and target.
Note: Gen2 is synchronized with the Gen1 source using consistency checks and scheduled migrations.
Get Sample data
Upload sample data to the ADLS Gen1 storage account, see the guide to know more.
Place it within the home mount point.
On the Fusion UI dashboard, view the HCFS rule.
The overwrite setting needs to be configured. This determines what happens if the LiveMigrator encounters content in the target path with the same name and size.
Skip: If the file size is identical between the source and target, the file is skipped. If it's a different size, the whole file is replaced.
Overwrite: Everything is replaced, even if the file size is identical.
Start your migration with the following settings:
Source Zone = adls1
Target Zone = adls2
Overwrite Settings = Skip / Overwrite
Wait until the migration is complete, and check the contents in the ADLS Gen2 container.
Consistency check after migration:
NOTE: A hidden folder .fusion will be present in the ADLS Gen2 path.
Limitation: Client-based replication is not supported by the Fusion UI, so the replication process here is manually driven.
Managing Replication # To know more visit How to manage replication
Application Update # As part of this, we will configure the services used by the workloads and update the applications to point to the Gen2 mount after the migration is complete.
We will be covering the Azure services below:
Azure Data Factory Load data into Azure Data Lake Storage Gen2 with Azure Data Factory Azure Databricks Use with Azure Databricks Quickstart: Analyze data in Azure Data Lake Storage Gen2 by using Azure Databricks Tutorial: Extract, transform, and load data by using Azure Databricks SQL Data Warehouse Use with Azure SQL Data Warehouse This can be achieved by following a phased approach where in the migration of data, work loads and applications will be validated incrementally.
Mount path configuration # This will show how to set and configure the mount paths for Gen1 and Gen2 in the MountConfiguration script (an illustrative sketch follows the two configurations below).
Gen1 mount path configuration:
Gen2 mount path configuration:
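The configurations themselves are shown in screenshots of the MountConfiguration script; as a hedged illustration (not the script's actual contents), the kind of switch it encodes looks like this, with account and container names as placeholders:
# Single root-path switch: notebooks read one variable, so only this flips at cutover.
GEN1_ROOT = "adl://<gen1-account>.azuredatalakestore.net/"
GEN2_ROOT = "abfss://<container>@<gen2-account>.dfs.core.windows.net/"

use_gen2 = False                    # set to True once migration to Gen2 is verified
root_path = GEN2_ROOT if use_gen2 else GEN1_ROOT

raw_path = root_path + "Raw/"             # hypothetical folder layout
processed_path = root_path + "Processed/"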
Beginning State # Before Migration - The data pipeline is on Gen1
In this state the data ingestion from ADB to Raw folder, writing the processed data into the Processed folder and loading the processed data to SQL DW will be happening at Gen1.
Sample design:
All the ADB notebooks will be pointing to Gen1 Mount path in this state and the data will be ingested, processed and loaded to SQL DW from Gen1.
Interim State # In this state we will start with the migration of the existing Gen1 data to Gen2 using WANdisco Fusion. The data pipeline will be split partially between Gen1 and Gen2: data ingestion and processing happen at Gen1, while the processed data is written to SQL DW at Gen2.
Follow the steps for the migration of Gen1 data to Gen2 for the Raw and Processed data.
Once the data is migrated, run Consistency check in Fusion UI. There should be no inconsistencies.
How to change the mount path
This will show how to configure the mount path for the Azure Databricks workload, which will load processed data to SQL DW at Gen2. In the master pipeline in Azure Data Factory, go to the notebook settings -> Base parameters and set the RootPathParam to Gen2.
Note: At the end of this state, the data pipeline is established partially at Gen1 and partially at Gen2.
Eventual End State # After Migration - The data pipeline moved to Gen2
This state depicts when all the workloads and applications are moved from Gen1 to Gen2 and the bi-directional replication is ready to be turned off.
Below are the steps to change the mount path from Gen1 to Gen2 for the Azure Databricks notebook in the Azure Data Factory pipeline:
Run the master pipeline consisting of all the ADB notebooks and check the data. The ingestion and processing of raw data should now take place at Gen2, along with writing to SQL DW. This can be verified by checking the data in Gen2 storage. When a consistency check is run using the WANdisco Fusion UI, there will be files missing at Gen1.
Note: This marks the state where all the workloads are moved to Gen2. Gen1 will not receive any new data in any form.
Cutover from Gen1 to Gen2 After all the applications and workloads are stable on Gen2, turn off any remaining pipelines that are running on Gen1 and decommission your Gen1 account. This includes deleting the rules created for the migration and replication process, shutting down the Azure VM, and deleting the resource group.
References # WANdisco fusion Installation and set up guide WANdisco LiveMigrator `}),e.add({id:17,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/dual-pipeline/",title:"Dual Pipeline Pattern Guide: A quick start template",section:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",content:` Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of the Dual pipeline pattern for migration of data from Gen1 to Gen2. It provides the directions, references, and approach for setting up the Dual pipeline, migrating existing data from Gen1 to Gen2, and setting up the workloads to run at the Gen2 endpoint.
Considerations for using the dual pipeline pattern:
Gen1 and Gen2 pipelines run side-by-side. Supports zero downtime. Ideal in situations where your workloads and applications can\u0026rsquo;t afford any downtime, and you can ingest into both storage accounts. Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details please refer to create azure storage account
Azure Key Vault. Required keys and secrets to be configured here.
Service principal with read, write and execute permission to the resource group, key vault, data lake store Gen1 and data lake store Gen2. To learn more, see create service principal account and to provide SPN access to Gen1 refer to SPN access to Gen1
Data pipeline set up for Gen1 and Gen2 # As part of this pattern, Gen1 and Gen2 pipelines will run side by side.
Below is the sample pipeline set up for Gen1 and Gen2 using Azure Databricks for data ingestion, HDInsight for data processing and Azure SQL DW for storing the processed data for analytics.
Prerequisite
Create HDInsight cluster for Gen1. Refer here for more details.
Create HDInsight cluster for Gen2. Refer here for more details.
Create user assigned managed identity. Refer here to know more.
Permission should be set up for the managed identity for Gen2 storage account. Refer here for more details.
Additional blob storage should be created for Gen1 to support HDInsight linked service in ADF. Refer here for more details.
Note: To set up the data pipeline in ADF, two separate HDInsight clusters should be created each for Gen1 and Gen2.
Here ADF is used for orchestrating data-processing pipelines that support data ingestion, copying data to and from different storage types (Gen1 and Gen2) in Azure, loading the processed data to the data warehouse, and executing transformation logic.
Creation of linked service for Gen1 and Gen2 in ADF # As part of this pipeline set up, the below linked services need to be created as the first step in ADF:
Go to ADF -> Manage -> Linked service -> Click on + New
Create ADB linked service.
Create HDInsight linked service.
Create Stored procedure linked service.
How to create HDInsight linked service for Gen1 (Blob storage)
Go to Linked Services -> click on + New -> New linked service -> Compute -> Azure HDInsight -> Continue
Provide the details from Azure subscription with respect to each field and choose Blob Storage under Azure Storage linked service
Provide the user name and password details.
Click on Create button.
How to create HDInsight linked service for Gen2
Go to Linked Services -> click on + New -> New linked service -> Compute -> Azure HDInsight -> Continue
Provide the details from Azure subscription with respect to each field and choose ADLS Gen 2 under Azure Storage linked service
Provide the storage container name in the File system field. Give the user name and password.
Click on Create button.
How to set up Gen1 data pipeline # Create a master pipeline in ADF for Gen1 and invoke all activities listed below:
Raw data ingestion using ADB script
Create a pipeline for data ingestion process using ADB activity. Refer here for more details.
Mount path configured to Gen1 endpoint
Data processing using HDInsight
Create a pipeline for data processing using HDInsight activity. Refer here for more details.
Mount path configured to Gen1 endpoint
Sample input path: adl://gen1storage.azuredatalakestore.net/AdventureWorks/Raw/FactFinance/
Sample output path: adl://gen1storage.azuredatalakestore.net/AdventureWorks/ProcessedHDI/FactFinance/
Loading to Azure Synapse Analytics (SQL DW) using stored procedure
Create a pipeline using Stored Procedure Activity to invoke a stored procedure in Azure SQL data warehouse.
Stored procedure Settings:
How to set up Gen2 data pipeline # Create a master pipeline in ADF for Gen2 invoking all activities as listed below:
Raw data ingestion using ADB script
Create a pipeline for data ingestion process using ADB activity. Refer here for more details.
Mount path configured to Gen2 endpoint
Data processing using HDInsight
Create a pipeline for data processing using HDInsight activity. Refer here for more details.
Mount path configured to Gen2 endpoint
Sample input path: abfs://gen2storage@g2hdistorage.dfs.core.windows.net/AdventureWorks/Raw/FactInternetSales/
Sample output path: abfs://gen2storage@g2hdistorage.dfs.core.windows.net/AdventureWorks/ProcessedHDI/FactInternetSales/
Loading to Azure Synapse Analytics (SQL DW) using stored procedure
Create a pipeline for loading the processed data to SQL DW using stored procedure activity.
Stored procedure Settings:
Stored procedures created to load processed data to main tables:
External Table structure in SQL DW:
Steps to be followed # This section will talk about the approach and steps to move ahead with this pattern once the data pipelines are set up for both Gen1 and Gen2.
Migrate data from Gen1 to Gen2 # To migrate the existing data from Gen1 to Gen2, please refer to lift and shift pattern.
Data ingestion to Gen1 and Gen2 # This step will ingest new data to both Gen1 and Gen2.
Create a pipeline in ADF to execute the data ingestion activities for both Gen1 and Gen2.
Setting of the Base parameter:
Check the storage path at Gen1 and Gen2 end points. New data should be ingested simultaneously at both paths.
Run workloads at Gen2 # This step makes sure that the workloads are run at the Gen2 endpoint only.
Create a pipeline in ADF to execute the workloads for Gen2. Run the pipeline.
Check the Gen2 storage path for the new files. The SQL DW should be loading with new processed data.
Cutover from Gen1 to Gen2 # After you\u0026rsquo;re confident that your applications and workloads are stable on Gen2, you can begin using Gen2 to satisfy your business scenarios. Turn off any remaining pipelines that are running on Gen1 and decommission your Gen1 account.
References # Migrate Azure Data Lake Storage from Gen1 to Gen2 `}),e.add({id:18,href:"/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/",title:"Estimating Pricing for Azure Block Blob Deployments",section:"Application and User Data",content:` Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.
Note: The goal of this template is to give you a starting point to build an estimate. The template will provide some general estimations you can use to put into the pricing calculator. However, it makes many assumptions for simplicity. You can tweak the formulas in Excel to alter the assumptions to meet your requirements. It is not intended to be a replacement for a good architect.
Click here to download the template
Helpful Links:
Azure Storage Blobs Pricing Azure Pricing Calculator Plan and manage costs for Azure Blob storage How To use:
Fill in the following columns on the Inputs tab Workload this is the name of your workload; note you might need multiple rows if your workload requires deployments into different regions, durability, or tiers. Region this is the Azure Region where your workload will be deployed (e.g., East US, West US) Durability LRS, ZRS, GRS/RA-GRS, GZRS/RA-GZRS. Verify that the durability you select is available in your selected region. See Durability and availability parameters and Products available by region. Tier Premium, Hot, Cool, Archive. See Comparing block blob storage options. GBs Today Average File Size (in MB) New GB (per month) GBs Read (per month) GBs Deleted (per month) Update the Assumptions as needed Migration Estimate Tab Write Operations = GB/Block Size + 1 per file: the defaults for most of the SDKs are between 4 MB and 8 MB; however, they do have logic to alter this based on the file size you are moving, or you can override it to use a block size of your choice. List/Create Operations = 1 per file: this can vary based on how you interact with your data. Read Operations = 0; you might decide to read a percentage of your data to verify that the migration completed successfully. Other Operations = 0 Data Retrieval = 0 Data Write = GBs Today Geo-Replication Data Transfer = GBs Today if using a durability that requires replication Monthly Estimate Tab Write Operations = GB/Block Size + 1 per file: the defaults for most of the SDKs are between 4 MB and 8 MB; however, they do have logic to alter this based on the file size you are moving, or you can override it to use a block size of your choice. List/Create Operations = 1 per file: this can vary based on how you interact with your data. Read Operations = GB/Block Size Other Operations = 1 per file: this can vary based on how you interact with your data. Data Retrieval = GBs Read Data Write = New GBs Geo-Replication Data Transfer = New GBs if using a durability that requires replication Future GB Estimate Tab Assumes a linear growth over time of GBs start + GBs added - GBs removed Input the results into the Azure Pricing Calculator Open the Azure Pricing Calculator For each defined workload, add a Storage account to the calculator and input the estimates from the template `}),e.add({id:19,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/adls-gen1-and-gen2-acl-behavior/",title:"Gen1 and Gen2 ACL Behavior Analysis",section:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",content:` Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft's optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 combines features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and low cost scalability, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage. Azure Data Lake Storage Gen1 implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. Azure Data Lake Storage Gen2 implements an access control model that supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs).
This article summarizes the behavioral differences of the access control models for Data Lake Storage Gen1 and Gen2.
Prerequisites # Active Azure Subscription Azure Data Lake Storage Gen1 and Gen2 Azure Key Vault. Required keys and secrets to be configured here. Service principal with read, write and execute permission to the resource group, key vault, data lake store Gen1 and data lake store Gen2. To learn more, see create service principal account and to provide SPN access to Gen1 refer to SPN access to Gen1 Java Development Kit (JDK 7 or higher, using Java version 1.7 or higher) for Filesystem operations on Azure Data Lake Storage Gen1 and Gen2 ACL Behavior in ADLS Gen1 and Gen2 # Account Root Permissions # Check GetFileStatus and GetAclStatus APIs with or without permissions on root Account path
GEN1 Behavior: Permission required on Account root- RX(minimum) or RWX , to get an account root content view GEN2 Behavior: A user with or without permissions on container root can view account root content OID-UPN Conversion # Check the identity inputs for UPN format APIs (Eg:GetAclStatus, Liststatus ,GetFileStatus) and OID format APIs (Eg:SetAcl, ModifyAclEntries, RemoveAclEntries)
GEN1 Behavior: OID \u0026lt;-\u0026gt; UPN conversion is supported for Users, Service principals and groups Note: For groups, as there is no UPN, conversion is done to Display name property
GEN2 Behavior: Supports only User OID-UPN conversion. Note: For a service principal or group, as the UPN or Display Name is not unique, the derived OID could end up being an unintended identity
RBAC User Role Significance # RBAC roles and access control
GEN1 Behavior: All users in the RBAC Owner role are superusers. All other users (non-superusers) need to have permissions that abide by the file and folder ACLs. Refer here for more details GEN2 Behavior: All users in the RBAC Storage Blob Data Owner role are superusers. All other users can be provided different roles (Contributor, Reader, etc.) that govern their read, write, and delete permissions; these take precedence over the ACLs set on an individual file or folder. Refer here for more details
GEN1 Behavior: Permissions for an item(file/directory) cannot be inherited from the parent items. Refer here GEN2 Behavior: Permissions are only inherited if default permissions have been set on the parent items before the child items have been created. Refer here User Provided Permission on File/Directory Creation # Create a file/directory with explicit permission
GEN1 Behavior: The file/directory is created, and the final permission will be the same as the user provided permission GEN2 Behavior: The file/directory is created, and the final permission will be computed as [user provided permission ^ umask (currently 027 in code)] Set Permission with No Permission Provided # The set permission API is called with permission = null/space or the permission parameter not present
GEN1 Behavior: A default value of 770 is set for both file and directory GEN2 Behavior: Gen2 will return bad request as permission parameter is mandatory Nested File or Directory Creation For Non-Owner User # Check if wx permission on parent is copied to nested file/directory when non-owner creates it. (i.e. dir1 exists and user desires to create dir2/dir3/a.txt or dir2/dir3/dir4)
GEN1 Behavior: Adds wx permissions for owner in the sub directory GEN2 Behavior: Doesn\u0026rsquo;t add wx permissions in the sub directory Umask Support # Permissions of file/directory can be controlled by applying UMASK on it.
GEN1 Behavior: Client needs to apply umask on the permission on new file/directory before sending the request to server. Note: Server doesn\u0026rsquo;t provide explicit support in accepting umask as an input
GEN2 Behavior: Clients can provide umask as request query params during file and directory creations. If client does not pass umask parameter, default umask 027 will be applied on file/directory References # ACL in ADLS Gen2 ACL in ADLS Gen1 Securing data stored in Azure Data Lake Storage Gen1 `}),e.add({id:20,href:"/Storage/docs/hpc-iot-and-ai/",title:"HPC IoT and AI",section:"Docs",content:` HPC IoT and AI # Coming Soon. . .
`}),e.add({id:21,href:"/Storage/docs/analytics/adls-gen1-to-gen2-migration/incremental/",title:"Incremental Copy Pattern Guide: A quick start template",section:"Azure Data Lake Storage Gen1 to Gen2 Migration Sample",content:` Incremental Copy Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the Incremental copy pattern from Azure Data Lake Storage 1 (Gen1) to Azure Data Lake Storage 2 (Gen2) using Azure Data Factory and PowerShell. As such, it provides the directions, references, and sample code examples of the PowerShell functions being used. It is intended to be followed as a series of steps to implement the solution from a local machine. This guide covers the following tasks:
Set up kit for Incremental copy pattern from Gen1 to Gen2 Data Validation between Gen1 and Gen2 post migration Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2
For more details please refer to create azure storage account
Azure Key Vault
Required keys and secrets to be configured here.
Service principal with read, write and execute permission to the resource group, key vault, data lake store Gen1 and data lake store Gen2.
To learn more, see create service principal account and to provide SPN access to Gen1 refer to SPN access to Gen1
Windows PowerShell ISE
Note: Run as administrator
# Run the below code to enable running PS files
Set-ExecutionPolicy Unrestricted

# Check for the below modules in PowerShell. If not present, install them one by one:
Install-Module Az.Accounts -AllowClobber -Force
Install-Module Az.DataFactory -AllowClobber -Force
Install-Module Az.KeyVault -AllowClobber -Force
Install-Module Az.DataLakeStore -AllowClobber -Force
Install-Module PowerShellGet -Repository PSGallery -Force

# Close the PowerShell ISE and reopen as administrator. Then run:
Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force

Limitations # This version of the code has the below limitations:
Gen1 & Gen2 should be in the same subscription. Supports only a single Gen1 source and Gen2 destination. The trigger event is a manual process for incremental copy. Code developed and supported only in Windows PowerShell ISE. Migration Framework Setup # Download the migration source code here to your local machine:
Note: To avoid a security warning error, right click the downloaded zip folder -> Properties -> General -> check the Unblock option under the Security section. Unzip and extract the folder.
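The same unblock-and-extract steps can also be done from PowerShell with the built-in Unblock-File and Expand-Archive cmdlets; below is a minimal sketch, assuming the download was saved as migration-source.zip (a placeholder name, not from the original kit):

# Remove the mark-of-the-web that triggers the security warning
Unblock-File -Path .\migration-source.zip
# Extract the contents to a working folder
Expand-Archive -Path .\migration-source.zip -DestinationPath .\migration-source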
The folder will contain:
Configuration: This folder will have the configuration file IncrementalLoadConfig.json and all the details of resource group and subscription along with source and destination path of ADLS Gen1 and Gen2.
Migration: Contains the json files, templates to create dynamic data factory pipeline and copy the data from Gen1 to Gen2.
Validation: Contains the PowerShell scripts which will read the Gen1 and Gen2 data and validate it post migration to generate post migration report.
StartIncrementalLoadMigration.ps1: Script to invoke the migration activity by creating an incremental pipeline in the data factory.
StartIncrementalLoadValidation.ps1: Script to invoke the Validation process to compare the data between Gen1 and Gen2 post migration and generate summary report.
Note: The StartFullLoadMigrationAndValidation.ps1 script is to migrate the full data load from Gen1 to Gen2.
Set up the configuration file to connect to Azure Data Factory:
Important Prerequisite:
Provide the service principal access to configure the key vault as below. Make an entry of the Gen2 connection string in the key vault as shown below:

// Below is the code snapshot for setting the configuration file
// to connect to Azure Data Factory:
"gen1SourceRootPath" : "https://<<Enter the Gen1 source root path>>.azuredatalakestore.net/webhdfs/v1",
"gen2DestinationRootPath" : "https://<<Enter the Gen2 destination root path>>.dfs.core.windows.net",
"tenantId" : "<<Enter the tenantId>>",
"subscriptionId" : "<<Enter the subscriptionId>>",
"servicePrincipleId" : "<<Enter the servicePrincipleId>>",
"servicePrincipleSecret" : "<<Enter the servicePrincipleSecret Key>>",
"keyVaultName" : "<<Enter the keyVaultName>>",
"factoryName" : "<<Enter the factoryName>>",
"resourceGroupName" : "<<Enter the resourceGroupName under which the azure data factory pipeline will be created>>",
"location" : "<<Enter the location>>",
"overwrite" : "Enter the value" // True = overwrite the existing data factory, False = skip creating the data factory

Scheduling the factory pipeline for incremental copy pattern
"pipelineId" : "Enter a distinct pipeline id, e.g. 1,2,3,..40",
"isChurningOrIsIncremental" : "true",
"triggerFrequency" : "Provide the frequency in Minute or Hour",
"triggerInterval" : "Enter the time interval for scheduling (minimum trigger interval = 15 minutes)",
"triggerUTCStartTime" : "Enter the UTC time to start the factory for the incremental copy pattern, e.g. 2020-04-09T18:00:00Z",
"triggerUTCEndTime" : "Enter the UTC time to end the factory for the incremental copy pattern, e.g. 2020-04-10T13:00:00Z",
"pipelineDetails":[
// Activity 1
"sourcePath" : "Enter the Gen1 full path, e.g. /path-name",
"destinationPath" : "Enter the Gen2 full path, e.g. path-name",
"destinationContainer" : "Enter the Gen2 container name"
// Activity 2
"sourcePath" : "Enter the Gen1 full path, e.g. /path-name",
"destinationPath" : "Enter the Gen2 full path, e.g. path-name",
"destinationContainer" : "Enter the Gen2 container name"
// Note: The maximum number of activities per pipeline is 40

Note: The destinationPath string will not include the Gen2 container name; it will have the same file path as Gen1. Review the Configuration/IncrementalLoadConfig.json script for more reference.
Azure Data Factory pipeline creation and execution
Run the script StartIncrementalLoadMigration.ps1 to start the incremental copy process.
Azure Data Factory pipeline monitoring
The pipeline will be created in Azure Data Factory and can be monitored as below:
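Besides the Data Factory monitoring UI, recent pipeline runs can also be inspected from PowerShell with the Az.DataFactory module installed earlier. A minimal sketch, assuming the factory and resource group names from your configuration file are filled in:

# List pipeline runs updated in the last day and show their status
$runs = Get-AzDataFactoryV2PipelineRun -ResourceGroupName "<resourceGroupName>" `
    -DataFactoryName "<factoryName>" `
    -LastUpdatedAfter (Get-Date).AddDays(-1) `
    -LastUpdatedBefore (Get-Date)
$runs | Select-Object PipelineName, RunStart, RunEnd, Status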
Data Validation # This step will validate the Gen1 and Gen2 data based on file path and file size.
Data Validation Prerequisites # No Incremental copy should be happening before running the validation script.
Stop the trigger in the Azure Data Factory as below:
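If you prefer PowerShell over the portal, the trigger can also be stopped with the Az.DataFactory module; a minimal sketch, where the trigger name is a placeholder you should replace with the one created by the framework:

# Stop the scheduled trigger so no new incremental copies start during validation
Stop-AzDataFactoryV2Trigger -ResourceGroupName "<resourceGroupName>" `
    -DataFactoryName "<factoryName>" `
    -Name "<triggerName>" -Force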
Run the script StartIncrementalLoadValidation.ps1 in PowerShell.
Note: This script should be run only after the Azure Data Factory pipeline run is complete (run status = succeeded).
Data Comparison Report # Once the Gen1 and Gen2 data is compared and validated, the result is generated as a CSV file in the Output folder, as below:
The CSV file will show the matched and unmatched records with the Gen1 and Gen2 file paths, Gen1 and Gen2 file sizes, and the IsMatching status.
Note: IsMatching status = Yes (for matched records) and No (for unmatched records)
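To quickly surface only the unmatched records, the generated report can be filtered in PowerShell. A minimal sketch, assuming a report named ComparisonReport.csv (the actual file name in the Output folder may differ) with the IsMatching column described above:

# Load the comparison report and list only the records that did not match
Import-Csv -Path .\Output\ComparisonReport.csv |
    Where-Object { $_.IsMatching -eq 'No' } |
    Format-Table -AutoSize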
Application update # This step will configure the paths in the workloads and applications to the Gen2 endpoint.
Please refer to Application and Workload Update
References # Azure Data Lake Storage migration from Gen1 to Gen2 Azure Databricks guide

Lift and Shift Copy Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a step-by-step guide for the lift and shift copy pattern from Gen1 to Gen2 storage using Azure Data Factory and PowerShell. As such, it provides directions, references, and sample code examples for the PowerShell functions being used.
This guide covers the following tasks:
Set up kit for lift and shift copy pattern from Gen1 to Gen2 Data Validation between Gen1 and Gen2 post migration Application update for the workloads Considerations for using the lift and shift pattern
Cutover from Gen1 to Gen2 for all workloads at the same time. Expect downtime during the migration and the cutover period. Ideal for pipelines that can afford downtime and all apps can be upgraded at one time. Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details please refer to create azure storage account
Azure Key Vault. Required keys and secrets to be configured here.
Service principal with read, write and execute permission to the resource group, key vault, data lake store Gen1 and data lake store Gen2. To learn more, see create service principal account and to provide SPN access to Gen1 refer to SPN access to Gen1
Windows PowerShell ISE.
Note: Run as administrator

# Run the below code to enable running PS files
Set-ExecutionPolicy Unrestricted

# Check for the below modules in PowerShell. If not present, install them one by one:
Install-Module Az.Accounts -AllowClobber -Force
Install-Module Az.DataFactory -AllowClobber -Force
Install-Module Az.KeyVault -AllowClobber -Force
Install-Module Az.DataLakeStore -AllowClobber -Force
Install-Module PowerShellGet -Repository PSGallery -Force

# Close the PowerShell ISE and reopen as administrator. Then run:
Install-Module az.storage -RequiredVersion 1.13.3-preview -Repository PSGallery -AllowClobber -AllowPrerelease -Force

Limitations # This version of the code has the below limitations:
Gen1 & Gen2 should be in the same subscription. Supports only a single Gen1 source and Gen2 destination. Code developed and supported only in Windows PowerShell ISE. Migration Framework Setup # This section will help you with the steps needed to set up the framework and get started with the migration process.
Get Started # Download the migration source code located here to your local machine:
Note: To avoid a security warning error, right click the downloaded zip folder -> Properties -> General -> check the Unblock option under the Security section. Unzip and extract the folder.
The download will contain the below listed contents:
Application: This folder will have sample code for Mount path configuration.
Configuration: This folder will have the configuration file FullLoadConfig.json and all the required details of resource group and subscription along with source and destination path of ADLS Gen1 and Gen2.
Migration: Contains the templates to create dynamic data factory pipeline and copy the data from Gen1 to Gen2.
Validation: Contains the PowerShell scripts which will read the Gen1 and Gen2 data and write the comparison report post migration.
StartFullLoadMigrationAndValidation.ps1: Script to invoke the full load Migration and Validation process to compare the data between Gen1 and Gen2 post migration and generate summary report.
How to Set up Configuration file # Important Prerequisite:
Provide Service principal access to configure key vault as below:
Make an entry of the Gen2 access key in the key vault as shown below:
Below is the code snapshot for setting the configuration file to connect to Azure Data Factory:
"gen1SourceRootPath" : "https://<<Enter the Gen1 source root path>>.azuredatalakestore.net/webhdfs/v1",
"gen2DestinationRootPath" : "https://<<Enter the Gen2 destination root path>>.dfs.core.windows.net",
"tenantId" : "<<Enter the tenantId>>",
"subscriptionId" : "<<Enter the subscriptionId>>",
"servicePrincipleId" : "<<Enter the servicePrincipleId>>",
"servicePrincipleSecret" : "<<Enter the servicePrincipleSecret Key>>",
"keyVaultName" : "<<Enter the keyVaultName>>",
"factoryName" : "<<Enter the factoryName>>",
"resourceGroupName" : "<<Enter the resourceGroupName under which the azure data factory pipeline will be created>>",
"location" : "<<Enter the location>>",
"overwrite" : "Enter the value" // True = overwrite the existing data factory, False = skip creating the data factory

Setting up the factory pipeline for lift and shift copy pattern
"pipelineId": "<<Enter the pipeline number. For example: 1,2>>",
"fullLoad": "true"
// Activity 1
"sourcePath" : "Enter the Gen1 full path. For example: /path-name",
"destinationPath" : "Enter the Gen2 full path. For example: path-name",
"destinationContainer" : "Enter the Gen2 container name"
// Activity 2
"sourcePath" : "Enter the Gen1 full path. For example: /path-name",
"destinationPath" : "Enter the Gen2 full path. For example: path-name",
"destinationContainer" : "Enter the Gen2 container name"

NOTE: The destinationPath string will not include the Gen2 container name; it will have the same file path as Gen1. See the FullLoadConfig.json script for more reference.
Azure data factory pipeline creation and execution # Run the script StartFullLoadMigrationAndValidation.ps1 which will trigger the migration and validation process. This step will create the data factory as per the configuration file.
Azure Data Factory pipeline monitoring # The pipeline is created in Azure Data Factory and can be monitored as below:
Data Validation # The StartFullLoadMigrationAndValidation.ps1 script triggers the data validation process between Gen1 and Gen2 once the migration is completed in the above step.
To monitor the execution details for each copy activity, select the Details link (eyeglasses image) under Actions in the activity monitoring view. You can monitor details like the volume of data copied from the source to the sink, data throughput, execution steps with corresponding duration, and used configurations.
Verify that the data is copied into your Azure Data Lake Storage Gen2 account.
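One way to spot-check the copied data from PowerShell is to enumerate the destination container with the Az.Storage data lake cmdlets. A minimal sketch, assuming placeholder account, key and container names:

# Connect to the Gen2 account and list what was copied
$ctx = New-AzStorageContext -StorageAccountName "<gen2AccountName>" -StorageAccountKey "<accountKey>"
Get-AzDataLakeGen2ChildItem -Context $ctx -FileSystem "<destinationContainer>" -Recurse |
    Select-Object Path, Length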
Data Comparison Report # Once the Gen1 and Gen2 data is compared and validated, the result is generated as a CSV file in the Output folder, as below:
The CSV file will show the matched and unmatched records with the file path, file size and permissions for the Gen1 and Gen2 records, and the IsMatching status.
Note: IsMatching status = Yes (for matched records) and No (for unmatched records)
Application update # This step will configure the paths in the workloads to the Gen2 endpoint.
Refer to Application and Workload Update on how to plan and migrate workloads and applications to Gen2.
References # Migrate Azure Data Lake Storage from Gen1 to Gen2

Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and don't read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account. More info
What if you wanted the convenience of Kusto queries but had a requirement to use the classic storage logging?
You can achieve this using Data Explorer. This is the same datastore technology that Azure Monitor uses. Additionally, you can automate the ingestion of the text logs into Data Explorer with Data Factory.
Read about both those technologies here: Data Factory (ADF) Data Explorer (ADX)
Step by Step # Create your storage account (if not already done) More Info
Enable Storage logs (if not already done) More Info
Create a Data Explorer Cluster \u0026amp; Database More Info
You can now create a table to store the logs; this is the script that I used.

.create table storagelogs (
    VersionNumber: string,
    RequestStartTime: datetime,
    OperationType: string,
    RequestStatus: string,
    HttpStatusCode: string,
    EndToEndLatencyInMS: long,
    ServerLatencyInMs: long,
    AuthenticationType: string,
    RequesterAccountName: string,
    OwnerAccountName: string,
    ServiceType: string,
    RequestUrl: string,
    RequestedObjectKey: string,
    RequestIdHeader: guid,
    OperationCount: int,
    RequesterIpAddress: string,
    RequestVersionHeader: string,
    RequestHeaderSize: long,
    RequestPacketSize: long,
    ResponseHeaderSize: long,
    ResponsePacketSize: long,
    RequestContentLength: long,
    RequestMd5: string,
    ServerMd5: string,
    EtagIdentifier: string,
    LastModifiedTime: datetime,
    ConditionsUsed: string,
    UserAgentHeader: string,
    ReferrerHeader: string,
    LogSource: string)

See log format details here
Create the Azure Data Factory here (just this section, not the entire lab)
The last step should be to launch the "Author & Monitor" tool.
When the tool is launched, select "Copy Data".
Give your task a name/description, select Tumbling Window, and set the recurrence to the period you want. Press Next.
Shorter windows will reduce the delay between when something is logged and when it appears in Data Explorer; however, they will increase your Data Factory costs.
Select "create a new connection" and "Azure Blob Storage" as the linked service. Populate the configuration for the linked service, selecting the options that are appropriate for your environment, test the connection, then create the linked service. Then press Next to select the source dataset.
I am using a managed identity here; I have given this managed identity "Storage Blob Data Reader" permissions on the storage account.
Choose the input file or folder and type $logs/ in the file or folder box. NOTE: you cannot use the browse feature. Select Incremental Load: LastModifiedDate, and press Next.
This should now pull some data from the container so you can tell ADF how to parse the log files. Here I leave the defaults, but ADD an additional column to record which log file each record came from. Press Next.
Pick your destination: select "create new connection", select Azure Data Explorer, and populate the configuration for the linked service. Select the options that are appropriate for your environment, test the connection, then create the linked service. Then press Next to select the destination dataset.
I am using a managed identity here; I have given this managed identity "Admin" permissions on the database (NOT the Data Explorer server). You need admin to do the mapping in the next step.
You should now be prompted to select a table in Data Explorer to load the data into; do so, then select Next.
You should now see that ADF has mapped our source columns to the destination columns one by one for us; all these defaults should be good. Press Next.
Adjust the settings per your needs; I am just going to leave the defaults and press Next.
Review the summary and press Next; it should validate and set everything up for you. Now your ADF job is all set up. When new logs arrive, they will be parsed and inserted into ADX.
You should now be able to query your ADX database to review the logs.
Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency, depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data. For example, if two users viewing a wiki page make an update to that page, then the wiki platform must ensure that the second update does not overwrite the first update. It must also ensure that both users understand whether their update was successful. This strategy is most often used in web applications.
Pessimistic concurrency: An application looking to perform an update will take a lock on an object, preventing other users from updating the data until the lock is released. For example, in a primary/secondary data replication scenario in which only the primary performs updates, the primary typically holds an exclusive lock on the data for an extended period to ensure no one else can update it.
Last writer wins: An approach that allows update operations to proceed without first determining whether another application has updated the data since it was read. This approach is typically used when data is partitioned in such a way that multiple users will not access the same data at the same time. It can also be useful where short-lived data streams are being processed.
Azure Storage supports all three strategies, although it is distinctive in its ability to provide full support for optimistic and pessimistic concurrency. Azure Storage was designed to embrace a strong consistency model that guarantees that after the service performs an insert or update operation, subsequent read operations return the latest update. With the conditional headers present in the blob service REST API, you can control all this logic in your applications, using headers such as If-Match and If-None-Match.
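To make the optimistic strategy concrete, the sketch below reads a blob's current ETag and then only overwrites the blob if it has not changed since the read. This is a minimal illustration, not code from the article; it assumes $blobSasUrl is a valid SAS URL (with write permission) for an existing block blob:

# Read the blob's current ETag
$etag = (Invoke-WebRequest -Uri $blobSasUrl -Method Head).Headers.ETag
# Attempt the update; the service returns 412 (Precondition Failed)
# if another writer changed the blob after our read
Invoke-RestMethod -Uri $blobSasUrl -Method Put -Body "new content" -Headers @{
    'If-Match'       = $etag
    'x-ms-blob-type' = 'BlockBlob'
}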
To understand how blob versioning can help with managing concurrent uploads, imagine the following scenario which requires an optimistic concurrency strategy:
Proseware, Inc. is developing a new mobile application which will allow customers to manage photos, such as memes and GIFs, on their devices. To ensure customers can access their libraries from multiple devices, customers will be able to back up the contents of their libraries to the cloud. Customers will also be able to browse an existing marketplace of memes and GIFs and sync them to their library, or publish their own unique memes to the marketplace. This means that multiple customers will need access to the same files, and Proseware, Inc. will implement de-duplication logic in the application to make sure that only one copy of a file exists at a time in cloud storage, regardless of the number of customers who have that file in their library.
If User A and User B both have the same animated GIF on their device, when the clients upload their backups to cloud storage only one copy should remain. Proseware, Inc. cannot control when clients will upload and has determined that they will always attempt to upload the file. If it is a new file, it should be stored in the service in a manner that, if another user attempts to upload the same file, only one copy will be retained over time. If the file already exists, only one copy is needed, and the upload can be discarded.
Blob versioning allows for the automatic retention of previous versions, with a new version created on each successful call to Put Blob or Put Block List. When combined with lifecycle management in blob storage, versions older than a specified number of days can be automatically deleted or moved to a different tier of storage (e.g., from hot to cool). With blob versioning, the application can be built in a way that allows every client to always attempt their upload. If the file already exists, the upload will succeed, adding a new version which can then be safely deleted with lifecycle management after the specified retention period has passed. This removes the complexity of having to manage conditional headers and allows the application to meet its goals with out-of-the-box features of Azure blob storage that are easily configurable through the Azure portal.
Now that we understand the scenario and have seen how blob versioning can help, let's look at a sample upload of the same file by four mobile clients.
Client 1 (BlockId QUFB) and Client 2 (BlockId QkJC) upload awesomestmemeever.gif simultaneously.
PUT https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?comp=block&blockid=QkJC&sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX
PUT https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?comp=block&blockid=QUFB&sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX

Client 2's upload finishes with a successful call to Put Block List before Client 1's. awesomestmemeever.gif is saved with VersionId 2021-01-08T20:38:09.3842765Z. The committed block list can be retrieved.
GET https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?comp=blocklist&blocklisttype=all&sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX

<?xml version="1.0" encoding="utf-8"?>
<BlockList>
  <CommittedBlocks>
    <Block>
      <Name>QkJC</Name>
      <Size>2495317</Size>
    </Block>
  </CommittedBlocks>
  <UncommittedBlocks />
</BlockList>

Client 1's upload finishes but cannot be committed, as its block list was purged when Client 2's upload was saved. Client 1 will receive an HTTP 400 InvalidBlockList exception. Client 1 issues a HEAD request to see if the file exists, as it may have been uploaded by another client.
HEAD https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX

If the blob has been successfully committed by another client, Client 1 can disregard the error; if the blob is not present for any other reason, the upload can be repeated.
Client 3 attempts to upload the same file but experiences a transient network error. Put Block List is not called successfully, because blocks are missing from the uncommitted block list, so uncommitted blocks are left behind. The uncommitted blocks are retained alongside the current version (the last successful upload, from Client 2).
GET https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?comp=blocklist&blocklisttype=all&sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX

<?xml version="1.0" encoding="utf-8"?>
<BlockList>
  <CommittedBlocks>
    <Block>
      <Name>QkJC</Name>
      <Size>2495317</Size>
    </Block>
  </CommittedBlocks>
  <UncommittedBlocks>
    <Block>
      <Name>Q0ND</Name>
      <Size>2495317</Size>
    </Block>
  </UncommittedBlocks>
</BlockList>

Client 4 uploads the same file and is successful. The uncommitted blocks from Client 3's request are purged, and a new version is created, VersionId 2021-01-08T20:54:36.7150246Z.
GET https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?comp=blocklist&blocklisttype=all&sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX

<?xml version="1.0" encoding="utf-8"?>
<BlockList>
  <CommittedBlocks>
    <Block>
      <Name>RERE</Name>
      <Size>2495317</Size>
    </Block>
  </CommittedBlocks>
  <UncommittedBlocks />
</BlockList>

After one day, the versions from the previous day are deleted and only the base blob remains.
HEAD https://prosewarememestorage.blob.core.windows.net/backups/awesomestmemeever.gif?sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX

Note that in this approach, there is no need for the If-None-Match: * conditional header. Clients can simultaneously upload to the same blob, and a new version will be created for each successful call to Put Block List or Put Blob. For Get Blob requests, if a versionid is not specified in the parameters, the latest version of the blob will be retrieved; the calling application can also provide a valid versionid to retrieve a previous version before it is deleted through the lifecycle management rule. If needed, the current versions can be retrieved using List Blobs.
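For example, the blob's versions can be listed with the Az.Storage PowerShell module as well as with the REST List Blobs call shown below; a minimal sketch, assuming an AAD-authenticated context and a recent Az.Storage version that supports the -IncludeVersion switch:

# List all versions of the blob alongside the current one
$ctx = New-AzStorageContext -StorageAccountName "prosewarememestorage" -UseConnectedAccount
Get-AzStorageBlob -Container "backups" -Prefix "awesomestmemeever.gif" -IncludeVersion -Context $ctx |
    Select-Object Name, VersionId, IsLatestVersion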
GET https://prosewarememestorage.blob.core.windows.net/test?restype=container&comp=list&include=versions&sv=2019-12-12&ss=bfqt&srt=sco&sp=rwdlacuptfx&se=2021-01-16T04:18:47Z&st=2021-01-08T20:18:47Z&spr=https&sig=XXXXXXXXXXXX&prefix=awesomestmemeever.gif

The following is a sample lifecycle management rule, filtered to blob versions only, that deletes versions older than 1 day:
{
  "rules": [
    {
      "enabled": true,
      "name": "DeleteVersionsOlderThan1Day",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "version": {
            "delete": {
              "daysAfterCreationGreaterThan": 1
            }
          }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ]
        }
      }
    }
  ]
}

In conclusion, blob versioning allows for both multiple uploads from clients and automated deletion of data that is no longer required, while retaining the base blob. Only committed data is retained, and there is no need for the use of conditional headers.
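As a side note, the same rule can be deployed without hand-editing JSON by using the Az.Storage management policy cmdlets. A minimal sketch with placeholder resource names; blob version actions require a recent Az.Storage version:

# Build the action (delete versions older than 1 day), filter and rule, then apply
$action = Add-AzStorageAccountManagementPolicyAction -BlobVersionAction Delete -DaysAfterCreationGreaterThan 1
$filter = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob
$rule   = New-AzStorageAccountManagementPolicyRule -Name "DeleteVersionsOlderThan1Day" -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "<resourceGroup>" -StorageAccountName "<accountName>" -Rule $rule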
References # Managing Concurrency in Blob storage Blob service REST API Specifying conditional headers for Blob service operations Get Blob List Blobs Put Blob Put Block List Optimize costs by automating Azure Blob Storage access tiers

NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on-premises datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com

Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference

Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil & Gas # Cegal Interica PixStor Tiger Tech Xen Data

The Hitchhiker's Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker's Guide to the Data Lake
When is ADLS Gen2 the right choice for your data lake?
Key considerations in designing your data lake
Terminology
Organizing and managing data in your data lake
Do I want a centralized or a federated data lake implementation?
How do I organize my data?
How do I manage access to my data?
What data format do I choose?
How do I manage my data lake cost?
How do I monitor my data lake?
Optimizing your data lake for better scale and performance
File sizes and number of files
File Formats
Partitioning schemes
Use Query Acceleration
Recommended reading
Questions, comments or feedback?

Azure Data Lake Storage Gen2 (ADLS Gen2) is a highly scalable and cost-effective data lake solution for big data analytics. As we continue to work with our customers to unlock key insights out of their data using ADLS Gen2, we have identified a few key patterns and considerations that help them effectively utilize ADLS Gen2 in large scale Big Data platform architectures.
This document captures these considerations and best practices that we have learnt from working with our customers. For the purposes of this document, we will focus on the Modern Data Warehouse pattern used prolifically by our large-scale enterprise customers on Azure, including our solutions such as Azure Synapse Analytics.
We will improve this document to include more analytics patterns in future iterations.
Important: Please consider the content of this document as guidance and best practices to help you make your architectural and implementation decisions. This is not official HOW-TO documentation.
When is ADLS Gen2 the right choice for your data lake? # An enterprise data lake is designed to be a central repository of unstructured, semi-structured and structured data used in your big data platform. The goal of the enterprise data lake is to eliminate data silos (where the data can only be accessed by one part of your organization) and promote a single storage layer that can accommodate the various data needs of the organization. For more information on picking the right storage for your solution, please visit the Choosing a big data storage technology in Azure article.
A common question that comes up is when to use a data warehouse vs a data lake. We urge you to think about data lake and data warehouse as complementary solutions that work together to help you derive key insights from your data. A data lake is a store for all types of data from various sources. The data in its natural form is stored as raw data, and schema and transformations are applied on this raw data to gain valuable business insights depending on the key questions the business is trying to answer. A data warehouse is a store for highly structured schematized data that is usually organized and processed to derive very specific insights. E.g. a retail customer can store the past 5 years' worth of sales data in a data lake, and in addition they can process data from social media to extract the new trends in consumption and intelligence from retail analytics solutions on the competitive landscape, and use all these as input together to generate a data set that can be used to project the next year's sales targets. They can then store the highly structured data in a data warehouse where BI analysts can build the target sales projections. In addition, they can use the same sales data and social media trends in the data lake to build intelligent machine learning models for personalized recommendations on their website.
ADLS Gen2 is an enterprise-ready, hyperscale repository of data for your big data analytics workloads. ADLS Gen2 offers faster performance and Hadoop-compatible access with the hierarchical namespace, and lower cost and security with fine-grained access controls and native AAD integration. This lends itself as the choice for your enterprise data lake focused on big data analytics scenarios: extracting high-value structured data out of unstructured data using transformations, advanced analytics using machine learning, or real-time data ingestion and analytics for fast insights. It's worth noting that we have seen customers have different definitions of what hyperscale means: it depends on the data stored, the number of transactions and the throughput of the transactions. When we say hyperscale, we are typically referring to multi-petabytes of data and hundreds of Gbps in throughput; the challenges involved with this kind of analytics are very different from those of a few hundred GB of data and a few Gbps of transactions in throughput.
Key considerations in designing your data lake # As you are building your enterprise data lake on ADLS Gen2, it's important to understand your requirements around your key use cases, including:
What am I storing in my data lake? How much data am I storing in the data lake? What portion of your data do you run your analytics workloads on? Who needs access to what parts of my data lake? What are the various analytics workloads that I'm going to run on my data lake? What are the various transaction patterns on the analytics workloads? What is the budget I'm working with? We would like to anchor the rest of this document in the following structure for a few key design/architecture questions that we have heard consistently from our customers.
Available options with the pros and cons Factors to consider when picking the option that works for you Recommended patterns where applicable Anti-patterns that you want to avoid To best utilize this document, identify your key scenarios and requirements and weigh our options against your requirements to decide on your approach. If you are not able to pick an option that perfectly fits your scenarios, we recommend that you do a proof of concept (PoC) with a few options to let the data guide your decision.
Terminology # Before we talk about the best practices in building your data lake, it's important to get familiar with the various terminology we will use in this document in the context of building your data lake with ADLS Gen2. This document assumes that you have an account in Azure.
Resource: A manageable item that is available through Azure. Virtual machines, storage accounts, VNETs are examples of resources.
Subscription: An Azure subscription is a logical entity that is used to separate the administration and financial (billing) logic of your Azure resources. A subscription is associated with limits and quotas on Azure resources, you can read about them here.
Resource group: A logical container to hold the resources required for an Azure solution can be managed together as a group. You can read more about resource groups here.
Storage account: An Azure resource that contains all of your Azure Storage data objects: blobs, files, queues, tables and disks. You can read more about storage accounts here. For the purposes of this document, we will be focusing on the ADLS Gen2 storage account, which is essentially an Azure Blob Storage account with Hierarchical Namespace enabled; you can read more about it here.
Container (also referred to as a container for non-HNS enabled accounts): A container organizes a set of objects (or files). A storage account has no limits on the number of containers, and a container can store an unlimited number of folders and files. There are properties that can be applied at the container level, such as RBACs and SAS keys.
Folder/Directory: A folder (also referred to as a directory) organizes a set of objects (other folders or files). There are no limits on how many folders or files can be created under a folder. A folder also has access control lists (ACLs) associated with it; there are two types of ACLs associated with a folder: access ACLs and default ACLs. You can read more about them here.
Object/file: A file is an entity that holds data that can be read/written. A file has an access control list associated with it. A file has only access ACLs and no default ACLs.
Organizing and managing data in your data lake # As our enterprise customers build out their data lake strategy, one of the key value propositions of ADLS Gen2 is to serve as the single data store for all their analytics scenarios. Please remember that this single data store is a logical entity that could manifest either as a single ADLS Gen2 account or as multiple accounts depending on the design considerations. Some customers have end to end ownership of the components of an analytics pipeline, and other customers have a central team/organization managing the infrastructure, operations and governance of the data lake while serving multiple customers, either other organizations in their enterprise or other customers external to their enterprise.
In this section, we have addressed our thoughts and recommendations on the common set of questions that we hear from our customers as they design their enterprise data lake. For illustration, we will take the example of a large retail customer, Contoso.com, building out their data lake strategy to help with various predictive analytics scenarios.
Do I want a centralized or a federated data lake implementation? # As an enterprise data lake, you have two available options: either centralize all the data management for your analytics needs within one organization, or have a federated model, where your customers manage their own data lakes while the centralized data team provides guidance and also manages a few key aspects of the data lake such as security and data governance. It is important to remember that both the centralized and federated data lake strategies can be implemented with one single storage account or multiple storage accounts.
A common question our customers ask us is if they can build their data lake in a single storage account or if they need multiple storage accounts. While technically a single ADLS Gen2 account could solve your business needs, there are various reasons why a customer would choose multiple storage accounts, including, but not limited to, the scenarios in the rest of this section.
Key considerations # When deciding how many storage accounts you want to provision, the following considerations are helpful.
A single storage account gives you the ability to manage a single set of control plane management operations, such as RBACs, firewall settings and data lifecycle management policies, for all the data in your storage account, while allowing you to organize your data using containers, files and folders on the storage account. If you want to optimize for ease of management, especially if you adopt a centralized data lake strategy, this would be a good model to consider. Multiple storage accounts provide you the ability to isolate data across different accounts so that different management policies can be applied to them, or to manage their billing/cost logic separately. If you are considering a federated data lake strategy with each organization or business unit having their own set of manageability requirements, then this model might work best for you. Let us put these aspects in context with a few scenarios.
Enterprise data lake with a global footprint # Driven by global markets and/or geographically distributed organizations, there are scenarios where enterprises have their analytics scenarios factoring multiple geographic regions. The data itself can be categorized into two broad categories.
Data that can be shared globally across all regions: E.g. Contoso is trying to project their sales targets for the next fiscal year and want to get the sales data from their various regions. Data that needs to be isolated to a region: E.g. Contoso wants to provide a personalized buyer experience based on their profile and buying patterns. Given this is customer data, there are sovereignty requirements that need to be met, so the data cannot leave the region. In this scenario, the customer would provision region-specific storage accounts to store data for a particular region and allow sharing of specific data with other regions. There is still one centralized logical data lake, with a central set of infrastructure management, data governance and other operations, that comprises multiple storage accounts.
Customer or data specific isolation # There are scenarios where enterprise data lakes serve multiple customer (internal/external) scenarios that may be subject to different requirements: different query patterns and different access requirements. Let us take our Contoso.com example, where they have analytics scenarios to manage the company operations. In this case, they have various data sources: employee data, customers/campaign data and financial data that are subject to different governance and access rules and are also possibly managed by different organizations within the company. In this case, they could choose to create different data lakes for the various data sources.
In another scenario, enterprises that serve as a multi-tenant analytics platform serving multiple customers could end up provisioning individual data lakes for their customers in different subscriptions to help ensure that the customer data and their associated analytics workloads are isolated from other customers to help manage their cost and billing models.
Recommendations # Create different storage accounts (ideally in different subscriptions) for your development and production environments. In addition to ensuring that there is enough isolation between your development and production environments requiring different SLAs, this also helps you track and optimize your management and billing policies efficiently. Identify the different logical sets of your data and think about your needs to manage them in a unified or isolated fashion; this will help determine your account boundaries. Start your design approach with one storage account and think about reasons why you need multiple storage accounts (isolation, region-based requirements etc.) instead of the other way around. There are also subscription limits and quotas on other resources (such as VM cores, ADF instances); factor these into consideration when designing your data lake. Anti-patterns # Beware of multiple data lake management # When you decide on the number of ADLS Gen2 storage accounts, ensure that you are optimizing for your consumption patterns. If you do not require isolation and you are not utilizing your storage accounts to their fullest capabilities, you will be incurring the overhead of managing multiple accounts without a meaningful return on investment.
Copying data back and forth # When you have multiple data lakes, one thing you would want to treat carefully is if and how you are replicating data across the multiple accounts. This creates a management problem of what the source of truth is and how fresh it needs to be, and also consumes transactions in copying data back and forth. We have features on our roadmap that make this workflow easier if you have a legitimate scenario to replicate your data.
A note on scale # One common question that our customers ask is if a single storage account can infinitely continue to scale to their data, transaction and throughput needs. Our goal in ADLS Gen2 is to meet the customer where they are in terms of their limits. When you have a scenario where you require storing really large amounts of data (multi-petabytes) and require the account to support a really large transaction and throughput pattern (tens of thousands of TPS and hundreds of Gbps of throughput), typically observed by requiring thousands of cores of compute power for analytics processing via Databricks or HDInsight, please do contact our product group so we can plan to support your requirements appropriately.
How do I organize my data? # Data organization in an ADLS Gen2 account can be done in a hierarchy of containers, folders and files, in that order, as we saw above. A very common point of discussion as we work with our customers to build their data lake strategy is how they can best organize their data. There are multiple approaches to organizing the data in a data lake; this section documents a common approach that has been adopted by many customers building a data platform.
This organization follows the lifecycle of the data as it flows through the source systems all the way to the end consumers: the BI analysts or data scientists. As an example, let us follow the journey of sales data as it travels through the data analytics platform of Contoso.com.
As an example, think of the raw data as a lake/pond with water in its natural state: the data is ingested and stored as is, without transformations. The enriched data is water in a reservoir that is cleaned and stored in a predictable state (schematized, in the case of our data). The curated data is like bottled water that is ready for consumption. Workspace data is like a laboratory where scientists can bring their own data for testing. It's worth noting that while all these data layers are present in a single logical data lake, they could be spread across different physical storage accounts. In these cases, having a metastore is helpful for discovery.
Raw data: This is data as it comes from the source systems. This data is stored as is in the data lake and is consumed by an analytics engine such as Spark to perform cleansing and enrichment operations to generate the curated data. The data in the raw zone is sometimes also stored as an aggregated data set, e.g. in the case of streaming scenarios, data is ingested via a message bus such as Event Hub, and then aggregated via a real time processing engine such as Azure Stream Analytics or Spark Streaming before being stored in the data lake. Depending on what your business needs, you can choose to leave the data as is (e.g. log messages from servers) or aggregate it (e.g. real time streaming data). This layer of data is highly controlled by the central data engineering team, and access is rarely given to other consumers. Depending on the retention policies of your enterprise, this data is either stored as is for the period required by the retention policy or it can be deleted when you think the data is of no more use. E.g. this would be raw sales data that is ingested from Contoso's sales management tool that is running in their on-prem systems.
Enriched data: This layer of data is the version where raw data (as is or aggregated) has a defined schema, and the data is cleansed, enriched (with other sources) and available to analytics engines to extract high value data. Data engineers generate these datasets and also proceed to extract high value/curated data from these datasets. E.g. this would be enriched sales data: ensuring that the sales data is schematized, enriched with other product or inventory information and also separated into multiple datasets for the different business units inside Contoso.
Curated data: This layer of data contains the high value information that is served to the consumers of the data: the BI analysts and the data scientists. This data has structure and can be served to the consumers either as is (e.g. data science notebooks) or through a data warehouse. Data assets in this layer are usually highly governed and well documented. E.g. high-quality sales data (that is, data in the enriched data zone correlated with other demand forecasting signals such as social media trending patterns) for a business unit that is used for predictive analytics on determining the sales projections for the next fiscal year.
Workspace data: In addition to the data that is ingested by the data engineering team from the source, the consumers of the data can also choose to bring other data sets that could be valuable. In this case, the data platform can allocate a workspace for these consumers so they can use the curated data along with the other data sets they bring to generate valuable insights. E.g. a Data Science team is trying to determine the product placement strategy for a new region, they could bring other data sets such as customer demographics and data on usage of other similar products from that region and use the high value sales insights data to analyze the product market fit and the offering strategy.
Archive data: This is your organization's data vault, which has data stored primarily to comply with retention policies and has very restrictive usage, such as supporting audits. You can use the Cool and Archive tiers in ADLS Gen2 to store this data. You can read more about our data lifecycle management policies to identify a plan that works for you.
Key considerations # When deciding the structure of your data, consider both the semantics of the data itself as well as the consumers who access the data to identify the right data organization strategy for you.
Recommendations # Create different folders or containers (more below on considerations between folders vs containers) for the different data zones: raw, enriched, curated and workspace data sets. Inside a zone, choose to organize data in folders according to logical separation, e.g. datetime or business units or both. You can find more examples and scenarios on directory layout in our best practices document. Consider the analytics consumption patterns when designing your folder structures. E.g. if you have a Spark job reading all sales data of a product from a specific region for the past 3 months, then an ideal folder structure here would be /enriched/product/region/timestamp. Consider the access control model you would want to follow when deciding your folder structures. The table below provides a framework for you to think about the different zones of the data and the associated management of the zones with a commonly observed pattern.

| Consideration | Raw data | Enriched data | Curated data | Workspace data |
|---|---|---|---|---|
| Consumer | Data engineering team | Data engineering team, with ad hoc access patterns by the data scientists/BI analysts | Data engineers, BI analysts, data scientists | Data scientists/BI analysts |
| Access control | Locked for access by the data engineering team | Full control to the data engineering team, with read access to the BI analysts/data scientists | Full control to the data engineering team, with read and write access to the BI analysts/data scientists | Full control to data engineers, data scientists/BI analysts |
| Data lifecycle management | Once enriched data is generated, can be moved to a cooler tier of storage to manage costs. | Older data can be moved to a cooler tier. | Older data can be moved to a cooler tier. | While the end consumers have control of this workspace, ensure that there are processes and policies to clean up data that is not necessary, e.g. using policy-based DLM; the data could build up very easily. |
| Folder structure and hierarchy | Folder structure mirrors the ingestion patterns. | Folder structure mirrors the organization, e.g. business units. | Folder structure mirrors the organization, e.g. business units. | Folder structures mirror the teams that the workspace is used by. |
| Example | /raw/sensordata /raw/lobappdata /raw/userclickdata | /enriched/sales /enriched/manufacturing | /curated/sales /curated/manufacturing | /workspace/salesBI /workspace/manufacturingdatascience |

Another common question that our customers ask is when to use containers and when to use folders to organize the data. While at a higher level they are both used for logical organization of the data, they have a few key differences.

| Consideration | Container | Folder |
|---|---|---|
| Hierarchy | A container can contain folders or files. | A folder can contain other folders or files. |
| Access control using AAD | At the container level, you can set coarse-grained access controls using RBACs. These RBACs apply to all data inside the container. | At the folder level, you can set fine-grained access controls using ACLs. The ACLs apply to the folder only (unless you use default ACLs, in which case they are snapshotted when new files/folders are created under the folder). |
| Non-AAD access control | At a container level, you can enable anonymous access (via shared keys) or set SAS keys specific to the container. | A folder does not support non-AAD access control. |
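As an illustration of provisioning a zone layout like the one above, containers and folders can be created with the Az.Storage data lake cmdlets. A minimal sketch with placeholder names, assuming an HNS-enabled account and an AAD-authenticated session:

# Create zone containers and example folders (names mirror the table above)
$ctx = New-AzStorageContext -StorageAccountName "<accountName>" -UseConnectedAccount
New-AzStorageContainer -Name "enriched" -Context $ctx
New-AzStorageContainer -Name "curated" -Context $ctx
New-AzDataLakeGen2Item -Context $ctx -FileSystem "enriched" -Path "sales" -Directory
New-AzDataLakeGen2Item -Context $ctx -FileSystem "curated" -Path "sales" -Directory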
Anti-patterns # Indefinite growth of irrelevant data # While ADLS Gen2 storage is not very expensive and lets you store a large amount of data in your storage accounts, a lack of lifecycle management policies could cause the data in your account to grow very quickly, even if you don't require the entire corpus of data for your scenarios. Two common patterns where we see this kind of data growth are:
- Data refresh with a newer version of data: customers typically keep a few older versions of the data for analysis when there is a periodic refresh of the same data. E.g. when customer engagement data over the last month is refreshed daily over a rolling window of 30 days, you get 30 days of engagement data every day, and without a clean-up process in place, your data could grow very quickly.
- Workspace data accumulation: in the workspace data zone, the customers of your data platform, i.e. the BI analysts or data scientists, can bring their own data sets. We have typically seen that this data can accumulate just as easily when unused data is left lying around in the storage spaces.
How do I manage access to my data? #
ADLS Gen2 supports access control models that combine both RBACs and ACLs to manage access to the data. You can find more information about access control here. In addition to managing access with AAD identities using RBACs and ACLs, ADLS Gen2 also supports using SAS tokens and shared keys to manage access to data in your Gen2 account.
A common question that we hear from our customers is when to use RBACs and when to use ACLs to manage access to the data. RBACs let you assign roles to security principals (a user, group, service principal, or managed identity in AAD); these roles are associated with sets of permissions to the data in your container. RBACs can help manage roles related to control plane operations (such as adding other users and assigning roles, managing encryption settings, firewall rules, etc.) or data plane operations (such as creating containers, reading and writing data, etc.). For more information on RBACs, you can read this article.
RBACs are essentially scoped to top-level resources - either storage accounts or containers in ADLS Gen2. You can also apply RBACs across resources at a resource group or subscription level. ACLs let you manage a specific set of permissions for a security principal at a much narrower scope - a file or a directory in ADLS Gen2. There are 2 types of ACLs: access ACLs, which control access to a file or a directory, and default ACLs, which are templates of ACLs associated with a directory; a snapshot of these default ACLs is inherited by any child items created under that directory.
Key considerations # The table below provides a quick overview of how ACLs and RBACs can be used to manage permissions to the data in your ADLS Gen2 accounts. At a high level, use RBACs to manage coarse-grained permissions (that apply to storage accounts or containers) and use ACLs to manage fine-grained permissions (that apply to files and directories).
| Consideration | RBACs | ACLs |
| --- | --- | --- |
| Scope | Storage accounts, containers; cross-resource RBACs at the subscription or resource group level | Files, directories |
| Limits | 2000 RBACs in a subscription | 32 ACL entries (effectively 28) per file and per folder, for default and access ACLs each |
| Supported levels of permission | Built-in RBACs or custom RBACs | ACL permissions |

When using RBAC at the container level as the only mechanism for data access control, be cautious of the 2000 limit, particularly if you are likely to have a large number of containers. You can view the number of role assignments per subscription in any of the access control (IAM) blades in the portal.
Recommendations #
Create security groups for the level of permissions you want on an object (typically a directory, from what we have seen with our customers) and add them to the ACLs. For the specific security principals you want to grant permissions to, add them to the security group instead of creating specific ACLs for them. Following this practice will help you minimize the work of managing access for new identities, which would otherwise take a really long time if you had to add the new identity to every single file and folder in your container recursively.
Let us take an example where you have a directory, /logs, in your data lake with log data from your server. You ingest data into this folder via ADF, let specific users from the service engineering team upload logs and manage other users of this folder, and also have various Databricks clusters analyzing the logs. You would create the /logs directory and create two AAD groups, LogsWriter and LogsReader, with the following permissions:
- LogsWriter added to the ACLs of the /logs folder with rwx permissions.
- LogsReader added to the ACLs of the /logs folder with r-x permissions.
The SPNs/MSIs for ADF, as well as the users in the service engineering team, can be added to the LogsWriter group, and the SPNs/MSIs for Databricks to the LogsReader group.
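A minimal sketch of this /logs example in Python (using the azure-storage-file-datalake SDK) could look like the following; it assumes the two AAD groups already exist, and the container name and group object IDs shown are placeholders.

```python
# Minimal sketch: apply group-based ACLs to the /logs directory.
# The group object IDs below are placeholders for LogsWriter and LogsReader.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://contosodatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
logs = service.get_file_system_client("data").create_directory("logs")

logs_writer = "11111111-1111-1111-1111-111111111111"  # LogsWriter group (placeholder)
logs_reader = "22222222-2222-2222-2222-222222222222"  # LogsReader group (placeholder)

# The plain entries form the access ACL for /logs itself; the "default:" entries
# are the template that new children created under /logs inherit at creation time.
acl = ",".join([
    "user::rwx", "group::r-x", "mask::rwx", "other::---",
    f"group:{logs_writer}:rwx", f"group:{logs_reader}:r-x",
    "default:user::rwx", "default:group::r-x", "default:mask::rwx", "default:other::---",
    f"default:group:{logs_writer}:rwx", f"default:group:{logs_reader}:r-x",
])
logs.set_access_control(acl=acl)
```

Adding or removing an individual user then becomes a group membership change in AAD, with no ACL updates on the storage side.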
What data format do I choose? #
Data may arrive in your data lake account in a variety of formats - human-readable formats such as JSON, CSV, or XML files, or compressed binary formats such as .tar.gz - and in a variety of sizes - huge files (a few TBs) such as an export of a SQL table from your on-premises systems, or a large number of tiny files (a few KBs) such as real-time events from your IoT solution. While ADLS Gen2 supports storing all kinds of data without imposing any restrictions, it is better to think about data formats to maximize the efficiency of your processing pipelines and optimize costs - you can achieve both by picking the right format and the right file sizes. Hadoop has a set of file formats it supports for optimized storage and processing of structured data. Let us look at some common file formats - Avro, Parquet, and ORC. All of these are machine-readable binary file formats, offer compression to manage the file size, and are self-describing in nature, with a schema embedded in the file. The difference between the formats is in how data is stored - Avro stores data in a row-based format, while the Parquet and ORC formats store data in a columnar format.
Key considerations #
The Avro file format is favored where the I/O patterns are more write-heavy or the query patterns favor retrieving multiple rows of records in their entirety. E.g. the Avro format is favored by message buses such as Event Hubs or Kafka that write multiple events/messages in succession. The Parquet and ORC file formats are favored when the I/O patterns are more read-heavy and/or the query patterns are focused on a subset of columns in the records, where the read transactions can be optimized to retrieve specific columns instead of reading the entire record.
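As a small illustration of the columnar advantage, the sketch below uses pyarrow (an assumption; any Parquet-capable library would do) to read just two columns from a Parquet file:

```python
# Minimal sketch: column pruning with Parquet. File and column names are hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

# A tiny stand-in for a curated sales data set.
pq.write_table(
    pa.table({"region": ["west", "east"], "product": ["a", "b"], "amount": [120, 80]}),
    "sales.parquet",
)

# Reading only two columns touches only those column chunks in storage,
# unlike a row-based or text format where every byte of every record is scanned.
table = pq.read_table("sales.parquet", columns=["region", "amount"])
print(table.num_rows, table.column_names)
```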
How do I manage my data lake cost? #
ADLS Gen2 offers a data lake store for your analytics scenarios with the goal of lowering your total cost of ownership. The pricing for ADLS Gen2 can be found here. As our enterprise customers serve the needs of multiple organizations, including analytics use cases, on a central data lake, their data and transactions tend to increase dramatically. With little or no centralized control, the associated costs will increase as well. This section provides key considerations that you can use to manage and optimize the cost of your data lake.
Key considerations #
ADLS Gen2 provides policy management that you can use to manage the lifecycle of data stored in your Gen2 account. You can read more about these policies here. E.g. if your organization has a retention policy requirement to keep the data for 5 years, you can set a policy to automatically delete the data if it has not been modified for 5 years. If your analytics scenarios primarily operate on data ingested in the past month, you can move the data older than a month to a lower tier (cool or archive), which has a lower cost for data at rest. Please note that the lower tiers have a lower price for data at rest but a higher price for transactions, so do not move data to lower tiers if you expect the data to be frequently transacted on.
Ensure that you are choosing the right replication option for your accounts; you can read the data redundancy article to learn more about your options. E.g. while GRS accounts ensure that your data is replicated across multiple regions, they also cost more than LRS accounts (where data is replicated within the same datacenter). When you have a production environment, replication options such as GRS are highly valuable to ensure business continuity with high availability and disaster recovery. However, an LRS account might suffice for your development environment.
As you can see from the pricing page of ADLS Gen2, your read and write transactions are billed in 4 MB increments. E.g. if you do 10,000 read operations and each file read is 16 MB in size, you will be charged for 40,000 transactions. When you have scenarios where you read a few KBs of data in a transaction, you will still be charged for a full 4 MB transaction. Optimizing for more data in a single transaction, i.e. optimizing for higher throughput in your transactions, does not just save cost - it also improves your performance.
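The tiering and retention examples above can be expressed as a lifecycle management policy. The sketch below applies one with the Python azure-mgmt-storage SDK; the rule contents, resource names, and subscription ID are illustrative placeholders, and the same policy can be configured through the portal, CLI, or an ARM template.

```python
# Minimal sketch: tier raw data to cool after 30 days, delete it after 5 years.
# Resource group, account name, and subscription ID are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(
    DefaultAzureCredential(), subscription_id="00000000-0000-0000-0000-000000000000"
)

policy = {
    "policy": {
        "rules": [
            {
                "enabled": True,
                "name": "retire-raw-data",
                "type": "Lifecycle",
                "definition": {
                    "filters": {"blob_types": ["blockBlob"], "prefix_match": ["raw/"]},
                    "actions": {
                        "base_blob": {
                            "tier_to_cool": {"days_after_modification_greater_than": 30},
                            "delete": {"days_after_modification_greater_than": 1825},
                        }
                    },
                },
            }
        ]
    }
}

# The management policy name is always "default".
client.management_policies.create_or_update(
    "contoso-rg", "contosodatalake", "default", policy
)
```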
How do I monitor my data lake? #
Understanding how your data lake is used and how it performs is a key component of operationalizing your service and ensuring it is available for use by any workloads which consume the data contained within it. This includes:
- Being able to audit your data lake in terms of frequent operations
- Having visibility into key performance indicators such as operations with high latency
- Understanding common errors, the operations that caused the error, and operations which cause service-side throttling
Key considerations #
All of the telemetry for your data lake is available through Azure Storage logs in Azure Monitor. Azure Storage logs in Azure Monitor is a new preview feature for Azure Storage which allows for a direct integration between your storage accounts and Log Analytics, Event Hubs, and archival of logs to another storage account utilizing standard diagnostic settings. A reference of the full list of metrics and resource logs and their associated schema can be found in the Azure Storage monitoring data reference.
Where you choose to store your logs becomes important when you consider how you will access them:
- If you want to access your logs in near real-time and be able to correlate events in logs with other metrics from Azure Monitor, you can store your logs in a Log Analytics workspace. This allows you to query your logs using KQL and author queries which enumerate the StorageBlobLogs table in your workspace.
- If you want to store your logs for both near real-time query and long-term retention, you can configure your diagnostic settings to send logs to both a Log Analytics workspace and a storage account.
- If you want to access your logs through another query engine such as Splunk, you can configure your diagnostic settings to send logs to an Event Hub and ingest logs from the Event Hub to your chosen destination.
Azure Storage logs in Azure Monitor can be enabled through the Azure Portal, PowerShell, the Azure CLI, and Azure Resource Manager templates. For at-scale deployments, Azure Policy can be used with full support for remediation tasks. For more details, see Azure/Community-Policy and ciphertxt/AzureStoragePolicy.
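As one example of enabling these logs programmatically, the sketch below uses the Python azure-mgmt-monitor SDK to send blob logs to a Log Analytics workspace; all resource IDs are placeholders, and the portal or CLI achieve the same result.

```python
# Minimal sketch: route StorageRead/Write/Delete logs from the blob service
# to a Log Analytics workspace. All IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(
    DefaultAzureCredential(), subscription_id="00000000-0000-0000-0000-000000000000"
)

blob_service_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg"
    "/providers/Microsoft.Storage/storageAccounts/contosodatalake/blobServices/default"
)
workspace_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso-rg"
    "/providers/Microsoft.OperationalInsights/workspaces/contoso-logs"
)

client.diagnostic_settings.create_or_update(
    resource_uri=blob_service_id,
    name="datalake-logs",
    parameters={
        "workspace_id": workspace_id,
        "logs": [
            {"category": "StorageRead", "enabled": True},
            {"category": "StorageWrite", "enabled": True},
            {"category": "StorageDelete", "enabled": True},
        ],
    },
)
```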
Common KQL queries for Azure Storage logs in Azure Monitor #
The following queries can be used to discover insights into the performance and health of your data lake:

Frequent operations

```kusto
StorageBlobLogs
| where TimeGenerated > ago(3d)
| summarize count() by OperationName
| sort by count_ desc
| render piechart
```

High latency operations

```kusto
StorageBlobLogs
| where TimeGenerated > ago(3d)
| top 10 by DurationMs desc
| project TimeGenerated, OperationName, DurationMs, ServerLatencyMs, ClientLatencyMs = DurationMs - ServerLatencyMs
```

Operations causing the most errors

```kusto
StorageBlobLogs
| where TimeGenerated > ago(3d) and StatusText !contains "Success"
| summarize count() by OperationName
| top 10 by count_ desc
```

A list of all of the built-in queries for Azure Storage logs in Azure Monitor is available in the Azure Monitor Community on GitHub in the Azure Services/Storage accounts/Queries folder.
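These queries can also be run programmatically. The sketch below uses the Python azure-monitor-query SDK to run the frequent-operations query against a Log Analytics workspace; the workspace ID is a placeholder.

```python
# Minimal sketch: run one of the KQL queries above from Python.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
StorageBlobLogs
| where TimeGenerated > ago(3d)
| summarize count() by OperationName
| sort by count_ desc
"""

response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",  # placeholder
    query=query,
    timespan=timedelta(days=3),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```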
Optimizing your data lake for better scale and performance # Under construction, looking for contributions
In this section, we will address how to optimize your data lake store for performance in your analytics pipeline, focusing on the basic principles that help you optimize storage transactions. To set the right context: there is no silver bullet or 12-step process to optimize your data lake, since a lot of considerations depend on the specific usage and the business problems you are trying to solve. However, when we talk about optimizing your data lake for performance, scalability, and even cost, it boils down to two key factors:
- Optimize for high throughput - target getting at least a few MBs (the higher the better) per transaction.
- Optimize data access patterns - reduce unnecessary scanning of files; read only the data you need to read.
As a prerequisite to these optimizations, it is important for you to understand more about the transaction profile and data organization. Given the varied nature of analytics scenarios, the optimizations depend on your analytics pipeline, your storage I/O patterns, and the data sets you operate on - specifically the following aspects of your data lake.
Please note that the scenarios we talk about here focus primarily on optimizing ADLS Gen2 performance. The overall performance of your analytics pipeline has considerations specific to the analytics engines in addition to the storage performance considerations; our partnerships with the analytics offerings on Azure, such as Azure Synapse Analytics, HDInsight, and Azure Databricks, ensure that we focus on making the overall experience better. In the meantime, while we call out specific engines as examples, please note that these samples talk primarily about storage performance.
File sizes and number of files # Analytics engines (your ingest or data processing pipelines) incur an overhead for every file they read (related to listing, checking access, and other metadata operations), and too many small files can negatively affect the performance of your overall job. Further, when you have files that are too small (in the KBs range), the amount of throughput you achieve per I/O operation is also low, requiring more I/Os to get the data you want. In general, it is a best practice to organize your data into larger files (target at least 100 MB or more) for better performance.
In a lot of cases, if your raw data (from various sources) itself is not large, you have the following options to ensure the data set your analytics engines operate on is still optimized with large file sizes.
- Add a data processing layer in your analytics pipeline to coalesce data from multiple small files into a large file (see the sketch after this list). You can also use this opportunity to store the data in a read-optimized format such as Parquet for downstream processing.
- In the case of processing real-time data, you can use a real-time streaming engine (such as Azure Stream Analytics or Spark Streaming) in conjunction with a message broker (such as Event Hubs or Apache Kafka) to store your data as larger files.
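A minimal PySpark sketch of that compaction step follows; the paths, input format, and target file count are illustrative assumptions.

```python
# Minimal sketch: coalesce many small raw JSON files into a few larger Parquet files.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

small_files = spark.read.json(
    "abfss://raw@contosodatalake.dfs.core.windows.net/sensordata/2022/11/09/"
)

# coalesce(8) caps the number of output files; tune it so each file lands
# near the 100 MB+ guidance above.
small_files.coalesce(8).write.mode("overwrite").parquet(
    "abfss://enriched@contosodatalake.dfs.core.windows.net/sensordata/2022/11/09/"
)
```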
File Formats #
As we have already discussed, optimizing your storage I/O patterns can greatly benefit the overall performance of your analytics pipeline. It is worth calling out that choosing the right file format can lower your data storage costs in addition to offering better performance. Parquet is one such prevalent data format that is worth exploring for your big data analytics pipeline.
Apache Parquet is an open source file format that is optimized for read-heavy analytics pipelines. The columnar storage structure of Parquet lets you skip over non-relevant data, making your queries much more efficient. This ability to skip also results in only the data you want being sent from storage to the analytics engine, resulting in lower cost along with better performance. In addition, since similar data types (for a column) are stored together, Parquet lends itself to efficient data compression and encoding schemes, lowering your data storage costs compared to storing the same data in a text file format.
Services such as Azure Synapse Analytics, Azure Databricks and Azure Data Factory have native functionality built in to take advantage of Parquet file formats as well.
Partitioning schemes # An effective partitioning scheme for your data can improve the performance of your analytics pipeline and also reduce the overall transaction costs incurred by your queries. In simplistic terms, partitioning is a way of organizing your data by grouping datasets with similar attributes together in a storage entity, such as a folder. When your data processing pipeline is querying for data with that similar attribute (e.g. all the data in the past 12 hours), the partitioning scheme (in this case, done by datetime) lets you skip over the irrelevant data and only seek the data that you want.
Let us take an example of an IoT scenario at Contoso where data is ingested in real time from various sensors into the data lake. Now, you have various options for storing the data, including (but not limited to) the ones listed below:
- Option 1: /<sensorid>/<datetime>/<temperature>, /<sensorid>/<datetime>/<pressure>, /<sensorid>/<datetime>/<humidity>
- Option 2: /<datetime>/<sensorid>/<temperature>, /<datetime>/<sensorid>/<pressure>, /<datetime>/<sensorid>/<humidity>
- Option 3: /<temperature>/<datetime>/<sensorid>, /<pressure>/<datetime>/<sensorid>, /<humidity>/<datetime>/<sensorid>
If a high-priority scenario is to understand the health of the sensors based on the values they send, to ensure the sensors are working fine, then you would have analytics pipelines running every hour or so to triangulate data from a specific sensor with data from other sensors. In this case, Option 2 would be the optimal way of organizing the data. If instead your high-priority scenario is to understand the weather patterns in the area based on the sensor data, to decide what remedial action you need to take, you would have analytics pipelines running periodically to assess the weather based on the sensor data from the area; in this case, you would want to optimize the organization by date and attribute over the sensor ID.
Open source computing frameworks such as Apache Spark provide native support for partitioning schemes that you can leverage in your big data application.
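For instance, the PySpark sketch below writes the sensor data with the datetime-first layout of Option 2; the paths and column names are illustrative assumptions.

```python
# Minimal sketch: Spark writes one folder level per partition column,
# e.g. .../date=2022-11-09/sensorid=42/part-....parquet
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sensor-partitioning").getOrCreate()

readings = spark.read.json(
    "abfss://raw@contosodatalake.dfs.core.windows.net/sensordata/"
)

(
    readings.write
    .partitionBy("date", "sensorid")
    .mode("append")
    .parquet("abfss://enriched@contosodatalake.dfs.core.windows.net/sensordata/")
)
```

A downstream query that filters on date then reads only the matching folders and skips the rest.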
Use Query Acceleration # Azure Data Lake Storage has a capability called Query Acceleration, available in preview, that is intended to optimize your performance while lowering cost. Query Acceleration lets you filter for the specific rows and columns of data that you want in your dataset by specifying one or more predicates (think of these as similar to the conditions you would provide in the WHERE clause of a SQL query) and column projections (think of these as the columns you would specify in the SELECT statement of your SQL query) on unstructured data.
In addition to improving performance by filtering to the specific data used by the query, Query Acceleration also lowers the overall cost of your analytics pipeline by reducing the amount of data transferred, and hence the overall storage transaction costs, and by saving you the cost of the compute resources you would otherwise have spun up to read the entire dataset and filter for the subset of data that you need.
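A minimal sketch of Query Acceleration from Python follows, using the query_blob method of the azure-storage-blob SDK on a CSV blob; the account, container, blob, and column names are illustrative, and the feature must be enabled on the account while it is in preview.

```python
# Minimal sketch: push row/column filtering down to the storage service.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient, DelimitedTextDialect

blob = BlobClient(
    account_url="https://contosodatalake.blob.core.windows.net",
    container_name="curated",
    blob_name="sales/2022/sales.csv",
    credential=DefaultAzureCredential(),
)

# Describe the CSV so the service can evaluate the query against it.
input_format = DelimitedTextDialect(delimiter=",", quotechar='"', has_header=True)

# Only the projected columns of the matching rows leave the service.
reader = blob.query_blob(
    "SELECT Region, Amount FROM BlobStorage WHERE Amount > 100",
    blob_format=input_format,
)
print(reader.readall().decode("utf-8"))
```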
Recommended reading # Azure Databricks Best Practices
Use Azure Data Factory to migrate data from an on-premises Hadoop cluster to ADLS Gen2 (Azure Storage)
Use Azure Data Factory to migrate data from AWS S3 to ADLS Gen2 (Azure Storage)
Securing access to ADLS Gen2 from Azure Databricks
Understanding access control and data lake configurations in ADLS Gen2
Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.
WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up an Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. The following will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details, please refer to Create an Azure storage account
A Windows SSH client such as PuTTY, Git for Windows, Cygwin, or MobaXterm
Azure Linux Virtual Machine Creation # Go to the Azure Portal home page.
Click on + Create a resource
Search for Ubuntu Server. Select Ubuntu Server 16.04 LTS.
Click on Create
In the Basics tab, under Project details, make sure the correct subscription is selected, and then choose an existing resource group or create a new one.
Under Instance details, type any name for the Virtual machine name, choose East US for the Region, and choose Ubuntu 18.04 LTS for the Image. Leave the other defaults.
Under Administrator account, select SSH public key or Password and fill in the details as required. Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select SSH (22) and HTTP (80) from the drop-down.
Leave the defaults under Disks, Networking, and Management. In the Advanced tab, under the Cloud init text field, paste the cloud-init content.
Leave the remaining defaults and then select the Review + create button at the bottom of the page.
On the Create a virtual machine page, you can see the details about the VM you are about to create. When you are ready, select Create.
Virtual Machine Connection set up # Create an SSH connection with the VM # Select the Connect button on the overview page for your VM.
Go to Networking under Settings.
Click on Add inbound port rule.
Select Source as IP addresses from the drop-down. Provide your source IP address in Source IP addresses/CIDR ranges. Provide the list of port ranges 22,8081,8083,8084 in the Destination port ranges field. Choose TCP under Protocol. Give it any Name.
Click on Add button.
Connect to VM # To connect to the VM created above, you need a secure shell protocol (SSH) client such as PuTTY, Git for Windows, Cygwin, or MobaXterm.
Start the VM if it isn't already running. Under Overview, set the DNS name to dynamic and configure a DNS name.
This DNS name can be used to log in from the SSH client.
Open the SSH client (PuTTY, Git, Cygwin, MobaXterm).
:bulb: Note: Here we will be using MobaXterm.
Go to Session, then click on SSH.
Provide the DNS name in the Remote host field, along with the username defined for the SSH client while creating the VM.
Click OK
Provide the password for SSH client.
WANdisco Fusion Set up # Clone the Fusion docker repository using the below command in the SSH client:
```
git clone https://github.com/WANdisco/fusion-docker-compose.git
```
Change to the repository directory:
```
cd fusion-docker-compose
```
Run the setup script:
```
./setup-env.sh
```
Enter option 4 for Custom deployment.
Enter the first zone type as adls1
Set the first zone name as [adls1]. Hit the enter key at the prompt.
Enter the second zone type as adls2
Set the second zone name as [adls2]. Hit the enter key at the prompt.
Enter your license file path. Hit the enter key at the prompt.
Enter the docker hostname. Hit the enter key at the prompt to use the default name.
Enter the HDI version for adls1 as 3.6.
Enter the HDI version for adls2 as 4.0. Hit the enter key for the remaining prompts.
The docker set up is complete.
To start Fusion, run the below command:
```
docker-compose up -d
```
ADLS Gen1 and Gen2 Configuration # ADLS Gen1 storage Configuration # Log in to Fusion via a web browser.
Enter the address in the form of: http://{dnsname}:8081
Note: Get the DNS name from the portal: go to Virtual machine > Overview > DNS name.
Create an account and log in to Fusion.
Click on the settings icon for the adls1 storage. Select the ADLS Gen1 storage type.
Enter the details for the ADLS Gen1 storage account
ADLS Gen1 storage account details
Hostname / Endpoint (Example: .azuredatalakestore.net)
Home Mount Point / Directory (Example: / or /path/to/mountpoint)
Note: Fusion will be able to read and write to everything contained within the Mount Point.
Client ID / Application ID (Example: a73t6742-2e93-45ty-bd6d-4a8art6578ip)
Refresh URL (Example: https://login.microsoftonline.com/<tenant-id>/oauth2/token)
Handshake User / Service principal name (Example: fusion-app)
ADL credential / Application secret (Example: 8A767YUIa900IuaDEF786DTY67t-u=:])
Click on APPLY CONFIGURATION
ADLS Gen2 storage Configuration # Click on the settings icon for the adls2 storage. Select the ADLS Gen2 storage type.
Enter the details for the ADLS Gen2 storage account
ADLS Gen2 storage account details
Account name (Example: adlsg2storage)
Container name (Example: fusionreplication)
Access key (Example: eTFdESnXOuG2qoUrqlDyCL+e6456789opasweghtfFMKAHjJg5JkCG8t1h2U1BzXvBwtYfoj5nZaDF87UK09po==)
Click on APPLY CONFIGURATION
References # WANdisco Fusion Installation and set up guide, How to use SSH key with Windows on Azure
What's New # ADLS Billing FAQ (09/23/2021), AzBulkSetBlobTier (08/24/2021), Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021), How do you deploy Object Replication with ARM? (03/26/2021), What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021), ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021), NFS 3.0 support for Azure Blob Storage (02/03/2021), Optimize your costs with Azure Blob Storage (02/01/2021), Azure Blob Storage data protection features (01/28/2021), Managing concurrent uploads with versioning (01/25/2021), Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021), Estimating Pricing for Azure Block Blob Deployments (01/01/2021), Data management and retention (12/16/2020), Hitchhiker's Guide to the Datalake (10/27/2020), AzReplicate (08/30/2020), AzDataMaker (08/26/2020)

@ -12,12 +12,12 @@
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/application-update/</guid>
<description>Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
<description>Application and Workload Update # Overview # The purpose of this document is to provide steps and ways to migrate the workloads and applications from Gen1 to Gen2 after data migration is completed.
This can be applicable for below migration patterns:
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount.</description>
Incremental Copy pattern
Lift and Shift copy pattern
Dual Pipeline pattern
As part of this, we will configure services in workloads used and update the applications to point to Gen2 mount.</description>
</item>
<item>
@ -26,7 +26,7 @@ This can be applicable for below migration patterns:
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-object-replication-arm/</guid>
<description>Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
<description>Azure Blob Storage - Setup Object Replication with ARM Templates # Object replication asynchronously copies block blobs between a source storage account and a destination account.
You can find a good overview of the service here, and instructions on how to deploy it via the portal here.
Here we are going to focus on deploying Object Replication with ARM. You will see we are doing this in 3 steps with three templates orchestrated with some CLI code.</description>
</item>
@ -37,7 +37,7 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/data-retention/</guid>
<description>Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.</description>
<description>Azure blob storage data management and retention # When you store your data in blob storage, there are a number of policies which govern how your data is managed and retained in the event of deletion. Data management is strictly governed and Microsoft® is committed to ensuring that your data remains your data, without exception. When you delete your data - either through an API or due to a subscription being removed - there are varying policies which dictate the length of time for which your data may be retained in the event you would need to recover it.</description>
</item>
<item>
@ -46,8 +46,8 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-data-protection-features/</guid>
<description>Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp;amp; Security Azure Defender for Storage Immutable Blob storage </description>
<description> Azure Blob Storage data protection features # Enterprises, partners, and IT professionals store business-critical data in Azure Blob Storage. We are committed to providing the best-in-class data protection and recovery capabilities to keep your applications running. In this video, learn more about the Azure Blob Storage data protection features.
Learn more about Data Protection &amp;amp; Security Azure Defender for Storage Immutable Blob storage </description>
</item>
<item>
@ -56,7 +56,7 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/</guid>
<description>Azure Blob Storage Upload API&amp;rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&amp;rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&amp;rsquo;s. However, a good understanding of the API&amp;rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&amp;rsquo;s to help you compare the differences between them.</description>
<description>Azure Blob Storage Upload API&amp;rsquo;s # Customers typically use existing applications such as AzCopy, Azure Storage Explorer, etc. or the Azure Storage SDK&amp;rsquo;s (.NET, Java, Node.js, Python, Go, PHP, Ruby) when building custom apps to access the Azure Storage API&amp;rsquo;s. However, a good understanding of the API&amp;rsquo;s is critical when tuning your uploads for high performance. This document provides an overview of the different upload API&amp;rsquo;s to help you compare the differences between them.</description>
</item>
<item>
@ -65,8 +65,8 @@ Here we are going to focus on deploying Object Replication with ARM. You will se
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/azure-storage-data-lake-gen2-billing-faq/</guid>
<description>Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
<description>Azure Data Lake Storage Gen2 Billing FAQs # The pricing page for ADLS Gen2 can be found here. This resource provides more detailed answers to frequently asked questions from ADLS Gen2 users.
Terminology # Here are some terms that are key to understanding ADLS Gen2 billing concepts.
Flat namespace (FNS): A mode of organization in a storage account on Azure where objects are organized using a flat structure - aka a flat list of objects.</description>
</item>
@ -76,7 +76,7 @@ Flat namespace (FNS): A mode of organization in a storage account on Azure where
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/commvault/</guid>
<description>Microsoft Partner Documentation for Commvault for Azure # https://documentation.commvault.com/commvault/v11/article?p=31252.htm</description>
<description>Microsoft Partner Documentation for Commvault for Azure # https://documentation.commvault.com/commvault/v11/article?p=31252.htm</description>
</item>
<item>
@ -85,8 +85,8 @@ Flat namespace (FNS): A mode of organization in a storage account on Azure where
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/veritas/</guid>
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
Storage Cool
Tier Archive
Tier WORM
@ -96,7 +96,8 @@ on-
premises Backup
Azure VM&amp;rsquo;s Backup
Azure Files Backup
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.</description>
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.
Link 1 Link 2 Links to relevant documentation # Information related to the partner docs goes here.</description>
</item>
<item>
@ -105,8 +106,8 @@ Azure Blob X X X X X X X X Links to Marketplace Offerings # Information
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/rubrik/</guid>
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
<description>Microsoft Partner Documentation for Partner X # This article describes the storage options for partners.
Support Matrix # GPv2
Storage Cool
Tier Archive
Tier WORM
@ -116,7 +117,8 @@ on-
premises Backup
Azure VM&amp;rsquo;s Backup
Azure Files Backup
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.</description>
Azure Blob X X X X X X X X Links to Marketplace Offerings # Information related to the partner marketplace links goes here.
Link 1 Link 2 Links to relevant documentation # Information related to the partner docs goes here.</description>
</item>
<item>
@ -125,7 +127,7 @@ Azure Blob X X X X X X X X Links to Marketplace Offerings # Information
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/backup-and-archive/veeam/</guid>
<description>Links to relevant documentation # https://www.veeam.com/documentation-guides-datasheets.html </description>
<description> Links to relevant documentation # https://www.veeam.com/documentation-guides-datasheets.html </description>
</item>
<item>
@ -134,7 +136,7 @@ Azure Blob X X X X X X X X Links to Marketplace Offerings # Information
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/supported-character-scrubber/</guid>
<description>Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
<description>Azure Storage Supported Character Scrubber # Azure Storage supports a wide variety of Unicode characters across containers, blobs, metadata, and snapshots. When you are migrating from another storage system to Azure, you may find that some characters supported in your source system (e.g., AWS S3) are not supported by Azure and will require an object to be renamed.
The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turnkey solution to discovering unsupported characters in your file names with a simple CSV input.</description>
</item>
@ -144,7 +146,7 @@ The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turn
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/dual-pipeline/</guid>
<description>Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint.</description>
<description>Dual Pipeline Pattern Guide: A quick start template # Overview # The purpose of this document is to provide a manual for the use of Dual pipeline pattern for migration of data from Gen1 to Gen2. This provides the directions, references and approach how to set up the Dual pipeline, do migration of existing data from Gen1 to Gen2 and set up the workloads to run at Gen2 endpoint.</description>
</item>
<item>
@ -153,7 +155,7 @@ The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turn
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/</guid>
<description>Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.</description>
<description>Estimating Pricing for Azure Block Blob Deployments # We have several tools to help you price Azure Block Blob Storage, however figuring out what questions you need to answer to produce an estimate can sometimes be overwhelming. To that end we have put together this simple template. You can use the template as-is or modify it to fit your workload. Once you have the template populated you will have some estimates you can input into the Azure Pricing Calculator to get a cost estimate.</description>
</item>
<item>
@ -162,7 +164,7 @@ The PowerShell script AzureStorageSupportedCharacterScrubber.ps1 provides a turn
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/adls-gen1-and-gen2-acl-behavior/</guid>
<description>Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&amp;rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
<description>Gen1 and Gen2 ACL Behavior Analysis # Overview # Azure Data Lake Storage is Microsoft&amp;rsquo;s optimized storage solution for big data analytics workloads. ADLS Gen2 is the combination of the current ADLS Gen1 and Blob storage.
Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 combines features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and low cost scalability, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage.</description>
</item>
@ -172,7 +174,7 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/hpc-iot-and-ai/</guid>
<description>HPC IoT and AI # Coming Soon. . .</description>
<description>HPC IoT and AI # Coming Soon. . .</description>
</item>
<item>
@ -181,9 +183,9 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-storage-classic-logs-to-data-explorer/</guid>
<description>Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and dont read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account.</description>
<description>Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer # Azure Storage is moving to use Azure Monitor for logging. This is great because querying logs with Kusto is super easy. More info
If you can use Azure Monitor, use it, and dont read the rest of this article.
However, some customers might need to use the Classic Storage logging, but our classic logging goes to text files stored in the $logs container in your storage account.</description>
</item>
<item>
@ -192,8 +194,8 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/code-samples/concurrent-uploads-with-versioning/</guid>
<description>Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data.</description>
<description>Managing concurrent uploads in Azure blob storage with blob versioning # When you are building applications that need to have multiple clients uploading to the same object in Azure blob storage, there are several options to help you manage concurrency depending on your strategy. Concurrency strategies include:
Optimistic concurrency: An application performing an update will, as part of its update, determine whether the data has changed since the application last read that data.</description>
</item>
<item>
@ -202,8 +204,8 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/nfs-3-support-for-azure-blob-storage/</guid>
<description>NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com </description>
<description> NFS 3.0 support for Azure Blob Storage # In this video, we introduce Azure Blob NFS 3.0 support, the only public cloud object storage offering native file system compatibility. Learn about NFS support and how to accelerate your workload migration from on premise datacenters to Azure.
Learn more Step by step guide NFSv3 performance considerations Contact us: BlobNFSFeedback@microsoft.com </description>
</item>
<item>
@ -212,8 +214,8 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/application-and-user-data/basics/optimize-your-costs-with-azure-blob-storage/</guid>
<description>Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference </description>
<description> Optimize your costs with Azure Blob Storage # In this video, learn about the Azure Blob Storage features that help you save cost and keep your Total Cost of Ownership (TCO) low.
Learn more about Azure Storage redundancy Tiers and lifecycle Reservations Network routing preference </description>
</item>
<item>
@ -222,7 +224,7 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/storage-partners/</guid>
<description>Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp;amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data </description>
<description> Storage Partners # Archive # Acronis Archive360 Commvault HubStor Igneous Veeam Veritas Backup # Acronis Actifio Carbonite Cloudberry Cohesity Commvault Igneous Rubrik Veeam Veritas Disaster Recovery # Portworx StorageOS Zerto MultiProtocol # Caringo Cloudian Minio Scality MultiSite collaboration # Nasuni Panzura Talon Tiering # Komprise Moonwalk Peer Software Pure Storage Quantum Tools # Cloudberry Komprise Verticals # Automotive # Cognata Elektrobit Linker Networks Financial Services # Archive360 Data Parser HubStor XenData Healthcare # DNA Nexus Nucleus Health Oil &amp;amp; Gas # Cegal Interica PixStor Tiger Tech Xen Data </description>
</item>
<item>
@ -231,9 +233,9 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/hitchhikers-guide-to-the-datalake/</guid>
<description>The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?</description>
<description>The Hitchhiker&#39;s Guide to the Data Lake # A comprehensive guide on key considerations involved in building your enterprise data lake
Share this page using https://aka.ms/adls/hitchhikersguide
The Hitchhiker&#39;s Guide to the Data Lake When is ADLS Gen2 the right choice for your data lake? Key considerations in designing your data lake Terminology Organizing and managing data in your data lake Do I want a centralized or a federated data lake implementation?</description>
</item>
<item>
@ -242,7 +244,7 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/tools-and-utilities/</guid>
<description>Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.</description>
<description>Tools and Utilities # AzCopy is a command-line utility that you can use to copy blobs or files to or from a storage account. AzReplicate is a sample application designed to help Azure Storage customers perform very large, multi-petabyte data migrations to Azure Blob Storage. AzDataMaker is a sample .NET Core app that runs in a Linux Azure Container Instance that generates files and uploads them to Azure Blob Storage.</description>
</item>
<item>
@ -251,11 +253,12 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/analytics/adls-gen1-to-gen2-migration/bi-directional/wandisco-set-up-and-installation/</guid>
<description>WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1</description>
<description>WANdisco Fusion Set up and Installation Guide # Overview # This quickstart will help in setting up the Azure Linux Virtual Machine (VM) suitable for the WANdisco Fusion installation. Below will be covered:
Azure Linux Virtual Machine (VM) creation using Azure Portal
Configuration set up and Installation guide for WANdisco Fusion
Prerequisites # Active Azure Subscription
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2. For more details please refer to create azure storage account</description>
</item>
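As a rough illustration of the first step in the WANdisco Fusion quickstart described above (creating an Azure Linux VM), an equivalent Azure CLI command is sketched here; the resource group, VM name, and image are assumed placeholders, and the guide itself walks through the Azure Portal instead:

az vm create --resource-group <resource-group> --name fusion-vm --image UbuntuLTS --admin-username azureuser --generate-ssh-keys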
<item>
@@ -264,7 +267,7 @@ Azure Data Lake Storage Gen2 is built on Azure Blob storage and provides a set o
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://azure.github.io/Storage/docs/whats-new/</guid>
<description>What&amp;rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&amp;rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) </description>
<description> What&amp;rsquo;s New # ADLS Billing FAQ (09/23/2021) AzBulkSetBlobTier (08/24/2021) Load, Parse and Summarize Classic Azure Storage Logs in Azure Data Explorer (03/30/2021) How do you deploy Object Replication with ARM? (03/26/2021) What are the differences between the Azure Blob Storage Upload APIs and when should I use each? (02/09/2021) ADLS Gen 1 to Gen 2 Migration Guide (02/08/2021) NFS 3.0 support for Azure Blob Storage (02/03/2021) Optimize your costs with Azure Blob Storage (02/01/2021) Azure Blob Storage data protection features (01/28/2021) Managing concurrent uploads with versioning (01/25/2021) Azure Storage Supported Character Scrubber PowerShell Script (01/05/2021) Estimating Pricing for Azure Block Blob Deployments (01/01/2021) Data management and retention (12/16/2020) Hitchhiker&amp;rsquo;s Guide to the Datalake (10/27/2020) AzReplicate (08/30/2020) AzDataMaker (08/26/2020) </description>
</item>
</channel>

View file

@@ -1,14 +1,14 @@
{
"name": "Azure Storage",
"short_name": "Azure Storage",
"start_url": "/Storage/",
"scope": "/Storage/",
"start_url": "/",
"scope": "/",
"display": "standalone",
"background_color": "#000000",
"theme_color": "#000000",
"icons": [
{
"src": "/Storage/favicon.svg",
"src": "/favicon.svg",
"sizes": "512x512"
}
]

View file

@@ -2,7 +2,7 @@
<html lang="en" dir=>
<head>
<meta name="generator" content="Hugo 0.88.1" />
<meta name="generator" content="Hugo 0.105.0">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
@@ -15,7 +15,7 @@
<link rel="manifest" href="/Storage/manifest.json">
<link rel="icon" href="/Storage/favicon.png" type="image/x-icon">
<link rel="stylesheet" href="/Storage/book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css" integrity="sha256-6TXiC9DUaTeMtILwlY7fJYxzGk&#43;JXczVV5nG&#43;8gEPyM=">
<script defer src="/Storage/en.search.min.f9dc316b682362e907b9d54060ecda5e6ae9c979e4306a6c9887393766a69511.js" integrity="sha256-&#43;dwxa2gjYukHudVAYOzaXmrpyXnkMGpsmIc5N2amlRE="></script>
<script defer src="/Storage/en.search.min.63fdb55cd2e04f8a9f17757914d9129a2b2aaff34673d2d1e6755837978a1e31.js" integrity="sha256-Y/21XNLgT4qfF3V5FNkSmisqr/NGc9LR5nVYN5eKHjE="></script>
<link rel="alternate" type="application/rss+xml" href="https://azure.github.io/Storage/tags/index.xml" title="Azure Storage" />
<!--
Made with Book Theme
@@ -34,7 +34,7 @@ https://github.com/alex-shpak/hugo-book
<nav>
<h2 class="book-brand">
<a href="/Storage"><img src="/Storage/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
<a href="/Storage"><img src="/images/azure-icon.png" alt="Logo" /><span>Azure Storage</span>
</a>
</h2>
@@ -177,7 +177,7 @@ https://github.com/alex-shpak/hugo-book
<script>(function(){var a=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(b){localStorage.setItem("menu.scrollTop",a.scrollTop)}),a.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
<script>(function(){var e=document.querySelector("aside.book-menu nav");addEventListener("beforeunload",function(){localStorage.setItem("menu.scrollTop",e.scrollTop)}),e.scrollTop=localStorage.getItem("menu.scrollTop")})()</script>
@@ -255,10 +255,10 @@ https://github.com/alex-shpak/hugo-book
<hr />
Azure Storage &copy;2021 <br />
Azure Storage &copy;2022 <br />
Visit the <a href="https://azure.microsoft.com/services/storage/">Azure Storage homepage</a> or read our <a href="https://docs.microsoft.com/azure/storage/">getting started guide</a> or the <a href="https://azure.microsoft.com/blog/topics/storage-backup-and-recovery/">Azure Storage Blog</a>. <br />
Contact us: <a href="mailto:azurestoragefeedback@microsoft.com?subject=AzureStorage.com%20Feedback">azurestoragefeedback@microsoft.com</a>.<br />
Generated on Fri, Sep 24 2021 17:07:45 UTC
Generated on Wed, Nov 09 2022 01:23:51 UTC
</footer>

View file

@@ -1 +1,10 @@
<!DOCTYPE html><html><head><title>https://azure.github.io/Storage/tags/</title><link rel="canonical" href="https://azure.github.io/Storage/tags/"/><meta name="robots" content="noindex"><meta charset="utf-8" /><meta http-equiv="refresh" content="0; url=https://azure.github.io/Storage/tags/" /></head></html>
<!DOCTYPE html>
<html lang="en-us">
<head>
<title>https://azure.github.io/Storage/tags/</title>
<link rel="canonical" href="https://azure.github.io/Storage/tags/">
<meta name="robots" content="noindex">
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url=https://azure.github.io/Storage/tags/">
</head>
</html>

src/.hugo_build.lock Normal file
View file

Binary file not shown.

File diff suppressed because one or more lines are too long

View file

@@ -1 +1 @@
{"Target":"book.min.6c7c6446dfdee7c8c933e9bbc6e80ee3ed6c913b2a59519f2092c3c6a9d63e55.css","MediaType":"text/css","Data":{"Integrity":"sha256-bHxkRt/e58jJM+m7xugO4+1skTsqWVGfIJLDxqnWPlU="}}
{"Target":"book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css","MediaType":"text/css","Data":{"Integrity":"sha256-6TXiC9DUaTeMtILwlY7fJYxzGk+JXczVV5nG+8gEPyM="}}

File diff suppressed because one or more lines are too long

View file

@@ -1 +1 @@
{"Target":"book.min.e935e20bd0d469378cb482f0958edf258c731a4f895dccd55799c6fbc8043f23.css","MediaType":"text/css","Data":{"Integrity":"sha256-6TXiC9DUaTeMtILwlY7fJYxzGk+JXczVV5nG+8gEPyM="}}
{"Target":"book.min.6c7c6446dfdee7c8c933e9bbc6e80ee3ed6c913b2a59519f2092c3c6a9d63e55.css","MediaType":"text/css","Data":{"Integrity":"sha256-bHxkRt/e58jJM+m7xugO4+1skTsqWVGfIJLDxqnWPlU="}}