Merge pull request #232 from microsoft/annaworking
Anna pull in to ASWS

@@ -55,7 +55,7 @@ Your workshop invitation may have instructed you that they will provide a Micros

To complete this workshop, you need to install the following software:

1. Create a resource group for the workshop, naming it **azuresqlworkshopID** where **ID** is some 4-6 digit identifier that you can easily remember (e.g. 0406 is my birthday so I might pick "azuresqlworkshop0406"). Use this same **ID** every time you are told to name something ending in **ID**. Select a region that is close to where you are, and use this region for all future resources.

1. Deploy an [Azure virtual machine](https://ms.portal.azure.com/#create/Microsoft.VirtualMachine-ARM) (link goes to service in Azure portal). The recommended minimum size is a **D2s_v3**, and you should use a **Windows 10** image. Name the virtual machine **win-vmID** (e.g., "win-vm0406"). Accept other defaults, and refer to more information on deploying Azure virtual machines [here](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal#create-virtual-machine).

1. Connect to the virtual machine, and perform the remaining steps in the virtual machine.

@@ -63,7 +63,7 @@ This isn't the focus of this workshop, but if you're considering Azure SQL VMs,

## IaaS vs PaaS

Azure SQL virtual machines are considered IaaS. The other deployment options in the Azure SQL umbrella (Azure SQL managed instance and Azure SQL Database) are [Platform as a Service (PaaS)](https://azure.microsoft.com/en-us/overview/what-is-paas/) deployments. These PaaS Azure SQL deployment options use a fully managed Database Engine that automates most database management functions, such as upgrading, patching, backups, and monitoring. Throughout this course, you'll learn much more about the benefits and capabilities that the PaaS deployment options enable and how to optimally configure, manage, and troubleshoot them, but some highlights are listed below:

* [Business continuity](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-business-continuity) enables your business to continue operating in the face of disruption, particularly to its computing infrastructure.
* [High availability](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-high-availability) of Azure SQL Database guarantees your databases are up and running 99.99% of the time, so you don't need to worry about maintenance or downtime.

@@ -83,7 +83,7 @@ Azure SQL virtual machines are considered IaaS. The other deployment options in

![](../graphics/sqlmi.png)

[Azure SQL managed instance](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance) is a PaaS deployment option of Azure SQL that basically gives you an evergreen instance of SQL Server. Most of the features available in the SQL Server box products are available in Azure SQL managed instance (Azure SQL MI). This option is ideal for customers who want to leverage instance-scoped features (features that are tied to an instance of SQL Server as opposed to features that are tied to a database in an instance of SQL Server) like SQL Server Agent, Service Broker, Common Language Runtime (CLR), etc. and want to move to Azure without rearchitecting their applications. While Azure SQL MI allows customers to access the instance-scoped features, customers do not have to worry about (nor do they have access to) the OS or the infrastructure underneath.

A good customer example comes from [Komatsu](https://customers.microsoft.com/en-us/story/komatsu-australia-manufacturing-azure). Komatsu is a manufacturing company that produces and sells heavy equipment for construction. They had multiple mainframe applications for different types of data, which they wanted to consolidate to get a holistic view. Additionally, they wanted a way to reduce overhead. Because Komatsu uses a large surface area of SQL Server features, they chose to move to **Azure SQL Managed Instance**. They were able to move about 1.5 terabytes of data smoothly, and [start enjoying benefits like automatic patching and version updates, automated backups, high availability, and reduced management overhead](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-technical-overview). After migrating, they reported ~49% cost reduction and ~25-30% performance gains.

@@ -158,11 +158,11 @@ The vCore model lets you choose the generation of hardware:

* **Gen4**: Up to 24 logical CPUs based on Intel E5-2673 v3 (Haswell) 2.4-GHz processors, vCore = 1 physical core, 7 GB per core, attached SSD
* **Gen5**: Up to 80 logical CPUs based on Intel E5-2673 v4 (Broadwell) 2.3-GHz processors, vCore = 1 hyper-thread, 5.1 GB per core, fast NVMe SSD

Basically, Gen4 hardware offers substantially more memory per vCore. However, Gen5 hardware allows you to scale up compute resources much higher. [New Gen4 databases are no longer supported in certain regions](https://azure.microsoft.com/en-us/updates/gen-4-hardware-on-azure-sql-database-approaching-end-of-life-in-2020/), whereas Gen5 is available in most regions worldwide. As technology advances, you can expect that the hardware will change as well. For example, Fsv2-series (compute optimized) and M-series (memory optimized) hardware options recently became available in public preview for Azure SQL DB. You can review the latest hardware generations and availability [here](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-vcore#hardware-generations).

> Note: If you choose General Purpose within Azure SQL DB and want to use the serverless compute tier, Gen5 hardware is the only option and it currently can scale up to 16 vCores.

In this module, you learned about Azure SQL, including the deployment options, purchasing models, service tiers, and hardware choices. Hopefully, you also have a better understanding of what to choose when. In the next module, you'll learn more about deploying and configuring Azure SQL.

<p style="border-bottom: 1px solid lightgrey;"></p>

@@ -21,16 +21,22 @@ In this module, you'll cover these topics:

[Activity 2](#2): Initial connect and comparison

[Activity 3](#3): Verify deployment queries

[2.3](#2.3): Configure

(Bonus) [Activity 4](#4): Configure with Azure CLI

[2.4](#2.4): Load data

(Bonus) [Activity 5](#5): Load data

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="2.1">2.1 Pre-deployment planning</a></h2>

Before you start deploying things in Azure, it's important to understand what your requirements are and how they match to offerings in Azure SQL. Using what you learned in Module 1, it's time to make a plan. You need to determine the following:

* Deployment method: GUI or unattended?
* Deployment option: VM, DB, Elastic Pool, MI, or Instance Pool?
* Purchasing model: DTU or vCore?
* Service tier (SLO): General Purpose, Business Critical, or Hyperscale?
* Hardware: Gen4, Gen5, or something new?
* Sizing: number of vCores and data max size?

> The Data Migration Assistant tool (DMA) has a [SKU Recommender](https://docs.microsoft.com/en-us/sql/dma/dma-sku-recommend-sql-db?view=sql-server-ver15) that can help you determine the number of vCores and size if you are migrating.

<br>

@@ -38,7 +44,7 @@ TODO: Topic Description

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="2.2">2.2 Deploy and Verify</a></h2>

Once you've completed your pre-deployment planning, it's time to deploy and verify that deployment. In this stage, you'll deploy Azure SQL (using the Azure portal or command-line), determine network configuration and how to connect, and run some queries that verify your deployment configuration.

<br>

@@ -73,7 +79,7 @@ When you create an Azure SQL MI, supplying the server name is the same as in SQL

Select **Create new** next to "Server" and provide the following information:
* *Server name*: **aw-serverID** where ID is the same identifier you used for the database and resource group.
* *Server admin login*: **cloudadmin**. This is the equivalent of the system admin in SQL Server. This account connects using SQL authentication (username and password), and only one of these accounts can exist.
* *Password*: A complex password that meets the requirements.
* *Location*: Use the same location as your resource group.
@@ -96,12 +102,12 @@ For the purposes of this workshop, we'll focus on the vCore purchasing model (re

**Step 6 - Service tier**

>For more details on service tiers and comparisons, refer to [Module 1](../azuresqlworkshop/01-IntroToAzureSQL.md).

The next decision is choosing the service tier for performance and availability. We recommend you start with General Purpose and adjust as needed.

**Step 7 - Hardware**

>For more details on available hardware and comparisons, refer to [Module 1](../azuresqlworkshop/01-IntroToAzureSQL.md).

For the workshop, you can leave the default hardware selection of **Gen5**, but you can select **Change configuration** to view the other options available (may vary by region).

**Step 8 - Sizing**

@@ -109,7 +115,9 @@ One of the final steps is to determine how many vCores and the Data max size. Fo

Generally, if you're migrating, you should use a size similar to what you use on-premises. You can also leverage tools like the [Data Migration Assistant SKU Recommender](https://docs.microsoft.com/en-us/sql/dma/dma-sku-recommend-sql-db?view=sql-server-ver15) to estimate the vCore and Data max size based on your current workload.

The Data max size is not necessarily the database size of your data today. It is the maximum amount of data space that can be allocated for your database. For more information about the difference between data space used, data space allocated, and data max size, refer to this [explanation in the documentation](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-file-space-management#understanding-types-of-storage-space-for-a-database). This will also help you understand the log space allocated, which scales with your data max size.
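Once your database is deployed, you can inspect these measures yourself with a short T-SQL query (a sketch; run it in the context of your database):

```sql
-- Sketch: data space used vs. allocated for the current database
-- (sizes are reported in 8 KB pages, so multiply by 8 and divide by 1024 for MB)
SELECT name,
       type_desc,
       size * 8 / 1024.0                            AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024.0 AS used_mb
FROM sys.database_files;
```

Comparing `used_mb` against `allocated_mb` for the data file mirrors the "data space used" versus "data space allocated" distinction described above.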
Before you select **Apply**, confirm your selections look similar to those below:

@@ -132,7 +140,7 @@ You can then choose to select Public endpoint or Private endpoint (preview). In

With Azure SQL MI, you deploy it inside an Azure virtual network and a subnet that is dedicated to managed instances. This enables you to have a completely secure, private IP address. Azure SQL MI provides the ability to connect an on-prem network to a managed instance, connect a managed instance to a linked server or other on-prem data store, and connect a managed instance to other resources. You can additionally enable a public endpoint so you can connect to the managed instance from the Internet without a VPN. This access is disabled by default.

The principle of private endpoints through virtual network isolation is making its way to Azure SQL DB in something called 'private link' (currently in public preview), and you can learn more [here](https://docs.microsoft.com/en-us/azure/private-link/private-link-overview).

More information on connectivity for Azure SQL DB can be found [here](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-connectivity-architecture) and for Azure SQL MI [here](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-connectivity-architecture). There will also be more on this topic in upcoming sections/modules.

@@ -148,19 +156,19 @@ For the workshop, select **Sample**.

Since we're using the AdventureWorksLT sample, the **database collation is already set**. For a review of collations and how they apply in Azure SQL, continue reading; otherwise, **you can skip to Step 12**.

Collations in SQL Server and Azure SQL tell the Database Engine how to treat certain characters and languages. A collation provides the sorting rules, case, and accent sensitivity properties for your data. When you're creating a new Azure SQL DB or MI, it's important to first take into account the locale requirements of the data you're working with, because the collation set will affect the characteristics of many operations in the database. In the SQL Server box product, the default collation is typically determined by the OS locale. In Azure SQL MI, you can set the server collation upon creation of the instance, and it cannot be changed later. The server collation sets the default for all of the databases in that instance of Azure SQL MI, but you can modify the collations on a database and column level. In Azure SQL DB, you cannot set the server collation; it is set to the default (and most common) collation of `SQL_Latin1_General_CP1_CI_AS`, but you can set the database collation. If we break that into chunks:

* `SQL` means it is a SQL Server collation (as opposed to a Windows or Binary collation)
* `Latin1_General` specifies the alphabet/language to use when sorting
* `CP1` references the code page used by the collation
* `CI` means it will be case insensitive, where `CS` is case sensitive
* `AS` means it will be accent sensitive, where `AI` is accent insensitive

There are other options available related to widths, UTF-8, etc., and more details about what you can and can't do with Azure SQL [here](https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver15).
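As an illustration, you can see case sensitivity in action by forcing a collation in a comparison (a small sketch you can run against any Azure SQL database):

```sql
-- 'Apple' and 'apple' match under a case-insensitive (CI) collation...
SELECT CASE WHEN 'Apple' = 'apple' COLLATE SQL_Latin1_General_CP1_CI_AS
            THEN 'equal' ELSE 'not equal' END AS case_insensitive,
-- ...but not under the case-sensitive (CS) variant of the same collation
       CASE WHEN 'Apple' = 'apple' COLLATE SQL_Latin1_General_CP1_CS_AS
            THEN 'equal' ELSE 'not equal' END AS case_sensitive;
-- returns: equal, not equal
```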

**Step 12 - Opt-in for Advanced Data Security**

When you deploy Azure SQL DB in the portal, you are prompted if you'd like to enable [Advanced Data Security (ADS)](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-advanced-data-security) on a free trial. Select **Start free trial**. After the free trial, it is billed according to the [Azure Security Center Standard Tier pricing](https://azure.microsoft.com/en-us/pricing/details/security-center/). If you choose to enable it, you get functionality related to data discovery and classification, identifying/mitigating potential database vulnerabilities, and threat detection. You'll learn more about these capabilities in the next module (<a href="https://github.com/microsoft/sqlworkshops/blob/master/AzureSQLWorkshop/azuresqlworkshop/03-Security.md" target="_blank">03 - Security</a>). In Azure SQL MI, you can enable it on the instance after deployment.

Your "Additional settings" pane should now look similar to the image below.

@@ -199,7 +207,7 @@ If, for whatever reason, you get lost from this page and the deployment has not

Once your resource has been deployed, review the "Overview" pane for the SQL database in the Azure portal and confirm that the Status is "Online."

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="2">Activity 2</a>: Initial connect and comparison</b></p>

**Step 1 - Connect to SQL Server 2019**

Now that everything looks to be up and running in the Azure portal, let's switch to a familiar tool, SQL Server Management Studio (SSMS). Open SSMS and connect, using Windows Authentication, to the local instance of SQL Server 2019 that's running on your Azure VM (if you don't have this, please revisit the prerequisites).

@@ -230,9 +238,8 @@ Expanding the databases and system databases should result in a view similar to

Spend a few minutes clicking around and exploring the differences, at first glance, between the Azure SQL Database logical server and Azure SQL Database. You won't deploy an Azure SQL Managed Instance as part of this workshop, but the image below shows how Azure SQL Managed Instance would appear in SSMS.

![](../graphics/miserver.png)

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="3">Activity 3</a>: Verify deployment queries</b></p>

@@ -281,56 +288,58 @@ Next, select the **Explorer** icon from the left taskbar to navigate through the

![](../graphics/explorer.png)

Throughout the workshop, you'll be instructed at various points to open a notebook (file ending in `.ipynb`), and you can access those from here directly.

**Step 3 - Verify deployment queries**

Once you've deployed an instance of SQL (be it Azure SQL or SQL Server), there are typically some queries you would run to verify your deployment. In Azure SQL, some of these queries vary from SQL Server. In this step, you'll see what changes from SQL Server and what is new.

For this step, you'll use the notebook **VerifyDeployment.ipynb** which is under `azuresqlworkshop\02-DeployAndConfigure\verifydeployment\VerifyDeployment.ipynb`. Navigate to that file in ADS to complete this activity, and then return here.
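If you want a feel for the kinds of checks involved, queries along these lines (illustrative, not the notebook's exact contents) verify the engine version and, for Azure SQL DB, the edition and service objective:

```sql
-- Which engine and version are you connected to?
SELECT @@VERSION AS version_info;

-- Edition and service objective of the current database (Azure SQL DB)
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition')          AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;
```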

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="2.3">2.3 Configure</a></h2>

TODO: Put in text here that talks about the process to configure, and configure/deploy databases with Azure SQL comparing this to SQL Server

<br>

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b>(Bonus) <a name="4">Activity 4</a>: Configure with Azure CLI</b></p>

So you've seen the Azure portal, SSMS, and SQL Notebooks in ADS, but there are other tools available to you to use to manage Azure SQL. Two of the most popular are the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest) and [Azure PowerShell](https://docs.microsoft.com/en-us/powershell/azure/?view=azps-3.3.0). They are similar in their functionality, but for this workshop we will focus on the Azure CLI.

To complete this activity, you'll use a PowerShell notebook, which is the same concept as a SQL notebook, but the coding language is PowerShell. You can use PowerShell notebooks to leverage Azure CLI or Azure PowerShell, but we will focus on Azure CLI. For more information on the Azure PowerShell module, [see the documentation](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-powershell-samples?tabs=single-database). For both of these tools, you can also use the [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview), which is an interactive shell environment that you can use through your browser in the Azure portal.

In the example that follows, you'll also explore the latency effects of using different connection policies in Azure SQL.

For this activity, you'll use the notebook called **AzureCli.ipynb** which is under `azuresqlworkshop\02-DeployAndConfigure\cli\AzureCli.ipynb`. Navigate to that file in ADS to complete this activity, and then return here.

>In the `cli` folder, you'll also find a script if you want to try the activity with the Azure Cloud Shell.

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="2.4">2.4 Load data</a></h2>

TODO: Put in text here that talks about the process to load data with Azure SQL comparing this to SQL Server

<br>

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b>(Bonus) <a name="5">Activity 5</a>: Load data into Azure SQL Database</b></p>

In this activity, you'll explore one scenario for bulk loading data from Azure Blob storage into Azure SQL Database using T-SQL and Shared Access Signatures (SAS).
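The T-SQL pattern for that scenario looks roughly like the following. This is a sketch with illustrative names: the credential, data source, table, and file names are placeholders, and it assumes a database master key already exists.

```sql
-- Sketch: bulk load a CSV from Azure Blob storage using a SAS token
-- (all object names below are placeholders for illustration)
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<your-sas-token>';   -- SAS token without the leading '?'

CREATE EXTERNAL DATA SOURCE MyBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://<account>.blob.core.windows.net/<container>',
      CREDENTIAL = BlobCredential);

BULK INSERT dbo.MyTable
FROM 'data.csv'
WITH (DATA_SOURCE = 'MyBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);
```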

For this activity, you'll use the notebook called **LoadData.ipynb** which is under `azuresqlworkshop\02-DeployAndConfigure\loaddata\LoadData.ipynb`. Navigate to that file in ADS to complete this activity, and then return here.

In this module and throughout the activities, you learned how to deploy and configure Azure SQL. In the next module, you'll dive into security for Azure SQL.

<p style="border-bottom: 1px solid lightgrey;"></p>

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>

<ul>
<li><a href="https://docs.microsoft.com/en-us/sql/dma/dma-sku-recommend-sql-db?view=sql-server-ver15" target="_blank">Data Migration Assistant tool (DMA) SKU Recommender</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-get-started" target="_blank">Quickstart: Create an Azure SQL Managed Instance</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-migrate" target="_blank">How to migrate to Azure SQL Managed Instance</a></li>
</ul>


@@ -78,8 +78,14 @@

"azdata_cell_guid": "e91e1c9a-7175-45e0-879b-0f6ef150db3c",
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": ""
}
],
"execution_count": 2
},
{
"cell_type": "markdown",

@@ -100,7 +106,7 @@

"azdata_cell_guid": "828e608f-fadd-40a7-8ac2-1f5394a81412"
},
"outputs": [],
"execution_count": 3
},
{
"cell_type": "markdown",

@@ -121,12 +127,12 @@

"azdata_cell_guid": "85457b28-ac25-40a1-8dea-5cd1ca8a70f2"
},
"outputs": [],
"execution_count": 4
},
{
"cell_type": "markdown",
"source": [
"You can also determine the database size and usage."
],
"metadata": {
"azdata_cell_guid": "9439704f-73d5-4c60-a422-233e2062dcff"
@@ -144,10 +150,10 @@

{
"output_type": "stream",
"name": "stdout",
"text": "[\n {\n \"currentValue\": 556793856.0,\n \"displayName\": \"Database Size\",\n \"limit\": 34359738368.0,\n \"name\": \"database_size\",\n \"nextResetTime\": null,\n \"resourceName\": \"AdventureWorks0406\",\n \"unit\": \"Bytes\"\n },\n {\n \"currentValue\": 1610612736.0,\n \"displayName\": \"Database Allocated Size\",\n \"limit\": 34359738368.0,\n \"name\": \"database_allocated_size\",\n \"nextResetTime\": null,\n \"resourceName\": \"AdventureWorks0406\",\n \"unit\": \"Bytes\"\n }\n]\n"
}
],
"execution_count": 5
},
{
"cell_type": "markdown",
@@ -199,37 +205,65 @@

"azdata_cell_guid": "1af59f8b-afe3-43c5-863d-3549f06871ed"
},
"outputs": [],
"execution_count": 6
},
{
"cell_type": "markdown",
"source": [
"So the results above tell us the connection type is \"Default\". Let's set it to \"Proxy\" and determine the round trip time."
],
"metadata": {
"azdata_cell_guid": "57406599-f0e7-4862-ae96-feb2bf4b3d04"
}
},
{
"cell_type": "code",
"source": [
"# update policy\r\n",
"az sql server conn-policy update --connection-type Proxy\r\n",
"# confirm update\r\n",
"az sql server conn-policy show"
],
"metadata": {
"azdata_cell_guid": "5114ebd7-b37d-4cd8-abbc-4a00ba77f798"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "{\n \"connectionType\": \"Proxy\",\n \"id\": \"/subscriptions/227e9423-1792-43b0-82e6-ac94397ed789/resourceGroups/azuresqlworkshop0406/providers/Microsoft.Sql/servers/aw-server0406/connectionPolicies/default\",\n \"kind\": null,\n \"location\": null,\n \"name\": \"default\",\n \"resourceGroup\": \"azuresqlworkshop0406\",\n \"type\": \"Microsoft.Sql/servers/connectionPolicies\"\n}\n{\n \"connectionType\": \"Proxy\",\n \"id\": \"/subscriptions/227e9423-1792-43b0-82e6-ac94397ed789/resourceGroups/azuresqlworkshop0406/providers/Microsoft.Sql/servers/aw-server0406/connectionPolicies/default\",\n \"kind\": null,\n \"location\": \"West US\",\n \"name\": \"default\",\n \"resourceGroup\": \"azuresqlworkshop0406\",\n \"type\": \"Microsoft.Sql/servers/connectionPolicies\"\n}\n"
|
||||
}
|
||||
],
|
||||
"execution_count": 16
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"Then, we can update the connection policy"
|
||||
"If you want to test round trip time, you can connect with SSMS, create a new query (below), and choose to \"Include Client Statistics\" in your results. In the results, the \"Wait time on server replies\" is the best indicator of network latency. You can run this a few times to get a good average. \r\n",
|
||||
"\r\n",
|
||||
"```sql\r\n",
|
||||
"-- Proxy\r\n",
|
||||
"SELECT * FROM SalesLT.Product\r\n",
|
||||
"GO 20\r\n",
|
||||
"```\r\n",
|
||||
"\r\n",
|
||||
"After 10 trials, I had an average wait time on server replies of `46.6000`"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "fe00029a-e099-47f3-932f-7e62c4e4e9ee"
|
||||
"azdata_cell_guid": "77286271-b423-407a-9b8e-4c96d80b6d0e"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"What if we want to make everything `Redirect` so we can attempt to achieve reduced latency?\r\n",
|
||||
"\r\n",
|
||||
"First, for anything that is **outside Azure**, you need to allow inbound and outbound communication on ports in the range of 11000 - 11999. This is required for the Redirect connection policy. Since you are connecting through an Azure VM, there is no action here. \r\n",
|
||||
"\r\n",
|
||||
"Update the connection policy and confirm that update with the following two commands. "
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "7eb584ef-cc42-4dc8-920d-1b049f33fe0c"
|
||||
}
|
||||
},
|
||||
{
|
||||
|
@ -243,8 +277,68 @@
|
|||
"metadata": {
|
||||
"azdata_cell_guid": "c540779a-9c4d-472f-8ddc-6fc59b8437d4"
|
||||
},
|
||||
"outputs": [],
|
||||
"execution_count": 0
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": "{\n \"connectionType\": \"Redirect\",\n \"id\": \"/subscriptions/227e9423-1792-43b0-82e6-ac94397ed789/resourceGroups/azuresqlworkshop0406/providers/Microsoft.Sql/servers/aw-server0406/connectionPolicies/default\",\n \"kind\": null,\n \"location\": null,\n \"name\": \"default\",\n \"resourceGroup\": \"azuresqlworkshop0406\",\n \"type\": \"Microsoft.Sql/servers/connectionPolicies\"\n}\n{\n \"connectionType\": \"Redirect\",\n \"id\": \"/subscriptions/227e9423-1792-43b0-82e6-ac94397ed789/resourceGroups/azuresqlworkshop0406/providers/Microsoft.Sql/servers/aw-server0406/connectionPolicies/default\",\n \"kind\": null,\n \"location\": \"West US\",\n \"name\": \"default\",\n \"resourceGroup\": \"azuresqlworkshop0406\",\n \"type\": \"Microsoft.Sql/servers/connectionPolicies\"\n}\n"
|
||||
}
|
||||
],
|
||||
"execution_count": 17
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"Now, to test network latency from the `Redirect` policy, connect with SSMS, create a new query (below), and choose to \"Include Client Statistics\" in your results. Compare the \"Wait time on server replies\" with your query for `Proxy`. \r\n",
|
||||
"\r\n",
|
||||
"> Note: you'll need to create a **new connection to query** to evaluate (i.e. right-click on the Adventure Works database and select New Query, do not use the query window you used to test Proxy).\r\n",
|
||||
"\r\n",
|
||||
"```sql\r\n",
|
||||
"-- Redirect\r\n",
|
||||
"SELECT * FROM SalesLT.Product\r\n",
|
||||
"GO 20\r\n",
|
||||
"```\r\n",
|
||||
"\r\n",
|
||||
"After 10 trials, I had an average wait time on server replies of `25.8000`, which is almost half that of the Proxy connection policy. \r\n",
|
||||
"\r\n",
|
||||
"### To review\r\n",
|
||||
"\r\n",
|
||||
"Redirect is faster because after the initial connection, you can bypass the gateway and go straight to the database. This means fewer hops, which results in lower latency and ultimately helps prevent bottlenecks (especially important for chatty applications).\r\n",
|
||||
"\r\n",
|
||||
""
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "1b519a96-f1ea-4454-a081-2a7ffc28c38a"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"To set it back to default (or change to Proxy), you can use the same set of commands."
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "45a85215-ed6c-447f-8c06-7f59220e9915"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"# update policy\r\n",
|
||||
"az sql server conn-policy update --connection-type Default\r\n",
|
||||
"# confirm update\r\n",
|
||||
"az sql server conn-policy show"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "34f23b31-0252-436f-abdb-f0b7208f44b7"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": "{\n \"connectionType\": \"Default\",\n \"id\": \"/subscriptions/227e9423-1792-43b0-82e6-ac94397ed789/resourceGroups/azuresqlworkshop0406/providers/Microsoft.Sql/servers/aw-server0406/connectionPolicies/default\",\n \"kind\": null,\n \"location\": null,\n \"name\": \"default\",\n \"resourceGroup\": \"azuresqlworkshop0406\",\n \"type\": \"Microsoft.Sql/servers/connectionPolicies\"\n}\n{\n \"connectionType\": \"Default\",\n \"id\": \"/subscriptions/227e9423-1792-43b0-82e6-ac94397ed789/resourceGroups/azuresqlworkshop0406/providers/Microsoft.Sql/servers/aw-server0406/connectionPolicies/default\",\n \"kind\": null,\n \"location\": \"West US\",\n \"name\": \"default\",\n \"resourceGroup\": \"azuresqlworkshop0406\",\n \"type\": \"Microsoft.Sql/servers/connectionPolicies\"\n}\n"
|
||||
}
|
||||
],
|
||||
"execution_count": 18
|
||||
}
|
||||
]
|
||||
}
|
|
@ -0,0 +1,407 @@
|
|||
{
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"name": "SQL",
|
||||
"display_name": "SQL",
|
||||
"language": "sql"
|
||||
},
|
||||
"language_info": {
|
||||
"name": "sql",
|
||||
"version": ""
|
||||
}
|
||||
},
|
||||
"nbformat_minor": 2,
|
||||
"nbformat": 4,
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"# Activity 5: Load data - Azure SQL Database\r\n",
|
||||
"\r\n",
|
||||
"#### <i>The Azure SQL Workshop - Module 2</i>\r\n",
|
||||
"\r\n",
|
||||
"<p style=\"border-bottom: 1px solid lightgrey;\"></p>\r\n",
|
||||
"\r\n",
|
||||
"In this activity, you'll get to see how you can bulk load data into Azure SQL Database. \r\n",
|
||||
"\r\n",
|
||||
"\r\n",
|
||||
"**Set up - Attach the notebook to Azure SQL Database** \r\n",
|
||||
"\r\n",
|
||||
"0. You should have opened this file using Azure Data Studio. If you didn't, please refer to Module 2 Activity 3 in the main Module 2 file to get set up. \r\n",
|
||||
"1. In the bar at the top of this screen, confirm or change the \"Kernel\" to **SQL**. This determines what language the code blocks in the file are. In this case, that language is SQL. \r\n",
|
||||
"2. For \"Attach to\", use the drop-down to select **Change Connection**. From the Recent Connections pane, you should be able to select your Azure SQL Database logical server. \r\n",
|
||||
"\r\n",
|
||||
"Now that you're set up, you should read the text cells and \"Run\" the code cells by selecting the play button that appears in the left of a code cell when you hover over it. \r\n",
|
||||
"> Some of the cells have been run in advance; this is just to show you the expected results from testing the labs. If you choose not to complete the labs/prerequisites, do not run any cells; just review the results. \r\n",
|
||||
""
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "2c06b521-aaf9-41a8-9824-f06a3fb12e2c"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"When you're bulk loading data, it has to come from somewhere. In Azure, it's very common to store or dump data into [Azure Blob Storage](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) because Blob storage is optimized for storing massive amounts of unstructured data at a relatively low cost. \r\n",
|
||||
"\r\n",
|
||||
"In this scenario, AdventureWorks is receiving store return data based on store identification number (e.g. 1, 2, etc.). This return data is stored in `.dat` files, which are then pushed into Azure Blob storage. \r\n",
|
||||
"\r\n",
|
||||
"Within blob storage, there exist three types of resources: \r\n",
|
||||
"* Storage account: this provides a unique namespace for a storage account, and a way to connect or access it \r\n",
|
||||
"* Containers: these are used to organize a set of blobs. A storage account can have an unlimited number of containers \r\n",
|
||||
"* Blobs: there are several types of blobs, but here we will use block blobs, which store text and binary data and can be managed individually. \r\n",
|
||||
"\r\n",
|
||||
"Now, once the data is in blob storage, Azure SQL needs a way to access it. You can do that by [creating an external data source](https://docs.microsoft.com/en-us/sql/t-sql/statements/create-external-data-source-transact-sql?view=azuresqldb-current) that has access to the Azure Storage account. \r\n",
|
||||
"\r\n",
|
||||
"You can [control access to Azure Storage accounts](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#control-access-to-account-data) through Azure Active Directory, Shared Key authorization, or with a Shared access signature (SAS). The link points to more details, but we will use SAS for this exercise. \r\n",
|
||||
"\r\n",
|
||||
"If you want to read more about how SAS works with regards to Azure Storage, please [read here](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview) before continuing. \r\n",
|
||||
"\r\n",
|
||||
""
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "8cdf43ef-a2ce-42b1-af4c-4cb2669440f3"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"**Step 1 - Create a table and schema** \r\n",
|
||||
"\r\n",
|
||||
"First, we need to create a table and schema for our data to be loaded into. This is pretty straightforward, good old-fashioned T-SQL."
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "234764b7-3174-401f-b105-66f22bee5ab3"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"IF SCHEMA_ID('DataLoad') IS NULL \r\n",
|
||||
"EXEC ('CREATE SCHEMA DataLoad')\r\n",
|
||||
"\r\n",
|
||||
"CREATE TABLE DataLoad.store_returns\r\n",
|
||||
"(\r\n",
|
||||
" sr_returned_date_sk bigint,\r\n",
|
||||
" sr_return_time_sk bigint,\r\n",
|
||||
" sr_item_sk bigint ,\r\n",
|
||||
" sr_customer_sk bigint,\r\n",
|
||||
" sr_cdemo_sk bigint,\r\n",
|
||||
" sr_hdemo_sk bigint,\r\n",
|
||||
" sr_addr_sk bigint,\r\n",
|
||||
" sr_store_sk bigint,\r\n",
|
||||
" sr_reason_sk bigint,\r\n",
|
||||
" sr_ticket_number bigint ,\r\n",
|
||||
" sr_return_quantity integer,\r\n",
|
||||
" sr_return_amt float,\r\n",
|
||||
" sr_return_tax float,\r\n",
|
||||
" sr_return_amt_inc_tax float,\r\n",
|
||||
" sr_fee float,\r\n",
|
||||
" sr_return_ship_cost float,\r\n",
|
||||
" sr_refunded_cash float,\r\n",
|
||||
" sr_reversed_charge float,\r\n",
|
||||
" sr_store_credit float,\r\n",
|
||||
" sr_net_loss float\r\n",
|
||||
"\r\n",
|
||||
") "
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "46df44da-7cb9-48c1-b071-fee30b67d4d1"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:00:00.011"
|
||||
},
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"execution_count": 3
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"**Step 2 - Create a `MASTER KEY`** \r\n",
|
||||
"\r\n",
|
||||
"Leveraging [an example in the docs](https://docs.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15#f-importing-data-from-a-file-in-azure-blob-storage) you learn that a `MASTER KEY` is required to create a `DATABASE SCOPED CREDENTIAL` since the blob storage is not configured to allow public (anonymous) access. \r\n",
|
||||
"\r\n",
|
||||
"So, let's first create a `MASTER KEY`"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "c66497ba-8fc8-4296-829e-562ccc6a942b"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"CREATE MASTER KEY \r\n",
|
||||
"ENCRYPTION BY PASSWORD='MyComplexPassword00!';"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "65b4c01a-9bec-417b-a471-4b99c4e28cea"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:00:00.017"
|
||||
},
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"execution_count": 4
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"**Step 3 - Create a `DATABASE SCOPED CREDENTIAL`** \r\n",
|
||||
"\r\n",
|
||||
"A `MASTER KEY` is required to create a `DATABASE SCOPED CREDENTIAL`, which we can now create. The credential refers to the Azure blob storage account and the `data/` portion specifies the container where the store return data is located. \r\n",
|
||||
"\r\n",
|
||||
"We use `SHARED ACCESS SIGNATURE` as the identity which SQL knows how to interpret, and the secret provided is the SAS token that you can generate from the Azure blob storage account. \r\n",
|
||||
"\r\n",
|
||||
"> Note: the `?` at the beginning of the SAS token should be removed \r\n",
|
||||
"\r\n",
|
||||
"> Note: if you are completing this as part of an in-person workshop and were provided an environment to use, please refer to instructor guidance to obtain the SAS token. Otherwise, please refer to the **PREREQS TODO**.\r\n",
|
||||
""
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "68c55330-e433-4526-a62f-904660fb8adb"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"CREATE DATABASE SCOPED CREDENTIAL [https://azuresqlworkshopsa.blob.core.windows.net/data/]\r\n",
|
||||
"WITH IDENTITY = 'SHARED ACCESS SIGNATURE',\r\n",
|
||||
"SECRET = 'redacted';"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "26c0a508-595d-4ead-a680-b8ea422a8d68",
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:00:00.014"
|
||||
},
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"execution_count": 5
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"**Step 4 - Create an external data source to the container** \r\n",
|
||||
"\r\n",
|
||||
"> Note: `LOCATION` doesn't have a trailing `/`, even though the `CREDENTIAL` does."
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "3e65516c-6c05-4cc4-b8f4-310d9fe41da2"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"CREATE EXTERNAL DATA SOURCE dataset\r\n",
|
||||
"WITH \r\n",
|
||||
"(\r\n",
|
||||
" TYPE = BLOB_STORAGE,\r\n",
|
||||
" LOCATION = 'https://azuresqlworkshopsa.blob.core.windows.net/data',\r\n",
|
||||
" CREDENTIAL = [https://azuresqlworkshopsa.blob.core.windows.net/data/]\r\n",
|
||||
");"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "e8e3ad86-2f58-41ef-a568-18ffc9128438"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:00:01.471"
|
||||
},
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"execution_count": 6
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"**Step 5 - `BULK INSERT` a single file** \r\n",
|
||||
"\r\n",
|
||||
"You're finally ready to `BULK INSERT` one of the store return files. \r\n",
|
||||
"\r\n",
|
||||
"Review the comments before running the following cell."
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "7d0ffa7d-660a-48c1-aa6b-2a295aff2e30"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"SET NOCOUNT ON -- Reduce network traffic by stopping the message that shows the number of rows affected\r\n",
|
||||
" BULK INSERT DataLoad.store_returns -- Table you created in Step 1\r\n",
|
||||
" FROM 'dataset/store_returns/store_returns_1.dat' -- Within the container, the location of the file\r\n",
|
||||
" WITH (\r\n",
|
||||
"\t\t\tDATA_SOURCE = 'dataset' -- Using the External data source from Step 4\r\n",
|
||||
"\t\t\t,DATAFILETYPE = 'char' \r\n",
|
||||
"\t ,FIELDTERMINATOR = '\\|' \r\n",
|
||||
"\t ,ROWTERMINATOR = '\\|\\n' \r\n",
|
||||
" ,BATCHSIZE=100000 -- Reduce network traffic by inserting in batches\r\n",
|
||||
" , TABLOCK -- Minimize number of log records for the insert operation\r\n",
|
||||
" )"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "bcc16f9e-1fc5-4f51-8cd2-11d47da4b24d"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:01:07.828"
|
||||
},
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"execution_count": 7
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"In **Module 4: Performance**, there will be an opportunity to explore how you can improve the throughput and performance of bulk loading activities. \r\n",
|
||||
"\r\n",
|
||||
"For now, let's check how many rows were inserted into our table:"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "96f0dcbb-7bf6-456e-b5d8-53763eae3630"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"select count(*) from DataLoad.store_returns"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "36c0c58f-b0a1-4854-88d1-e576e31b37d0",
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:00:01.460"
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"metadata": {},
|
||||
"execution_count": 8,
|
||||
"data": {
|
||||
"application/vnd.dataresource+json": {
|
||||
"schema": {
|
||||
"fields": [
|
||||
{
|
||||
"name": "(No column name)"
|
||||
}
|
||||
]
|
||||
},
|
||||
"data": [
|
||||
{
|
||||
"0": "2807797"
|
||||
}
|
||||
]
|
||||
},
|
||||
"text/html": "<table><tr><th>(No column name)</th></tr><tr><td>2807797</td></tr></table>"
|
||||
}
|
||||
}
|
||||
],
|
||||
"execution_count": 8
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"If you want to run through the exercise again, run the following code to reset what you've done."
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "3ab4cac4-9c02-4ee1-818d-c6cbaa54ca6e"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"DROP EXTERNAL DATA SOURCE dataset\r\n",
|
||||
"DROP DATABASE SCOPED CREDENTIAL [https://azuresqlworkshopsa.blob.core.windows.net/data/]\r\n",
|
||||
"DROP TABLE DataLoad.store_returns\r\n",
|
||||
"DROP MASTER KEY"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "297d59bb-08be-4680-94e4-028161ac0b4e"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Commands completed successfully."
|
||||
},
|
||||
"metadata": {}
|
||||
},
|
||||
{
|
||||
"output_type": "display_data",
|
||||
"data": {
|
||||
"text/html": "Total execution time: 00:00:00.042"
|
||||
},
|
||||
"metadata": {}
|
||||
}
|
||||
],
|
||||
"execution_count": 2
|
||||
}
|
||||
]
|
||||
}
|
|
@ -4,4 +4,14 @@ These represent demos and examples you can run that accompany Module 2. See [Mod
|
|||
|
||||
## verifydeployment
|
||||
|
||||
Run the verify deployment queries and review the results across Azure SQL Database, Azure SQL Managed Instance, and SQL Server 2019. Main notebook file [here](./verifydeployment/VerifyDeployment.ipynb).
|
||||
Run the verify deployment queries and review the results across Azure SQL Database, Azure SQL Managed Instance, and SQL Server 2019. Main notebook file [here](./verifydeployment/VerifyDeployment.ipynb).
|
||||
|
||||
## cli
|
||||
|
||||
Get started managing your Azure SQL resources using the Azure CLI. In the example that follows, you'll also explore the latency effects of using different connection policies in Azure SQL. Main notebook file [here](./cli/AzureCLI.ipynb).
|
||||
|
||||
## loaddata
|
||||
|
||||
In this activity, you'll explore one scenario for loading data from Azure Blob storage using T-SQL and Shared Access Signatures (SAS). Main notebook file [here](./loaddata/LoadData.ipynb).
|
||||
|
||||
|
||||
|
|
|
@ -32,7 +32,7 @@
|
|||
"2. For \"Attach to\", use the drop-down to select **Change Connection**. From the Recent Connections pane, you should be able to select your local SQL Server instance. \r\n",
|
||||
"\r\n",
|
||||
"Now that you're set up, you should read the text cells and \"Run\" the code cells by selecting the play button that appears in the left of a code cell when you hover over it. \r\n",
|
||||
"> All of the cells have been run in advance, so you can just review the results for SQL Server 2019. \r\n",
|
||||
"> All of the cells have been run in advance, so you can either just review the results or run the commands for SQL Server 2019. \r\n",
|
||||
""
|
||||
],
|
||||
"metadata": {
|
||||
|
@ -2539,15 +2539,6 @@
|
|||
],
|
||||
"execution_count": 3
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"TODO: Commentary on results for above queries"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "9282b039-ff63-4494-9fac-3dd2a329d2ae"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
|
@ -3279,12 +3270,12 @@
|
|||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"There are, however, some Azure SQL queries you can use to get more information about your Azure SQL Database or Managed Instance deployment, two examples follow:\r\n",
|
||||
"There are some Azure SQL queries you can use to get more information about your Azure SQL Database or Managed Instance deployment; two examples follow:\r\n",
|
||||
"\r\n",
|
||||
"* `sys.dm_user_db_resource_governance` returns actual configuration and capacity settings used by resource governance mechanisms in the current database or elastic pool\r\n",
|
||||
"* `sys.dm_db_resource_stats` returns CPU, I/O, and memory consumption for an Azure SQL database or databases in Managed Instance \r\n",
|
||||
"\r\n",
|
||||
"Run and review the queries below. Note that these do not apply to SQL Server"
|
||||
"Run and review the queries below. Note that these do not apply to SQL Server."
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "06a05a1f-9864-4ef9-bb84-4cf1b8a83156"
|
||||
|
@ -3328,7 +3319,7 @@
|
|||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"Finally, you made use `sys.dm_exec_requests` as a sanity check on currently running SQL workers"
|
||||
"Finally, you can use `sys.dm_exec_requests` as a sanity check on currently running SQL workers"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "9d8d5ee7-b577-4588-a42b-7e3fd008375b"
|
||||
|
@ -5764,24 +5755,6 @@
|
|||
],
|
||||
"execution_count": 8
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
"TODO: Add commentary on above"
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "abbfc233-5ec9-40c1-8732-ea669f715f07"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
""
|
||||
],
|
||||
"metadata": {
|
||||
"azdata_cell_guid": "b78d02fa-0f5e-4938-80f9-d87b4e98d83c"
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"source": [
|
||||
|
|
|
@ -15,61 +15,609 @@ Ensuring security and compliance of your data is always a top priority. In this
|
|||
|
||||
|
||||
In this module, you'll cover these topics:
|
||||
[3.1](#3.1): TODO
|
||||
[3.2](#3.2): TODO
|
||||
[3.3](#3.3): TODO
|
||||
[3.1](#3.1): Platform and network security
|
||||
[Activity 1](#1): Create and manage firewall rules for Azure SQL Database
|
||||
[3.2](#3.2): Access management and Authorization
|
||||
[Activity 2](#2): Getting started with Azure AD authentication
|
||||
[3.3](#3.3): Information protection and encryption
|
||||
[Activity 3](#3): Confirm TDE is enabled
|
||||
[3.4](#3.4): Security management
|
||||
[Activity 4](#4): Auditing
|
||||
[Activity 5](#5): Advanced data security
|
||||
(Bonus) [Activity 6](#6): Data classification, Dynamic data masking, and SQL Audit
|
||||
|
||||
[Activity 1](#1): TODO
|
||||
[Activity 2](#2): TODO
|
||||
|
||||
|
||||
<p style="border-bottom: 1px solid lightgrey;"></p>
|
||||
|
||||
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3.1">3.1 TODO: Topic Name</h2></a>
|
||||
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3.1">3.1 Platform and network security</h2></a>
|
||||
|
||||
TODO: Put in text here that talks about the process for network security with Azure SQL comparing this to SQL Server
|
||||
|
||||
|
||||
TODO: Topic Description
|
||||
|
||||
<br>
|
||||
|
||||
<img style="height: 400; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);" src="linkToPictureEndingIn.png">
|
||||
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><a name="1"><b>Activity 1</a>: Create and manage firewall rules for Azure SQL Database</b></p>
|
||||
|
||||
<br>
|
||||
|
||||
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><a name="1"><b>Activity 1</a>: TODO: Activity Name</b></p>
|
||||
|
||||
TODO: Activity Description and tasks
|
||||
In this short activity, you'll see how to review and manage your firewall rules using the Azure portal.
|
||||
|
||||
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Description</b></p>
|
||||
|
||||
TODO: Enter activity description with checkbox
|
||||
During deployment of Azure SQL Database, you set "Allow Azure services and resources to access this server" to **ON**. If you can, switching it to **OFF** is the most secure configuration. This can be complicated, since it means you'll have to specify a range of IP addresses for all of your connections. In this activity, you'll simply see how to view and edit your firewall rules. In reality, you'll want to partner with your networking team to ensure you have the most secure, functional network. A few handy resources include:
|
||||
* [Azure SQL Database network access controls](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-networkaccess-overview)
|
||||
* [Connecting your applications to Managed Instance](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-connect-app)
|
||||
* [IP firewall rules for Azure SQL Database](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure)
|
||||
|
||||
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>
|
||||
|
||||
TODO: Enter activity steps description with checkbox
|
||||
**Step 1 - create and manage firewall rules with the Azure portal**
|
||||
In your Azure virtual machine, navigate to the Azure portal, specifically to your Azure SQL Database logical server. Select **Firewalls and virtual networks** from the left-hand menu.
|
||||
|
||||
![](../graphics/fwvn.png)
|
||||
|
||||
Switch "Allow Azure services and resources to access this server" to **OFF**. During deployment, you should have added your client IP address already, but if none of the rules match the client IP displayed (see below), select **Add Client IP**.
|
||||
|
||||
![](../graphics/clientip.png)
|
||||
|
||||
Finally, select **Save**.
|
||||
|
||||
To confirm you still have access from your Azure VM, navigate to SSMS and refresh your connection to the Azure SQL Database logical server. If no errors occur, you have successfully configured access to your Azure SQL Database logical server for your IP address only.
|
||||
|
||||
![](../graphics/dbrefresh.png)
|
||||
|
||||
**Step 2 - Create and manage firewall rules with the Azure Cloud Shell**
|
||||
|
||||
You can also use the `az sql server firewall-rule` commands to create, delete, and view server-level firewall rules. You can use the Azure CLI from the command line of your Azure VM or through a PowerShell notebook. For this step, you'll experiment with the Azure Cloud Shell.
|
||||
|
||||
Return to the Azure portal in your Azure VM. In the top bar, select the Azure Cloud Shell button.
|
||||
|
||||
![](../graphics/cloudshell.png)
|
||||
|
||||
If this is your first time using the Azure Cloud Shell, you will be prompted to select a subscription to create a storage account and Microsoft Azure Files share. For this workshop, you can just use any of the storage accounts that are in your resource group already. More information about the Azure Cloud Shell can be found in the [documentation](https://docs.microsoft.com/en-us/azure/cloud-shell/overview).
|
||||
|
||||
Then, you can select Bash or PowerShell. Select **Bash**. You should see a view similar to below.
|
||||
|
||||
![](../graphics/acsbash.png)
|
||||
|
||||
Next, run `az account list` to find the name of the subscription you are using for the workshop.
|
||||
|
||||
Then, run `az account set --subscription 'my-subscription-name'` to set the default subscription for this Azure Cloud Shell session. You can confirm this worked by running `az account show`.
|
||||
|
||||
Now that you're set up, you can list your server's firewall settings with the following command:
|
||||
|
||||
```bash
|
||||
az sql server firewall-rule list -g <ResourceGroup> -s <Server>
|
||||
```
|
||||
Your client IP address rule should match what you saw in the previous step using the Azure portal.
|
||||
|
||||
![](../graphics/fwlist.png)
|
||||
|
||||
There are other commands available for creating, deleting, and updating rules, which you can explore [here](https://docs.microsoft.com/en-us/cli/azure/sql/server/firewall-rule?view=azure-cli-latest).
|
||||
|
||||
Note that this method of setting the firewall rules (using the Azure portal or Azure Cloud Shell) grants your client IP address access to all of the databases that are in that logical server. After you've configured the server-level firewall rule, which you did above, you can optionally configure database-level firewall rules that apply to individual databases. This can only be done with T-SQL, using the command `EXECUTE sp_set_database_firewall_rule`. For more information, see the references in the **Description** of this activity.
|
||||
|
||||
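For example, a database-level rule can be created with the following T-SQL sketch, run in the context of the target database itself (the rule name and IP address here are placeholder values you would replace with your own):

```sql
-- Run in the target database (not master):
-- allow a single client IP to reach only this database
EXECUTE sp_set_database_firewall_rule
    @name = N'WorkshopClientRule',
    @start_ip_address = '203.0.113.5',
    @end_ip_address = '203.0.113.5';

-- List the database-level rules to confirm
SELECT * FROM sys.database_firewall_rules;
```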
<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3.2">3.2 Access management and authorization</h2></a>

Authentication in Azure SQL works much as it does in SQL Server: you can use SQL authentication (logins and users with passwords), and you can use Azure Active Directory (Azure AD) authentication, the cloud counterpart to Windows authentication, which adds centralized identity management and multi-factor authentication. Authorization works the same way you're used to in SQL Server, through database roles and granular permissions.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><a name="2"><b>Activity 2</a>: Getting started with Azure AD authentication</b></p>

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Description</b></p>

In this activity, you'll learn how to configure an Azure AD administrator at the server level for Azure SQL Database. Next, you'll change your connection in SSMS from SQL authentication to Azure AD authentication, and you'll see how to grant other Azure AD users access to the database, just like normal users in SQL Server.

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

**Step 1 - Create an Azure AD admin**

In the Azure portal, navigate to your Azure SQL Database logical server. In the left-hand task menu, select **Active Directory Admin** and then **Set Admin**.

![](../graphics/aadadmin.png)

Search for your account. The easiest way is to type in your full email address. Click your user and then choose **Select**.

![](../graphics/aadselect.png)

You might think that's it, but you still have to select **Save** to confirm your actions.

![](../graphics/aadsave.png)

**Step 2 - Authenticate using Azure AD**

Now that you've configured access for yourself to your Azure SQL Database logical server, let's update the connection in SSMS (the steps are similar in Azure Data Studio).

First, in SSMS, right-click on your Azure SQL Database logical server and select **Connect**.

![](../graphics/dbconnect.png)

Notice that under *Authentication*, there are several different Azure Active Directory authentication methods, which will depend on how your organization is set up. For this workshop, select **Azure Active Directory - Password**.

![](../graphics/connecttoserver.png)

> Note: If you get the following error, it indicates your organization requires you to select **Azure Active Directory - Universal with MFA**. Connect accordingly.
>
> ![](../graphics/cannotconnect.png)

Next to the server name, you should now be able to see that you are authenticated using your Azure AD account and not the `cloudadmin` SQL user as before.

![](../graphics/aadc.png)

**Step 3 - Grant other users access (SQL)**

Now that you're authenticated using Azure AD, your next step might be to add other users. Just as in SQL Server, you can add new SQL users. In SSMS, using your Azure AD connection, right-click on your database and create a new query. Run the following.

> Note: You must right-click on the **database** within your Azure SQL Database logical server. In SQL Server and Azure SQL managed instance, you can query at the server level and use `USE DatabaseName`, but in Azure SQL Database, you must query the database directly; the `USE` statement is not supported.

```sql
-- Create a new SQL user and give them a password
CREATE USER ApplicationUser WITH PASSWORD = 'YourStrongPassword1';

-- Until you run the following two lines, ApplicationUser has no access to read or write data
ALTER ROLE db_datareader ADD MEMBER ApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER ApplicationUser;
```

As you likely already know, the best practice is to create non-admin accounts at the database level, unless they need to execute administrator tasks.

**Step 4 - Grant other users access (Azure AD)**

Azure AD authentication is a little different. From the documentation: "*Azure Active Directory authentication requires that database users are created as contained. A contained database user maps to an identity in the Azure AD directory associated with the database and has no login in the master database. The Azure AD identity can either be for an individual user or a group.*"

Additionally, the Azure portal can only be used to create administrators, and Azure RBAC roles don't propagate to Azure SQL Database logical servers, Azure SQL Databases, or Azure SQL managed instances. Additional server/database permissions must be granted using T-SQL.

How you complete this next step will depend on how you are consuming this workshop. If you were given an environment, find a neighbor to work with. If you are doing this self-paced, or in a group that spans multiple organizations, just review this step, observing the screenshots.

1. With your neighbor, first determine who will be *Person A* and who will be *Person B*.
1. Both *Person A* and *Person B* should note their Azure VM's **Public IP Address** (you can locate this in the Azure portal).
1. *Person A* should run the following T-SQL to authorize *Person B* on their server:
    ```sql
    -- Create the Azure AD user with access to the server
    CREATE USER <Person B Azure AD account> FROM EXTERNAL PROVIDER;

    -- Create a firewall rule to allow Person B's Azure VM
    EXECUTE sp_set_firewall_rule @name = N'AllowPersonB',
        @start_ip_address = 'Person B VM Public IP',
        @end_ip_address = 'Person B VM Public IP'
    ```
1. *Person B* should run the following T-SQL to authorize *Person A* on their server:
    ```sql
    -- Create the Azure AD user with access to the server
    CREATE USER <Person A Azure AD account> FROM EXTERNAL PROVIDER;

    -- Create a firewall rule to allow Person A's Azure VM
    EXECUTE sp_set_firewall_rule @name = N'AllowPersonA',
        @start_ip_address = 'Person A VM Public IP',
        @end_ip_address = 'Person A VM Public IP'
    ```
1. *Person A* should now try to connect to *Person B*'s Azure SQL Database logical server.
1. *Person B* should now try to connect to *Person A*'s Azure SQL Database logical server.
1. Compare results.
<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3.3">3.3 Information protection and encryption</h2></a>

As in SQL Server, Azure SQL protects data at rest with Transparent Data Encryption (TDE) and in transit with TLS. The main difference is that in Azure SQL, TDE is enabled by default for new databases, and the service can manage encryption keys and key rotation for you.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><a name="3"><b>Activity 3</a>: Confirm TDE is enabled</b></p>

This is a quick activity to show how easily you can confirm that TDE is enabled, or enable it if it is not.

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

In the Azure portal, navigate to your Azure SQL Database, and in the left-hand menu, under Security, select **Transparent data encryption**. Confirm your database is set to **ON**.

![](../graphics/tdeon.png)

Next, navigate to your Azure SQL Database logical server, and in the left-hand menu, under Security, select **Transparent data encryption**. Notice that you have a different view:

![](../graphics/tdeoption.png)

The default is to let the Azure service manage your key. As it says, Azure will automatically generate a key to encrypt your databases and manage the key rotations. You've seen how to do this with the Azure portal, but you can also use PowerShell, the Azure CLI, T-SQL, or REST APIs. For more details, [refer here](https://docs.microsoft.com/en-us/azure/sql-database/transparent-data-encryption-azure-sql?tabs=azure-portal).
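For instance, here is a quick sketch of how you could confirm the TDE state from T-SQL, run against your database:

```sql
-- is_encrypted = 1 means TDE is enabled for the database
SELECT [name], is_encrypted
FROM sys.databases;

-- encryption_state = 3 means the database is encrypted
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```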

Alternatively, you can bring your own key (BYOK) leveraging Azure Key Vault. In this scenario, you (not Azure) are responsible for, and in full control of, the key lifecycle (key creation, rotation, deletion), key usage permissions, and auditing of operations on keys. For more information on Azure SQL TDE with BYOK, [refer here](https://docs.microsoft.com/en-us/azure/sql-database/transparent-data-encryption-byok-azure-sql?view=sql-server-ver15).
<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3.4">3.4 Security management</h2></a>

Just as with SQL Server, security in Azure SQL is an ongoing management task: you need to audit activity, monitor for threats, and stay on top of vulnerabilities. The difference is that Azure SQL builds these capabilities into the platform, so features that require significant setup on-premises (such as auditing and threat detection) can be enabled with a few clicks.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><a name="4"><b>Activity 4</a>: Auditing</b></p>

The auditing feature tracks database events and writes them to an audit log in Azure storage, Azure Monitor logs, or an Event Hub. Auditing helps you maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate potential security violations. In this activity, you'll set up auditing at the server level (it's also available at the database level).

> **Aside**: The main differences between auditing in Azure SQL and auditing in SQL Server are:
> * With Azure SQL Database, auditing can be configured at the server or database level; with Azure SQL Managed Instance and SQL Server, it is at the server level.
> * XEvent auditing supports Azure Blob storage targets.
> * [SQL Server Auditing](https://docs.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-database-engine?view=sql-server-ver15) is only available (with some differences) in Azure SQL Managed Instance.
> * For Azure SQL Managed Instance specifically:
>   * With `CREATE AUDIT`, the new `TO URL` and `TO EXTERNAL MONITOR` syntax lets you specify an Azure Blob storage container and enable Event Hub and Azure Monitor logs targets, respectively.
>   * `TO FILE`, the shutdown option, and `queue_delay = 0` are not supported in Azure SQL.

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

**Step 1 - Enable auditing on the Azure SQL Database logical server**

Open the Azure portal and navigate to your Azure SQL Database. In the left-hand task menu, under Security, select **Auditing**. Review the options and then select **View server settings**. The Microsoft recommendation is to apply auditing at the server level, which then applies to all databases within the Azure SQL Database logical server.

![](../graphics/dbaudit.png)

Next, set **Auditing** to **ON**. Notice that you have different options for your log destination, depending on how you want to audit your data. In this lab, you'll configure Storage and Log Analytics. In a later activity in this module, you'll get to look at the logs in both. You can also explore the implementations by reviewing [the documentation](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-auditing).

Select **Log Analytics (Preview)** and then the **Configure** button.

![](../graphics/serveraudit.png)

Next, select **+ Create New Workspace**.

![](../graphics/newws.png)

Fill in the information according to the subscription, resource group, and location that you are using to complete this workshop. We recommend naming your Log Analytics workspace **azuresqlworkshopID-la**, using the same unique ID as your other resources. Select **OK**.

![](../graphics/laws.png)

This may take a few moments to validate and create. You should now see your Log Analytics account.

Next, select **Storage**. This option allows you to collect XEvent log files in an Azure Blob storage account. In a later activity, you'll see more on how this differs from Log Analytics. Select **Configure**.

![](../graphics/configstorage.png)

Next, select the subscription you're using for this workshop, as well as the storage account that was created for use with Advanced data security (it should be *sql* + *a random string of letters and numbers*). In this storage account, auditing logs will be saved as a collection of blob files within a container named **sqldbauditlogs**.

You also have options for the number of days to retain data. The default, **0**, means retain data forever. You can change this if you want to cut back on the storage that may be generated (and charged) here. For this exercise, enter **7**.

Finally, you can decide which storage access key to use. Note that you can use this to switch between keys when it's time to rotate them. Select **Primary**.

After you've configured your options, select **OK**.

![](../graphics/sasql.png)

Select **Save**.

![](../graphics/savela.png)

Once it saves, you can select the **X** button to close the server-level Auditing pane. Back in the Azure SQL Database Auditing pane, you may notice that the **Auditing** option says **OFF**. It's important to note that if auditing is enabled on the server, it always applies to the database.

![](../graphics/dbauditoff.png)

This is the end of this activity. In a later activity in this module, you'll see how to analyze the audit logs with information from Data Discovery & Classification in a Security dashboard as well as in SSMS.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><a name="5"><b>Activity 5</a>: Advanced Data Security</b></p>

Advanced data security (ADS) is a unified package of advanced SQL security capabilities, providing a single go-to location for enabling and managing three main capabilities:
* [Data discovery & classification](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-data-discovery-and-classification)
* [Vulnerability assessment](https://docs.microsoft.com/en-us/azure/sql-database/sql-vulnerability-assessment)
* [Advanced Threat Protection](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-threat-detection-overview)

In this activity, you'll enable ADS and explore some of the features within each of the capabilities mentioned above.

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

**Step 1 - Enable ADS**

In the Azure portal, navigate to your Azure SQL Database logical server. Then, in the left-hand menu, under Security, select **Advanced data security**. If you followed the deployment activity in Module 2, ADS should already be enabled. If it is not, select **ON** and then **Save**.

![](../graphics/adson.png)

**Step 2 - ADS server-level settings**

In this step, you'll review the selections you've made for your Azure SQL Database logical server. In the same pane as Step 1 (Azure SQL Database logical server > Security > Advanced data security), you will also see information regarding Vulnerability Assessment and Advanced Threat Protection.

At the highest level, SQL Vulnerability Assessment (VA) is a scanning service that provides visibility into your security state, along with actionable steps to address any potential concerns. When you configure periodic recurring scans, you enable the service to scan your databases every seven days and check for vulnerabilities. You can then choose to send those reports to admins, subscription owners, or anyone else who needs to be notified of changes. For this service to operate, you have to specify a storage account where the results are stored. This storage account was deployed along with your Azure SQL Database, because you opted in to ADS. Review the options and add your email address if you want to receive the recurring scan reports.

![](../graphics/vasettings.png)

Lastly, you can configure your Advanced Threat Protection (ATP) settings. ATP enables you to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. To check the ATP alert types available, select **All** under Advanced Threat Protection types.

![](../graphics/atptypes.png)

Just as you can configure who receives the VA scans, you can configure who receives the ATP alerts. Review the options and add your email address if you want to be alerted (recommended for a future lab).

![](../graphics/atpsettings.png)

Once you've updated all your settings, don't forget to select **Save**.

![](../graphics/save.png)

Configuring these settings will enable you to complete some of the other steps in this activity, so you'll see more of VA and ATP soon.

**Step 3 - Data Discovery & Classification**

Navigate back to your Azure SQL Database (not the logical server!). In the left-hand menu, under Security, select **Advanced data security**.

![](../graphics/adsdashboard.png)

First, you'll review Data Discovery & Classification (DD&C). Select the **Data Discovery & Classification** box. This wizard-style view is similar (but not identical) to the Data Discovery & Classification tool that exists in SQL Server today through SSMS. Using the SSMS wizard is **not supported** for Azure SQL, but you can achieve similar functionality here.

Select the information bar that says **We have found XX columns with classification recommendations**.

![](../graphics/recs.png)

DD&C tries to identify potentially sensitive data based on the column names in your tables. Review some of the suggested labels, then choose **Select all** and **Accept selected recommendations**.

![](../graphics/ddcrecs.png)

Then, select **Save** near the top left corner.

![](../graphics/save.png)

Finally, select **Overview** to view the overview dashboard and review the classifications you've added.

![](../graphics/ddcoverview.png)

**Step 4 - Vulnerability Assessment**

Select the **X** in the top right corner of DD&C to get back to the ADS dashboard. Next, you'll review the Vulnerability Assessment (VA) capabilities. Start by selecting the **Vulnerability Assessment** box.

![](../graphics/adsdashboard2.png)

Next, select **Scan** to get the most current VA results. This will take a few moments while VA scans all the databases in your Azure SQL Database logical server.

![](../graphics/vascan.png)

Your resulting view should be similar to the one below.

![](../graphics/vadashboard.png)

Every security risk has a risk level (high, medium, or low) and additional information. Select the security check **VA2065** to get a detailed view, similar to below. Review the status and other available information.

![](../graphics/va20651.png)
![](../graphics/va20652.png)

In this case, VA is suggesting that you configure a baseline of your server-level firewall rules. Once you have a baseline, you can monitor and assess any changes.

Depending on the security check, there will be alternate views and recommendations. For this security check, select the **Approve as Baseline** button at the top of the details page.

You can now re-scan your logical server to confirm that you are passing **VA2065**.

![](../graphics/vabaseline.png)

You can then optionally complete another scan and confirm that VA2065 now shows up as a **Passed** security check.

![](../graphics/va20653.png)

**Step 5 - Advanced Threat Protection overview**

Select the **X** in the top right corner of VA to get back to the ADS dashboard. Select the **Advanced Threat Protection** (ATP) box to drill in and review the results.

![](../graphics/adsdashboard3.png)

Most likely, you won't see any security alerts yet. In the next step, you will run a test that triggers an alert, so you can review the results in ATP.

**Step 6 - Testing ATP capabilities**

TODO with help of Bob
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b>(Bonus) <a name="6">Activity 6</a>: Data classification, Dynamic Data Masking, and SQL Audit</b></p>

In this activity, you will learn how to audit users who try to view columns that were marked for data classification. This activity combines several of the things you've already learned in this module and takes them to the next level.

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

**Step 1 - Add a new data classification manually**

In the Azure portal, navigate to your Azure SQL Database (not the logical server!). In the left-hand menu, under Security, select **Advanced data security** and then select the **Data Discovery & Classification** box.

![](../graphics/adsdashboard4.png)

Next, select the **Classification** tab and then select **+ Add classification**.

![](../graphics/addddc.png)

In a previous activity, you added all the recommended column classifications. In this step, you will *manually* add a potentially sensitive column to the list of classified columns.

In the SalesLT Customer table, DD&C identified FirstName and LastName for classification, but not MiddleName. Using the drop-downs, add it now. Then, select **Add classification**.

![](../graphics/addddc2.png)

Select **Save**.

![](../graphics/save.png)

You can confirm that this was successful by viewing the **Overview** tab and confirming that MiddleName is now present in the list of classified columns under the SalesLT schema.
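Classifications can also be managed with T-SQL. As a sketch (the label and information type shown here are example values you would adapt to your own taxonomy):

```sql
-- Classify the column from T-SQL instead of the portal
ADD SENSITIVITY CLASSIFICATION TO SalesLT.Customer.MiddleName
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Name');

-- Review the classifications defined on the database
SELECT * FROM sys.sensitivity_classifications;
```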

**Step 2 - Apply Dynamic Data Masking to the name columns**

Dynamic Data Masking (DDM) is available in Azure SQL as well as in SQL Server. It limits data exposure by masking sensitive data for non-privileged users. Azure SQL will recommend columns for you to mask, or you can add masks manually. Here, you'll mask the FirstName, MiddleName, and LastName columns that you reviewed in the previous step.

In the Azure portal, navigate to your Azure SQL Database. In the left-hand menu, under Security, select **Dynamic Data Masking** and then select **+ Add mask**.

![](../graphics/addmask.png)

First, select the **SalesLT** schema, **Customer** table, and **FirstName** column. Then, review the options for masking; the default is good for this scenario. Select **Add** to add the masking rule.

![](../graphics/addmask2.png)

Repeat this for both **MiddleName** and **LastName** in that table.

Afterward, you should have three masking rules, similar to below.

![](../graphics/addmask3.png)

Select **Save**.

![](../graphics/save.png)
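The same masks can be applied with T-SQL. A minimal sketch using the default masking function:

```sql
-- Apply the default mask; non-privileged users see a masked value instead of the data
ALTER TABLE SalesLT.Customer
ALTER COLUMN FirstName ADD MASKED WITH (FUNCTION = 'default()');

-- Review the masking rules defined in the database
SELECT [name], masking_function
FROM sys.masked_columns;
```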

**Step 3 - Query classified and masked columns**

Now, navigate to SSMS and create a new query against your AdventureWorks database.

![](../graphics/newquery.png)

> Note: You should be connected to this logical server using Azure AD authentication. If you are connected as `cloudadmin`, create a new connection and connect using Azure AD authentication.

Now, run the following query to return the classified (and in some cases masked) data.

```sql
SELECT TOP 10 FirstName, MiddleName, LastName
FROM SalesLT.Customer
```

You should get a result of the first ten names, with no masking applied. Why? Because you are the admin for this Azure SQL Database logical server.

![](../graphics/names.png)

Now, run the following query to create a new user and run the previous query as that user. The first few commands are a repeat from Activity 2, Step 3.

```sql
-- Create a new SQL user and give them a password
CREATE USER Bob WITH PASSWORD = 'gocowboys1!';

-- Until you run the following two lines, Bob has no access to read or write data
ALTER ROLE db_datareader ADD MEMBER Bob;
ALTER ROLE db_datawriter ADD MEMBER Bob;

-- Execute as our new, low-privilege user, Bob
EXECUTE AS USER = 'Bob';
SELECT TOP 10 FirstName, MiddleName, LastName
FROM SalesLT.Customer;
REVERT;
```

This time, you should get a result of the first ten names, but with masking applied. Bob has not been granted access to the unmasked form of this data.

![](../graphics/names2.png)

**Step 4 - Exclude users from masking**

What if, for some reason, Bob needs access to the names and gets permission to have it?

You can update the users excluded from masking in the Azure portal (in the Dynamic Data Masking pane under Security), but you can also do it with T-SQL. Use the query below to allow Bob to query the name results without masking.

```sql
GRANT UNMASK TO Bob;
EXECUTE AS USER = 'Bob';
SELECT TOP 10 FirstName, MiddleName, LastName
FROM SalesLT.Customer;
REVERT;
```

Your results should include the names in full.

![](../graphics/names.png)

Finally, you can also take away a user's unmasking privilege, and confirm the effect, with the following T-SQL.

```sql
-- Remove the unmasking privilege
REVOKE UNMASK TO Bob;

-- Execute as Bob
EXECUTE AS USER = 'Bob';
SELECT TOP 10 FirstName, MiddleName, LastName
FROM SalesLT.Customer;
REVERT;
```

Your results should include the masked names.

![](../graphics/names2.png)

**Step 5 - Analyze audit logs from Azure Blob storage with SSMS**

As an admin, you may want to review and audit who is accessing the databases, and specifically the classified data. Next, you'll take a look at the audit files that are being sent to Azure Blob storage. The first thing you have to do is merge the audit files, in case the logs span multiple files. You can do this from SSMS. First, select **File** > **Open** > **Merge Audit Files**.

![](../graphics/fileaudit.png)

Next, select **Add**.

![](../graphics/fileauditadd.png)

Specify that you want to add the files from Azure Blob storage and select **Connect**.

![](../graphics/fileauditconnect.png)

Now sign in to Azure with the account you are using for this workshop.

![](../graphics/fileauditsignin.png)

Select the subscription, storage account, and blob container you configured audit logs to go to (refer to your selection in the Azure portal under your Azure SQL Database logical server's Auditing blade). The container will be called `sqldbauditlogs`.

![](../graphics/fileauditselect.png)

Select your Azure SQL Database logical server and your AdventureWorks database. The merge will include everything from the start of the day up to the moment you select the database name. Select **OK**.

![](../graphics/fileauditok.png)

The confirmation window lets you know how many files are being downloaded and merged. Select **OK**.

![](../graphics/downloadconf.png)

Review the files and select **OK** one last time.

![](../graphics/mergeaudit.png)

You should now be able to see all the audit logs. Look for where you were testing masking with Bob. You can select a statement, and then use the detail pane below to review the information. For example, for one of the queries where Bob tries to view classified data, under the `data_sensitivity_information` field, you can see the data that is classified. For more information on the naming conventions in audit logs, [see here](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-audit-log-format).

This merged file can then be exported to an XEL or CSV file (or to a table) for additional analysis. You can also query the [Extended Events files using PowerShell](https://sqlscope.wordpress.com/reading-extended-event-files-using-client-side-tools-only/).
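You can also read the blob audit files directly with T-SQL via `sys.fn_get_audit_file`. A sketch (the storage account name and path segments are placeholders for your own):

```sql
-- Read audit records straight from the Blob storage container
SELECT event_time, server_principal_name, statement, data_sensitivity_information
FROM sys.fn_get_audit_file(
    'https://<storage-account>.blob.core.windows.net/sqldbauditlogs/<server>/<database>/',
    DEFAULT, DEFAULT);
```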
**Step 6 - Analyze audit logs with Log Analytics**
Analyzing your audit logs will depend on your preference, and the method in Step 5 may be more familiar. In this step, you'll be exposed to querying security logs in the Azure portal with Log Analytics.

In the Azure portal, navigate to your Azure SQL Database. In the left-hand menu, under Security, select **Auditing**. Then select **View audit logs**.

![](../graphics/viewauditlogs.png)

You should now be able to see a query of your event records, options to run it in Query Editor (run T-SQL queries through the portal), options for Log Analytics/View dashboard, and more.

![](../graphics/auditrecords.png)

Feel free to click around to understand what some of the options are.
Then, click on **Log Analytics**. This takes you to a query editor, but the language here is not T-SQL. This view allows you to query logs using the Kusto Query Language (KQL), which is meant to be easy for SQL professionals to learn and use. For the KQL documentation, [refer here](https://docs.microsoft.com/en-us/azure/kusto/query/).
The default query targets the category `SQLSecurityAuditEvents`. While you might use this category now to view security-related incidents, the same tool can be used to query other Azure logs and categories in [Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-query-overview).
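For example, here is a small KQL sketch that counts audit events per login over the last day. It assumes audit events flow into the `AzureDiagnostics` table (the default schema for this diagnostic setting); the `_s` suffix on the string column is part of that assumed schema.

```kusto
// Audit events from the last 24 hours, grouped by the login that issued them.
AzureDiagnostics
| where Category == "SQLSecurityAuditEvents"
| where TimeGenerated > ago(1d)
| summarize events = count() by server_principal_name_s
| order by events desc
```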
![](../graphics/laview.png)

This workshop won't go deep into KQL querying of logs, but there are many resources in the references above if you want more practice later.

**Step 7 - Analyze audit logs and monitor security with the Log Analytics SQL Security dashboard**

In this step, you'll see how SQL Security has built a dashboard based on Log Analytics for you to monitor and audit the logs and other SQL activity. To get back to the audit records, select the **X** in the top right corner of the Log Analytics query window.

Then, select **View dashboard**.

![](../graphics/viewdb.png)

You should now see an overview dashboard. Drill into **Azure SQL - Access to Sensitive Data**.

![](../graphics/securitydb.png)

You can use this drill-down to find out:

1. How many queries are accessing sensitive data
1. Which types and sensitivities of data are being accessed
1. Which principals are accessing sensitive data
1. Which IPs are accessing sensitive data

Review what's available here, and how you can audit usage with this tool.

When you're done, select the **X** in the top right corner of the **Azure SQL - Access to Sensitive Data** tab.

Back in the overview, select **Azure SQL - Security Insights**.

![](../graphics/securitydb.png)

This dashboard gives more auditing information to help you understand database activity and gain insight into anomalies. Spend a few minutes reviewing the options here.

> Looking for another bonus security activity? Try this tutorial: [Always Encrypted: Protect sensitive data and store encryption keys in Azure Key Vault](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault?tabs=azure-powershell). You will need Visual Studio for this; you can download [Visual Studio Community for free here](https://visualstudio.microsoft.com/downloads/).

In this module and throughout the activities, you got hands-on experience with many security features that are available for Azure SQL. In the next module, you'll take a look at how performance is different in Azure, and you'll see how you can optimize it in Azure SQL.
<p style="border-bottom: 1px solid lightgrey;"></p>

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<ul>
    <li><a href="url" target="_blank">TODO: Enter courses, books, posts, whatever the student needs to extend their study</a></li>
    <li><a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-overview" target="_blank">Azure SQL Security Documentation</a></li>
    <li><a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-best-practice" target="_blank">Azure SQL Security Best Practices Playbook</a></li>
    <li><a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-aad-security-tutorial" target="_blank">Configure security for Azure SQL Managed Instance</a></li>
    <li><a href="https://docs.microsoft.com/en-us/azure/sql-database/sql-database-security-tutorial" target="_blank">Configure security for Azure SQL Database</a></li>
</ul>

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b>Next Steps</b></p>
@ -2,6 +2,4 @@

These represent demos and examples you can run that accompany Module 3. See [Module 3](../03-Security.md) for details on how to use the files in this module.

> There are currently no files in this folder. All the instructions are contained in the main [Module 3](../03-Security.md) file.
|
@ -11,30 +11,29 @@

> You must complete the [prerequisites](../azuresqlworkshop/00-Prerequisites.md) before completing these activities. You can also choose to audit the materials if you cannot complete the prerequisites. If you were provided an environment to use for the workshop, then you **do not need** to complete the prerequisites.

Depending on the SLA your business requires, Azure SQL has the options you need, including built-in capabilities. In this module, you will learn how to translate your knowledge of backup/restore, Always On failover cluster instances, and Always On availability groups to the options for business continuity in Azure SQL.
In this module, you'll cover these topics:

[5.1](#5.1): Azure SQL high availability basics
[Activity 1](#1): TODO-Turn-key FCIs
[5.2](#5.2): Backup and restore
[Activity X](#X): Restore a deleted database
[Activity 2](#2): Restore to a point in time
[5.3](#5.3): The highest availability
[Activity 3](#3): TODO-Turn-key AGs in Business critical
[Activity 4](#4): Geo-distributed auto-failover groups with read-scale in Business critical

<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="5.1">5.1 Azure SQL high availability basics</a></h2>

TODO: Explain basic architecture of general purpose/business critical/hyperscale for availability

TODO: Topic Description

<br>

<img style="height: 400; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);" src="linkToPictureEndingIn.png">

<br>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="1">Activity 1</a>: TODO-Turn-key FCIs</b></p>

TODO: Activity Description and tasks

@ -48,11 +47,72 @@ TODO: Enter activity steps description with checkbox

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="5.2">5.2 Backup and restore</a></h2>

TODO: Explain how on prem you have to have a plan for DR and a BU/R strategy, but how it's built for you in Azure. Also talk about how it all works, ADR, LTR, etc.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="X">Activity X</a>: TODO: Restore a deleted database</b></p>
TODO: Activity Description and tasks

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Description</b></p>

TODO: Enter activity description with checkbox

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

TODO: Enter activity steps description with checkbox

![](../graphics/deletedb.png)

![](../graphics/deleteddb.png)

![](../graphics/deleteddb2.png)

![](../graphics/deleteddbview.png)
(this takes about 2 minutes to show up)

![](../graphics/restoredb.png)

![](../graphics/restoredb2.png)

![](../graphics/deploynotification.png)

![](../graphics/deployunderway.png)

NOTE: It took 11 minutes to restore the deleted database and 2-3 minutes for it to show up so I could delete it. With roughly 14 minutes of waiting time, I think we should drop this lab; I will mention it at the end of Activity 2.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="2">Activity 2</a>: Undo errors to a point in time</b></p>
In all organizations, big or small, mistakes can happen. That's why you always need a plan for how you will restore to where you need to be. In SQL Server, ideally, you would choose to [restore to a point in time](https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model?view=sql-server-ver15), but you can only do that if you are running in the full recovery model. Under the bulk-logged recovery model, it's more likely that you'll have to recover the database to the end of the transaction log backup.
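In SQL Server, that point-in-time restore is a manual sequence. Here is a sketch with illustrative file and database names (not part of this workshop's environment):

```sql
-- Restore the last full backup without recovering, then roll the log
-- forward and stop just before the "bad" transaction.
RESTORE DATABASE AdventureWorks
    FROM DISK = N'C:\backups\AdventureWorks_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG AdventureWorks
    FROM DISK = N'C:\backups\AdventureWorks_log.trn'
    WITH STOPAT = N'2020-01-31T21:11:42', RECOVERY;
```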
One of the benefits of Azure SQL is that Azure can take care of all of this for you. Since Azure SQL manages your backups and runs in full recovery model, it can restore you to any point in time (you can even [restore a deleted database](https://docs.microsoft.com/en-us/azure/sql-database/sql-database-recovery-using-backups#deleted-database-restore)). In this activity, you'll see how a common error can be recovered using point in time restore (PITR). This is easy to do in the portal or programmatically, but in this activity you'll see how to do it with the Azure CLI.

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

For this activity, you'll use the notebook called **pitr.ipynb**, which is under `azuresqlworkshop\05-Availability\pitr\pitr.ipynb`. Navigate to that file in ADS to complete this activity, and then return here.

<p style="border-bottom: 1px solid lightgrey;"></p>

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="5.3">5.3 The highest availability</a></h2>

TODO: We've shown you basics/how to get data back, now we'll show HA tech, what do you get in BC… Focus on BC here

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="3">Activity 3</a>: TODO-Turn-key AGs in Business critical</b></p>

TODO: Activity Description and tasks

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Description</b></p>

TODO: Enter activity description with checkbox

<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/checkmark.png"><b>Steps</b></p>

TODO: Enter activity steps description with checkbox

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="4">Activity 4</a>: Geo-distributed auto-failover groups with read-scale in Business critical</b></p>

TODO: Activity Description and tasks

Binary data (new files):
AzureSQLWorkshop/azuresqlworkshop/05-Availability/pitr/graphics/completiontime.png (5.5 KiB)
AzureSQLWorkshop/azuresqlworkshop/05-Availability/pitr/graphics/dbrestoring.png (6.4 KiB)
AzureSQLWorkshop/azuresqlworkshop/05-Availability/pitr/graphics/lastgoodq.png (1.8 KiB)
AzureSQLWorkshop/azuresqlworkshop/05-Availability/pitr/graphics/newnewquery.png (5.8 KiB)
AzureSQLWorkshop/azuresqlworkshop/05-Availability/pitr/graphics/salesdetailssms.png (26 KiB)
@ -0,0 +1,368 @@
{
"metadata": {
"kernelspec": {
"name": "powershell",
"display_name": "PowerShell"
},
"language_info": {
"name": "powershell",
"codemirror_mode": "shell",
"mimetype": "text/x-sh",
"file_extension": ".ps1"
}
},
"nbformat_minor": 2,
"nbformat": 4,
"cells": [
{
"cell_type": "markdown",
"source": [
"# Activity 2: Undo errors to a point in time\r\n",
"\r\n",
"#### <i>The Azure SQL Workshop - Module 5</i>\r\n",
"\r\n",
"<p style=\"border-bottom: 1px solid lightgrey;\"></p>\r\n",
"\r\n",
"In this activity, you'll see how a common error can be recovered using point in time restore (PITR). This is easy to do in the portal or programmatically, but in this activity you'll see how to do it with the Azure CLI. \r\n",
"\r\n",
"\r\n",
"**Set up** \r\n",
"\r\n",
"0. You should have opened this file using Azure Data Studio. If you didn't, please refer to Module 2 Activity 3 in the readme.md file to get set up. \r\n",
"1. In the bar at the top of this screen, confirm or change the \"Kernel\" to **PowerShell**. This determines what language the code blocks in the file are. In this case, that language is PowerShell. \r\n",
"2. With the Kernel set to **PowerShell**, \"Attach to\" should read **localhost**. \r\n",
"\r\n",
"Now that you're set up, you should read the text cells and \"Run\" the code cells by selecting the play button that appears at the left of a code cell when you hover over it. \r\n",
" \r\n",
""
],
"metadata": {
"azdata_cell_guid": "cfeaf504-99ef-4000-9481-88ac986f2e4b"
}
},
{
"cell_type": "markdown",
"source": [
"**Step 0 - Delete a table by *accident*** \r\n",
"\r\n",
"First, let's confirm that the table we'll *accidentally* delete does exist and has data in it. Let's take a look at some of the values in `SalesLT.SalesOrderDetail`. \r\n",
"\r\n",
"**Using SSMS**, run the following query and review the results. \r\n",
"\r\n",
"```sql\r\n",
"SELECT TOP 10 * from SalesLT.SalesOrderDetail\r\n",
"```\r\n",
"\r\n",
"![](./graphics/salesdetailssms.png) \r\n",
"\r\n",
"\r\n",
"For whatever reason, let's create a scenario where someone accidentally deletes that table. Today, you will be that someone. \r\n",
"\r\n",
"**Using SSMS**, run the following query.\r\n",
"\r\n",
"```sql\r\n",
"DROP TABLE SalesLT.SalesOrderDetail\r\n",
"```\r\n",
"\r\n",
"Also, copy the `Completion time`. For example, in the below image, you would copy `2020-01-31T17:15:18`. \r\n",
"\r\n",
"![](./graphics/completiontime.png) \r\n",
"\r\n",
"Then, paste the completion time **AND THEN SUBTRACT 2 MINUTES** into the PowerShell cell below and run it, so you can refer to it later. \r\n",
"\r\n",
""
],
"metadata": {
"azdata_cell_guid": "0f550d3c-e7c0-4df9-957b-cfaa950cbbe7"
}
},
{
"cell_type": "markdown",
"source": [
"**Step 1 - Determine the time you need to go back to** \r\n",
"Before you go any further, it's important to understand the recommended process for doing point in time restore (PITR): \r\n",
"\r\n",
"1. Determine the time that you need to go back to. This should be **before** the error or mistake took place. \r\n",
"1. Complete PITR via PowerShell or the Azure portal to go back to this time. This deploys a new database and restores a copy of your database, e.g. **AdventureWorks0406-copy**. \r\n",
"1. Confirm the new database (e.g. **AdventureWorks0406-copy**) is where you need to be. \r\n",
"1. Rename the original database, e.g. **AdventureWorks0406** to **AdventureWorks0406-old**. \r\n",
"1. Rename the new database to the original database name, e.g. **AdventureWorks0406-copy** to **AdventureWorks0406**. \r\n",
"1. Delete the original database, e.g. **AdventureWorks0406-old**. \r\n",
"\r\n",
"In order to complete step 1, you need to know when the last \"good\" transaction occurred, before the \"bad\" one, so you can restore to a point after the last \"good\" transaction but before the \"bad\" one. \r\n",
"\r\n",
"To do this, run the following query in **SSMS**, then note the completion time of the last good query run. In your case, it should be the `SELECT TOP 10 * FROM SalesLT.SalesOrderDetail`. \r\n",
"\r\n",
"```sql\r\n",
"SELECT dest.text, deqs.last_execution_time\r\n",
"FROM sys.dm_exec_query_stats AS deqs\r\n",
"    CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest\r\n",
"ORDER BY \r\n",
"    deqs.last_execution_time DESC\r\n",
"```\r\n",
"\r\n",
"It should be similar to below, but with a different date/time. \r\n",
"\r\n",
"![](./graphics/lastgoodq.png) \r\n",
"\r\n",
"You'll notice, in this example, the date/time is `2020-01-31 21:11:42.993`. The required format is slightly different. Update it, using this example as a reference, and assign it to `$before_error_time` in the cell below. \r\n",
"* SSMS format: `2020-01-31 21:11:42.993`\r\n",
"* Required format: `2020-01-31T21:11:42.993` \r\n",
"\r\n",
"The last part of this step is filling in your Subscription ID and database name information so the rest goes smoothly. \r\n",
"\r\n",
""
],
"metadata": {
"azdata_cell_guid": "0a78a603-f251-4ac2-aadd-3200353db5d0"
}
},
{
"cell_type": "code",
"source": [
"$before_error_time = \"2020-01-31T21:11:42.993\"\r\n",
"\r\n",
"$subscription_id = \"<SubscriptionIdHere>\"\r\n",
"$unique_id = \"<WorkshopUserID>\"\r\n",
"$database_name = \"AdventureWorks$($unique_id)\"\r\n",
"$database_name_copy = \"$($database_name)-copy\"\r\n",
"$database_name_old = \"$($database_name)-old\"\r\n",
"$logical_server = \"aw-server$($unique_id)\"\r\n",
"$resource_group = \"azuresqlworkshop$($unique_id)\""
],
"metadata": {
"azdata_cell_guid": "e66b65ce-c896-4457-97c4-9554c94d2aef",
"tags": []
},
"outputs": [],
"execution_count": 19
},
{
"cell_type": "markdown",
"source": [
"**Step 2 - Complete PITR using the Azure CLI** \r\n",
"\r\n",
"In the next step you'll use `az sql db restore` to restore to before the table was deleted. \r\n",
"\r\n",
""
],
"metadata": {
"azdata_cell_guid": "99d7b0f5-9405-46e5-93ff-5129f79d6227"
}
},
{
"cell_type": "markdown",
"source": [
"First we want to make sure we're logged in and set up to use the Azure CLI locally. "
],
"metadata": {
"azdata_cell_guid": "9fbe71ed-ccf8-4d9a-934a-a5874a080ccf"
}
},
{
"cell_type": "code",
"source": [
"# Log in to the Azure portal with your workshop credentials\r\n",
"# You may get an error initially; run it again, and you should get a pop-up that directs you through authenticating\r\n",
"az login"
],
"metadata": {
"azdata_cell_guid": "12d396b2-92c4-483b-a741-b9aa84807d29",
"tags": []
},
"outputs": [],
"execution_count": 14
},
{
"cell_type": "code",
"source": [
"# Specify your subscription for the workshop\r\n",
"az account set --subscription $subscription_id\r\n",
"\r\n",
"# Confirm you're connected to the correct subscription\r\n",
"az account show"
],
"metadata": {
"azdata_cell_guid": "da44d206-65f3-4749-b186-364ee211b766",
"tags": []
},
"outputs": [],
"execution_count": 21
},
{
"cell_type": "code",
"source": [
"# Specify your default resource group and Azure SQL Database logical server\r\n",
"az configure --defaults group=$resource_group sql-server=$logical_server"
],
"metadata": {
"azdata_cell_guid": "e91e1c9a-7175-45e0-879b-0f6ef150db3c",
"tags": []
},
"outputs": [],
"execution_count": 3
},
{
"cell_type": "markdown",
"source": [
"This next command will take about 10 minutes. This is because, in the background, Azure is deploying a new Azure SQL Database in your Azure SQL Database logical server that has all the same configuration options as the original. After it's deployed, it will then restore the database into that new Azure SQL Database. \r\n",
"\r\n",
"After about 6-8 minutes, you may be able to refresh your view of databases in **SSMS** and see that the database has been deployed and the restore is now in progress. \r\n",
"\r\n",
"![](./graphics/dbrestoring.png) \r\n",
"\r\n",
"Once you see this, it should only be 1-2 minutes more. You will know it is done when the stop button in the code cell below stops spinning and reverts to the standard play button. "
],
"metadata": {
"azdata_cell_guid": "fec1c2b9-9af9-4b41-81c8-365031d6021f"
}
},
{
"cell_type": "code",
"source": [
"# Restore the database to the time before the table was deleted\r\n",
"az sql db restore --dest-name $database_name_copy --name $database_name --time $before_error_time"
],
"metadata": {
"azdata_cell_guid": "828e608f-fadd-40a7-8ac2-1f5394a81412"
},
"outputs": [],
"execution_count": 23
},
{
"cell_type": "markdown",
"source": [
"TODO WHILE YOU WAIT"
],
"metadata": {
"azdata_cell_guid": "5770ca8c-856f-4fb0-b134-f583d6707381"
}
},
{
"cell_type": "markdown",
"source": [
"**Step 3 - Confirm the new database is where you need to be**\r\n",
"\r\n",
"In order to do this, refresh your connection to the Azure SQL Database logical server in SSMS (right-click on the logical server and select **Refresh**). \r\n",
"\r\n",
"Then, right-click on your new database, e.g. **AdventureWorks0406-copy**, and select **New Query**. \r\n",
"\r\n",
"![](./graphics/newnewquery.png) \r\n",
"\r\n",
"Use the following query to confirm the table exists. \r\n",
"\r\n",
"```sql\r\n",
"SELECT TOP 10 * from SalesLT.SalesOrderDetail\r\n",
"```\r\n",
"You should get something similar to the following screenshot, which confirms your database has been restored to where you want it to be. \r\n",
"\r\n",
"![](./graphics/salesdetailssms.png) \r\n",
"\r\n",
"\r\n",
""
],
"metadata": {
"azdata_cell_guid": "b4afdd5e-130c-4dea-a246-8905b9055831"
}
},
{
"cell_type": "markdown",
"source": [
"**Step 4 - Rename the original database** \r\n",
"\r\n",
"This step involves renaming the original database to something similar to **AdventureWorks0406-old** so you can later rename the new database to the original database name. As long as your applications use retry logic, this means no connection strings need to be changed. \r\n",
"\r\n",
"Now, you're very familiar with how to rename databases in SSMS, but here you will see how it can be easily done using the Azure CLI. "
],
"metadata": {
"azdata_cell_guid": "0dfb6493-4a9d-450c-8ae8-d9dfe11b744f"
}
},
{
"cell_type": "code",
"source": [
"az sql db rename --name $database_name --new-name $database_name_old"
],
"metadata": {
"azdata_cell_guid": "f2a96ef9-d39c-4f16-af2d-d41adb56403e"
},
"outputs": [],
"execution_count": 27
},
{
"cell_type": "markdown",
"source": [
"**Step 5 - Rename the new database to the original database name** \r\n",
"\r\n",
"Now that the original database name is no longer taken, you can rename the copy database to that of the original, again using the Azure CLI. \r\n",
""
],
"metadata": {
"azdata_cell_guid": "8c6edfb6-f3e5-4a62-9082-5d9991e89ad1"
}
},
{
"cell_type": "code",
"source": [
"az sql db rename --name $database_name_copy --new-name $database_name"
],
"metadata": {
"azdata_cell_guid": "dd42263b-5296-4ae8-9d69-a2c1cd2b2e1f"
},
"outputs": [],
"execution_count": 5
},
{
"cell_type": "markdown",
"source": [
"**Step 6 - Delete the original database** \r\n",
"\r\n",
"Finally, you have no need for the old database, so you can delete it with `az sql db delete`. \r\n",
""
],
"metadata": {
"azdata_cell_guid": "2c86884d-ba69-4b30-ae06-98c15d8a4e4c"
}
},
{
"cell_type": "code",
"source": [
"az sql db delete --name $database_name_old --yes"
],
"metadata": {
"azdata_cell_guid": "86e9abb5-8db7-460c-ac20-b730a0db7b40"
},
"outputs": [],
"execution_count": 9
},
{
"cell_type": "markdown",
"source": [
"And you can confirm it no longer exists with the following command."
],
"metadata": {
"azdata_cell_guid": "ff220914-8349-455d-9c7f-aa29612fa312"
}
},
{
"cell_type": "code",
"source": [
"az sql db list"
],
"metadata": {
"azdata_cell_guid": "18ef1258-ace5-481c-b3a7-b6962894d2cc"
},
"outputs": [],
"execution_count": 25
},
{
"cell_type": "markdown",
"source": [
"You've now seen how you can leverage point in time restore (PITR) in Azure SQL Database. PITR is also available in Azure SQL Managed Instance, **for individual databases, not the whole instance**. You can use almost the same commands, except with `az sql midb` as opposed to `az sql db`. For more information, see the [documentation](https://docs.microsoft.com/en-us/cli/azure/sql/midb?view=azure-cli-latest#az-sql-midb-restore)."
],
"metadata": {
"azdata_cell_guid": "057514b2-bcdd-4624-8b95-c8ecd829ae07"
}
}
]
}

@ -2,6 +2,6 @@

These represent demos and examples you can run that accompany Module 5. See [Module 5](../05-Availability.md) for details on how to use the files in this module.

## pitr

In this lab, you'll see how you can restore a database to a point in time using the Azure CLI and SSMS. Main notebook file [here](./pitr/pitr.ipynb).
Binary data
AzureSQLWorkshop/graphics/adsconnectdb.png
Before: 25 KiB | After: 25 KiB

Binary data
AzureSQLWorkshop/graphics/basicspane.png
Before: 65 KiB | After: 64 KiB