Resolved merge conflict by incorporating both suggestions.

This commit is contained in:
pmasl 2019-03-26 15:35:10 -07:00
Parents 4d32befbff d5ccd55a36
Commit 5f6dfa585c
120 changed files with 4411 additions and 4714 deletions


@ -1,42 +1,42 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize Your Database with SQL Server 2019
#### <i>A Microsoft Course from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>00 Pre-Requisites</h2>
Pre-requisites for the "Modernize Your Database with SQL Server 2019" workshop are found in each module. At a later date, all pre-requisites will be listed in this module.
*Note that all of the following activities must be completed prior to class - there will not be time to perform these operations during the workshop.*
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b>Activity 1: TBD</b></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png"><b>Option 1 - TBD</b></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b>Activity 2: TBD</b></p>
<br>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b>Activity 2: TODO: Step within Activity.</b></p>
<br>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">TODO: Sub-step<p>
First, ensure all of your updates are current. You can use the following commands to do that in an Administrator-level PowerShell session:
<pre>
TODO: Enter any code typing here
</pre>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<ul>
<li><a href="url" target="_blank">Official Documentation for this section</a></li>
</ul>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="01-WhySQL2019.md" target="_blank"><i>Why SQL Server 2019</i></a>.

@ -1,16 +1,16 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize Your Database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>Why SQL Server 2019?</h2>
This module will be completed at a later date.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="02-IntelligentPerformance.md"
target="_blank"><i> Intelligent Performance</i></a>.


@ -1,60 +1,60 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>Intelligent Performance</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">2.0 Automatic Tuning</a></dt>
<dt><a href="#3-1">2.1 Intelligent Query Processing</a></dt>
<dt><a href="#3-2">2.2 Lightweight Query Processing</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">2.0 Automatic Tuning</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Automatic Tuning</a></b></p>
Follow the instructions for [Automatic Tuning Exercise](Module%202%20Activity%20-%20Intelligent%20Performance/autotune)
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-1">2.1 Intelligent Query Processing</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Intelligent Query Processing</a></b></p>
Follow the instructions for [Intelligent Query Processing Exercise](Module%202%20Activity%20-%20Intelligent%20Performance/iqp)
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-2">2.2 Lightweight Query Profiling</h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Lightweight Query Profiling</a></b></p>
Follow the instructions for [Lightweight Query Profiling Exercise](Module%202%20Activity%20-%20Intelligent%20Performance/lwp)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="03-Security.md" target="_blank"><i> New Security Capabilities</i></a>.

@ -1,39 +1,39 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>New Security Capabilities</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">3.0 Static Data Masking</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">3.0 Static Data Masking</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Static Data Masking</a></b></p>
Follow the instructions for [Static Data Masking Exercise](Module%203%20Activity%20-%20Security/staticmask)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="04-MissionCriticalAvailability.md" target="_blank"><i>Mission Critical Availability</i></a>.

@ -1,38 +1,38 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>Mission Critical Availability</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">4.0 Accelerated Database Recovery</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">4.0 Accelerated Database Recovery</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Accelerated Database Recovery</a></b></p>
Follow the instructions for [Accelerated Database Recovery Exercise](Module%204%20Activity%20-%20MIssion%20Critical%20Availability/adr)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="05-ModernDevelopmentPlatform.md" target="_blank"><i> Modern Development Platform</i></a>.

@ -1,49 +1,49 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>Modern Development Platform</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">5.0 Python</a></dt>
<dt><a href="#3-1">5.1 Java</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">5.0 Python</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Python</a></b></p>
Follow the instructions for [Python Exercise](Module%205%20Activity%20-%20Modern%20Development%20Platform/python)
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-1">5.1 Java</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Java</a></b></p>
Follow the instructions for [Java Exercise](Module%205%20Activity%20-%20Modern%20Development%20Platform/java)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="06-SQLLinux.md" target="_blank"><i>SQL Server on Linux</i></a>.

@ -1,49 +1,49 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>SQL Server on Linux</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">6.0 Deploy SQL Server on Linux</a></dt>
<dt><a href="#3-1">6.1 Explore SQL Server on Linux</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">6.0 Deploy SQL Server on Linux</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Deploy SQL Server on Linux</a></b></p>
Follow the instructions for [Deploy SQL Server on Linux Exercise](Module%206%20Activity%20-%20SQL%20Server%20on%20Linux/deploy)
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-1">6.1 Explore SQL Server on Linux</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Explore SQL Server on Linux</a></b></p>
Follow the instructions for [Explore SQL Server on Linux Exercise](Module%206%20Activity%20-%20SQL%20Server%20on%20Linux/explore)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="07-SQLContainers.md" target="_blank"><i>SQL Server Containers and Kubernetes</i></a>.

@ -1,49 +1,49 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>SQL Server Containers and Kubernetes</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">7.0 SQL Server Containers Fundamentals</a></dt>
<dt><a href="#3-1">7.1 Updating and Upgrading with SQL Containers</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">7.0 SQL Server Containers Fundamentals</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: SQL Server Containers Fundamentals</a></b></p>
Follow the instructions for [SQL Server Containers Fundamentals Exercise](Module%207%20Activity%20-%20SQL%20Server%20Containers/sqlcontainers)
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-1">7.1 Updating and Upgrading with SQL Containers</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Updating and Upgrading with SQL Containers</a></b></p>
Follow the instructions for [Updating and Upgrading with SQL Containers Exercise](Module%207%20Activity%20-%20SQL%20Server%20Containers/sqlcontainerupdate)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="08-DataVirtualization.md" target="_blank"><i>Data Virtualization</i></a>.

@ -1,49 +1,49 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>SQL Server Data Virtualization</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">8.0 SQL Server Polybase</a></dt>
<dt><a href="#3-1">8.1 A SQL Server Data Hub</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">8.0 SQL Server Polybase</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: SQL Server Polybase</a></b></p>
Follow the instructions for [SQL Server Polybase Exercise](Module%208%20Activity%20-%20Data%20Virtualization/polybase)
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-1">8.1 A SQL Server Data Hub</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: 8.1 A SQL Server Data Hub</a></b></p>
Follow the instructions for [SQL Server Data Hub Exercise](Module%208%20Activity%20-%20Data%20Virtualization/sqldatahub)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="09-WhatElseIsNew.md" target="_blank"><i>What Else is New in SQL Server 2019</i></a>.

@ -1,37 +1,37 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>What else is new in SQL Server 2019</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">9.0 TBD</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">9.0 TBD</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: TBD</a></b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/geopin.png"><b >Next Steps</b></p>
Next, Continue to <a href="10-MigratingAndNextSteps.md" target="_blank"><i>Migrating to SQL Server 2019 and Next Steps</i></a>.

@ -1,31 +1,31 @@
![](../graphics/microsoftlogo.png)
# Workshop: Modernize your database with SQL Server 2019
#### <i>A Microsoft workshop from the SQL Server team</i>
<p style="border-bottom: 1px solid lightgrey;"></p>
<img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/textbubble.png"> <h2>Migrating to SQL Server 2019 and Next Steps</h2>
You'll cover the following topics in this Module:
<dl>
<dt><a href="#3-0">10.0 Query Tuning Assistant</a></dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">10.0 Query Tuning Assistant</a></h2>
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/point1.png"><b><a name="aks">Activity: Query Tuning Assistant</a></b></p>
Follow the instructions for [Query Tuning Assistant Exercise](Module%2010%20Activity%20-%20Migrating%20to%20SQL%20Server%202019/qta)
<p style="border-bottom: 1px solid lightgrey;"></p>
<p><img style="margin: 0px 15px 15px 0px;" src="../graphics/owl.png"><b>For Further Study</b></p>

@ -1,9 +1,9 @@
# Using Query Tuning Assistant for Post Migration Optimization
This exercise will show you how to use the new Query Tuning Assistant (QTA) to optimize performance after migrating to SQL Server 2019.
To perform this exercise:
1. Download the following zip file to your computer running SQL Server, or to a client computer that can connect to SQL Server: https://github.com/Microsoft/tigertoolbox/blob/master/Sessions/Winter-Ready-2019/Labs/Lab-QTA.zip.
2. Read through the instructions at https://github.com/Microsoft/tigertoolbox/blob/master/Sessions/Winter-Ready-2019/Lab-QTA.md to perform the lab.

@ -1,7 +1,7 @@
# Module 10 Activities - Migrating to SQL Server 2019
These are demos and examples to show you migration techniques for SQL Server. Right now this Module only includes exercises for post-migration optimization using the Query Tuning Assistant (QTA).
## qta
Learn how to use the Query Tuning Assistant (QTA) to assist with post-migration optimizations.


@ -1,4 +1,4 @@
use WideWorldImporters
go
exec auto_tune
go


@ -1,4 +1,4 @@
use WideWorldImporters
go
exec initialize
go


@ -1,36 +1,36 @@
# Automatic Tuning with SQL Server
This is a repro package to demonstrate Automatic Tuning (Automatic Plan Correction) in SQL Server 2017. This feature uses telemetry from the Query Store feature, launched with Azure SQL Database and SQL Server 2016, to provide built-in intelligence.
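Because automatic plan correction builds on Query Store, it can help to verify that Query Store is enabled on the WideWorldImporters database (restored below) before you start. This is only an illustrative check, not one of the repro scripts; the WideWorldImporters sample normally ships with Query Store already on:

```sql
USE WideWorldImporters;
GO
-- Query Store should report READ_WRITE for automatic plan correction to collect and force plans
SELECT actual_state_desc, readonly_reason
FROM sys.database_query_store_options;
GO
```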
## Requirements
This repro assumes the following:
- SQL Server 2017 installed on Windows (select at least the Database Engine during setup). This feature requires Developer or Enterprise Edition.
- You have installed SQL Server Management Studio or SQL Operations Studio (https://docs.microsoft.com/en-us/sql/sql-operations-studio/download)
- You have downloaded the RML Utilities from https://www.microsoft.com/en-us/download/details.aspx?id=4511.
- These demos use a named instance called SQL2017. You will need to edit the .cmd scripts which connect to SQL Server to change to a default instance or whatever named instance you have installed.
- Install ostress from the package RML_Setup_AMD64.msi. Add C:\Program Files\Microsoft Corporation\RMLUtils to your path.
- Restore the WideWorldImporters database backup to your SQL Server 2017 instance. The WideWorldImporters-Full.bak is provided along with a **restorewwi.sql** script to restore the database. This script assumes the backup is in the C:\sql_sample_databases directory and that all database files will be placed in c:\temp. Change the location for the backup and your files as needed.
## Demo Steps
1. Run **repro_setup.cmd** to customize the WideWorldImporters database for the demo. You will only need to run this one time after restoring the backup.
2. Set up Performance Monitor on Windows to track SQL Statistics/Batch Requests/sec.
3. Run **initialize.cmd** to set up the repro with recommendations only (the default). If you restart the demo from the beginning, you can run this again to "reset" the demo.
4. Run **report.cmd** to start the workload. This will pop up a command window running the workload. Note the chart showing Batch Requests/sec as your workload throughput.
5. Run **regression.cmd** (you may need to run this a few times for timing reasons). Notice the drop in batch requests/sec, which shows a performance regression in your workload.
6. Load **recommendations.sql** into SQL Server Management Studio or SQL Operations Studio and review the results. Notice the time difference under the reason column and the value of state_transition_reason, which should be AutomaticTuningOptionNotEnabled. This means a regression was found but is only being recommended, not automatically fixed. The script column shows a query that could be used to fix the problem.
7. Stop the **report.cmd** workload by pressing <Ctrl>+<C> in the command window and pressing 'y' to stop. This should close that command window.
8. Now let's see what happens with automatic plan correction. Run **auto_tune.cmd**, which sets automatic plan correction ON for WideWorldImporters.
9. Repeat steps 4-7 as above. In Performance Monitor you will see batch requests/sec dip but go right back up within a second. This is because SQL Server detected the regression and automatically reverted to the "last known good" query plan as found in the Query Store. Note that in the output of recommendations.sql the state_transition_reason now says LastGoodPlanForced.
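As a quick sanity check between steps, you can also confirm whether FORCE_LAST_GOOD_PLAN is currently on for the database. This illustrative query is not one of the demo scripts:

```sql
USE WideWorldImporters;
GO
-- Desired vs. actual state of FORCE_LAST_GOOD_PLAN: OFF after initialize.cmd, ON after auto_tune.cmd
SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;
GO
```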


@ -1,14 +1,14 @@
use WideWorldImporters
go
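-- List Query Store tuning recommendations: the reason, current state, the script that would force the last known good plan, and the plan details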
SELECT reason, score,
JSON_VALUE(state, '$.currentValue') state,
JSON_VALUE(state, '$.reason') state_transition_reason,
JSON_VALUE(details, '$.implementationDetails.script') script,
planForceDetails.*
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON (Details, '$.planForceDetails')
WITH ( [query_id] int '$.queryId',
[new plan_id] int '$.regressedPlanId',
[forcedPlanId] int '$.forcedPlanId'
) as planForceDetails;
go


@ -1,4 +1,4 @@
use WideWorldImporters
go
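-- Trigger the plan regression: the regression proc (created by the setup script) clears the plan cache and runs the report with @packagetypeid = 1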
exec regression
go


@ -1,5 +1,5 @@
use WideWorldImporters
go
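-- One run of the report workload with package type 7 (report.cmd keeps this running until you stop it with Ctrl+C)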
declare @packagetypeid int = 7;
exec dbo.report @packagetypeid
go


@ -1,7 +1,7 @@
restore database WideWorldImporters from disk = 'C:\sql_sample_databases\WideWorldImporters-Full.bak' with
move 'WWI_Primary' to 'c:\temp\WideWorldImporters.mdf',
move 'WWI_UserData' to 'c:\temp\WideWorldImporters_UserData.ndf',
move 'WWI_Log' to 'c:\temp\WideWorldImporters.ldf',
move 'WWI_InMemory_Data_1' to 'c:\temp\WideWorldImporters_InMemory_Data_1',
stats=5
go


@ -1,47 +1,47 @@
use wideworldimporters
go
DROP procedure IF EXISTS [dbo].[initialize]
go
CREATE procedure [dbo].[initialize]
as begin
DBCC FREEPROCCACHE;
ALTER DATABASE current SET QUERY_STORE CLEAR ALL;
ALTER DATABASE current SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = OFF);
end
GO
DROP procedure IF EXISTS [dbo].[auto_tune]
go
CREATE procedure [dbo].[auto_tune]
as begin
ALTER DATABASE current SET AUTOMATIC_TUNING ( FORCE_LAST_GOOD_PLAN = ON);
DBCC FREEPROCCACHE;
ALTER DATABASE current SET QUERY_STORE CLEAR ALL;
end
GO
DROP PROCEDURE IF EXISTS [dbo].[report]
go
CREATE PROCEDURE [dbo].[report] ( @packagetypeid INT )
AS
BEGIN
SELECT AVG([UnitPrice] * [Quantity] - [TaxRate])
FROM [Sales].[OrderLines]
WHERE [PackageTypeID] = @packagetypeid;
END;
GO
DROP PROCEDURE IF EXISTS [dbo].[regression]
go
CREATE PROCEDURE [dbo].[regression]
AS
BEGIN
DBCC FREEPROCCACHE;
BEGIN
DECLARE @packagetypeid INT = 1;
EXEC [report] @packagetypeid;
END;
END;
GO


@ -1,40 +1,40 @@
use WideWorldImporters
go
select * from Sales.InvoiceLines
go
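-- Note: the next two statements set compatibility level 150 and then 130, leaving the database at 130; repro_130.cmd and repro_150.cmd switch the level for each timed run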
alter database wideworldimporters set compatibility_level = 150
alter database wideworldimporters set compatibility_level = 130
create or alter proc defercompile
as
begin
declare @ilines table
( [InvoiceLineID] [int] NOT NULL primary key,
[InvoiceID] [int] NOT NULL,
[StockItemID] [int] NOT NULL,
[Description] [nvarchar](100) NOT NULL,
[PackageTypeID] [int] NOT NULL,
[Quantity] [int] NOT NULL,
[UnitPrice] [decimal](18, 2) NULL,
[TaxRate] [decimal](18, 3) NOT NULL,
[TaxAmount] [decimal](18, 2) NOT NULL,
[LineProfit] [decimal](18, 2) NOT NULL,
[ExtendedPrice] [decimal](18, 2) NOT NULL,
[LastEditedBy] [int] NOT NULL,
[LastEditedWhen] [datetime2](7) NOT NULL
)
insert into @ilines select * from sales.InvoiceLines
select i.CustomerID, sum(il.LineProfit)
from Sales.Invoices i
inner join @ilines il
on i.InvoiceID = il.InvoiceID
group by i.CustomerID
end
go
exec defercompile

@ -1,34 +1,34 @@
USE WideWorldImporters
GO
CREATE or ALTER PROCEDURE defercompile
AS
BEGIN
-- Declare the table variable
DECLARE @ilines TABLE
( [InvoiceLineID] [int] NOT NULL primary key,
[InvoiceID] [int] NOT NULL,
[StockItemID] [int] NOT NULL,
[Description] [nvarchar](100) NOT NULL,
[PackageTypeID] [int] NOT NULL,
[Quantity] [int] NOT NULL,
[UnitPrice] [decimal](18, 2) NULL,
[TaxRate] [decimal](18, 3) NOT NULL,
[TaxAmount] [decimal](18, 2) NOT NULL,
[LineProfit] [decimal](18, 2) NOT NULL,
[ExtendedPrice] [decimal](18, 2) NOT NULL,
[LastEditedBy] [int] NOT NULL,
[LastEditedWhen] [datetime2](7) NOT NULL
)
-- Insert all the rows from InvoiceLines into the table variable
INSERT INTO @ilines SELECT * FROM Sales.InvoiceLines
-- Find my total profit by customer
SELECT i.CustomerID, SUM(il.LineProfit)
FROM Sales.Invoices i
INNER JOIN @ilines il
ON i.InvoiceID = il.InvoiceID
GROUP By i.CustomerID
END
GO


@ -1,11 +1,11 @@
USE WideWorldImporters
GO
SELECT qsp.plan_id, qsp.compatibility_level,
avg(qsrs.avg_duration)/1000 as avg_duration_ms, avg(qsrs.avg_logical_io_reads) as avg_logical_io
FROM sys.query_store_plan qsp
INNER JOIN sys.query_store_runtime_stats qsrs
ON qsp.plan_id = qsrs.plan_id
AND qsp.query_id = 41998 -- Put in your query_id here
GROUP BY qsp.plan_id, qsp.compatibility_level
GO

@ -1,27 +1,27 @@
# SQL Server demo for Intelligent Query Processing - Deferred Table Variable Compilation
## Requirements
You must first install the following for this demo
- SQL Server 2019 CTP 2.0 or greater on Windows Server. While you can run the basics of this demo on SQL Server on Linux, the full effect of the demo requires Windows Performance Monitor, so I recommend you use SQL Server on Windows Server.
- SQL Server client tools (e.g. sqlcmd)
- SQL Server Management Studio 18.0 (SSMS) installed
- RML Utilities installed (ostress) from https://www.microsoft.com/en-us/download/details.aspx?id=4511
- A copy of the WideWorldImporters backup from https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak
## Demo Steps
1. Restore the WWI backup. Use the **restorewwi.sql** as a template.
2. Run **setup_repro.cmd** to install the new stored procedure in WideWorldImporters.
3. Examine the details of the new procedure in **proc.sql**.
4. Run **repro_130.cmd** and observe the total duration. It should take around 30 seconds or more.
5. Run **repro_150.cmd** and observe the total duration. This is the exact same workload as step 4, except with database compatibility level 150. This should take around 10 seconds.
6. Using SSMS, observe the performance of this query and differences in plan and average execution time using Query Store Reports, Top Resource Queries.
7. Optionally use query_plan_diff.sql to observe the differences in Query Store. You need to substitute the proper query_id from the report in SSMS.
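If you are not sure which query_id to substitute into **query_plan_diff.sql**, one option (an illustrative query, not one of the demo files) is to search Query Store for the statement text from the defercompile procedure:

```sql
USE WideWorldImporters;
GO
-- Find the Query Store query_id for the SELECT inside dbo.defercompile
SELECT q.query_id, qt.query_sql_text
FROM sys.query_store_query_text AS qt
INNER JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
WHERE qt.query_sql_text LIKE N'%LineProfit%'
  AND qt.query_sql_text LIKE N'%@ilines%';
GO
```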


@ -1,8 +1,8 @@
USE master
GO
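-- Compatibility level 130: table variable deferred compilation is not used, so this is the slower baseline run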
ALTER DATABASE wideworldimporters SET compatibility_level = 130
go
USE WideWorldImporters
go
EXEC defercompile
go


@ -1,8 +1,8 @@
USE master
GO
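-- Compatibility level 150: enables table variable deferred compilation, so the same workload should run noticeably faster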
ALTER DATABASE wideworldimporters SET compatibility_level = 150
go
USE WideWorldImporters
go
EXEC defercompile
go


@ -1,7 +1,7 @@
restore database WideWorldImporters from disk = 'd:\sql_sample_databases\WideWorldImporters-Full.bak' with
move 'WWI_Primary' to 'd:\sql_sample_databases\WideWorldImporters.mdf',
move 'WWI_UserData' to 'd:\sql_sample_databases\WideWorldImporters_UserData.ndf',
move 'WWI_Log' to 'd:\sql_sample_databases\WideWorldImporters.ldf',
move 'WWI_InMemory_Data_1' to 'd:\sql_sample_databases\WideWorldImporters_InMemory_Data_1',
stats=5
restore database WideWorldImporters from disk = 'd:\sql_sample_databases\WideWorldImporters-Full.bak' with
move 'WWI_Primary' to 'd:\sql_sample_databases\WideWorldImporters.mdf',
move 'WWI_UserData' to 'd:\sql_sample_databases\WideWorldImporters_UserData.ndf',
move 'WWI_Log' to 'd:\sql_sample_databases\WideWorldImporters.ldf',
move 'WWI_InMemory_Data_1' to 'd:\sql_sample_databases\WideWorldImporters_InMemory_Data_1',
stats=5
go

alter database wideworldimporters set compatibility_level = 130
go

alter database wideworldimporters set compatibility_level = 150
go

# SQL Server Demos for Intelligent Query Processing in SQL Server
These are demos you can use to show the Intelligent Query Processing capabilities in SQL Server 2017 and 2019
## deferredtablevariable
This is a demo to show the new cardinality estimation for table variables called deferred compilation.

--Run this in a different session than the session in which your query is running.
--Note that you may need to change session id 91 below to the session id you want to monitor.
SELECT node_id,physical_operator_name, SUM(row_count) row_count,
SUM(estimate_row_count) AS estimate_row_count,
CAST(SUM(row_count)*100 AS float)/SUM(estimate_row_count) as percent_complete
FROM sys.dm_exec_query_profiles
WHERE session_id=91
GROUP BY node_id,physical_operator_name
ORDER BY node_id;

--alter database wideworldimporters set compatibility_level = 150
--go
use wideworldimporters
go
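-- Note: the join predicate below compares si.InvoiceID to itself instead of sil.InvoiceID,
-- which effectively produces a huge cross join. Per the accompanying README this bug is
-- intentional so the query runs long enough to observe with the live execution plan and
-- sys.dm_exec_query_profiles.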
select si.CustomerID, sil.LineProfit
from Sales.Invoices si
join Sales.InvoiceLines sil
on si.InvoiceID = si.InvoiceID
option (maxdop 1)
go

# SQL Server Demo Lightweight Query Profiling
This is a demo to show the feature of Lightweight Query Profiling which is on by default in SQL Server 2019
## Requirements
- Install SQL Server 2019 CTP 2.0 or higher
- Restore the WideWorldImporters backup from https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak
- Install SQL Server Management Studio 18.0 or higher
## Demo Steps
1. Open up SSMS and Activity Monitor
2. Run the script **mysmartquery.cmd**
3. In Activity Monitor, find the process for this query in the Processes section, right-click it, and choose Show Live Execution Plan.
4. You should see a live view of the plan in execution at the operator level. Note the query has a bug in its join predicate (it compares si.InvoiceID to itself), which is what causes the runaway behavior.
5. Load **dm_exec_query_profiles.sql** and fill in the correct session_id. Note you can see the same info through a DMV.
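To use the DMV route in step 5 you need the session_id of the running workload; one simple way to find it (assuming the demo query is the only other active request on the instance) is:

```sql
SELECT r.session_id, r.status, r.command, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
```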

# Module 2 Activities - Intelligent Performance
These represent demos and examples you can run to see the behavior of Intelligent Performance in SQL Server 2019. Note the autotune demo will work on SQL Server 2017.
## autotune
Show the capabilities of Automatic Tuning (Automatic Plan Correction)
## iqp
Show the capabilities of Intelligent Query Processing including deferred table variable compilation.
## lwp
Show the capabilities of "on by default" Lightweight Query Profiling

# Module 2 Activities - Security
These represent demos and examples you can run to see new capabilities for security in SQL Server 2019.
## staticmask
Show the capabilities of Static Data Masking using SSMS 18.0 against SQL Server 2019

USE master
GO
DROP DATABASE IF EXISTS HumanResources
GO
CREATE DATABASE HumanResources
GO
USE HumanResources
GO
DROP TABLE IF EXISTS Employees
GO
CREATE TABLE Employees
(EmployeeID int primary key clustered,
EmployeeName nvarchar(100) not null,
EmployeeSSN nvarchar(20) not null,
EmployeeEmail nvarchar(100) not null,
EmployeePhone nvarchar(20) not null,
EmployeeHireDate datetime not null
)
INSERT INTO Employees VALUES (1, 'Bob Ward', '123-457-6891', 'bward@microsoft.com', '817-455-0111', '10/27/1993')
INSERT INTO Employees VALUES (2, 'Dak Prescott', '256-908-1234', 'dakprescott@dallascowboys.com', '214-123-9999', '08/01/2016')
INSERT INTO Employees VALUES (3, 'Ryan Ward', '569-28-9123', 'ryan.ward@baylor.edu', '817-623-2391', '03/27/1996')
INSERT INTO Employees VALUES (4, 'Ginger Ward', '971-11-2378', 'ginger.ward@outlook.com', '817-455-9872', '01/01/2000')
INSERT INTO Employees VALUES (5, 'Troy Ward', '567-12-9291', 'troy.ward@tulane.edu', '682-111-2391', '08/30/1993')
GO

<?xml version="1.0" encoding="UTF-8"?>
<datamasking method="in-place">
<input dbname="HumanResources" />
<output dbname="" />
<masked-columns>
<table schema="dbo" name="Employees">
<column name="EmployeeEmail" maskType="stringComposite" pattern="(\l){5,10}" regex="([\w|.|+]+)@[\w|.]+" backupPattern="" />
<column name="EmployeeHireDate" maskType="singleValue" value="01/01/1900" />
<column name="EmployeeName" maskType="shuffle" />
<column name="EmployeePhone" maskType="stringComposite" pattern="\((\d){3}) (\d){3}-(\d){4}" />
<column name="EmployeeSSN" maskType="stringComposite" pattern="(\d){3}-(\d){2}-(\d){4}" />
</table>
</masked-columns>
</datamasking>

# Static Data Masking with SQL Server
This demo is to show the Static Data Masking feature with SQL Server which is documented at https://docs.microsoft.com/en-us/sql/relational-databases/security/static-data-masking?view=sql-server-2017
## Requirements
- Install SQL Server Management Studio (SSMS) 18.0 Preview 6 or higher
- Install SQL Server 2019 (TODO: We are checking on whether this will work with previous versions)
## Demo Steps
1. Run the script **createhr.sql** to create the database and populate it with data.
2. In SSMS, right click on the HumanResources database and select Tasks/Mask Database (Preview)
3. In the dialog box that appears, select Load Config and choose the **hrmasking.xml** file provided.
4. Select the Configure options to see how the data will be masked.
5. In Step 3 of the dialog, enter HRMasked as the Masked Database name.
6. Click OK. This will take a few minutes to run. The tool is taking a backup of the current database, restoring it, and modifying the data to be masked.
7. Compare the data in the original database to the new masked database.
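For step 7, a simple way to compare the two databases (assuming you entered HRMasked as the masked database name in step 5) is to query the same columns from both copies:

```sql
SELECT TOP (5) EmployeeName, EmployeeSSN, EmployeeEmail, EmployeePhone, EmployeeHireDate
FROM HumanResources.dbo.Employees;

SELECT TOP (5) EmployeeName, EmployeeSSN, EmployeeEmail, EmployeePhone, EmployeeHireDate
FROM HRMasked.dbo.Employees;
```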

-- Let's cause rollback with server startup
-- First CHECKPOINT the database
USE gocowboys
GO
CHECKPOINT
GO
USE master
GO
-- SHUTDOWN WITH NO WAIT
SHUTDOWN WITH NOWAIT
GO

-- Make sure ADR is OFF
--
USE master
GO
ALTER DATABASE gocowboys SET ACCELERATED_DATABASE_RECOVERY = OFF
GO
-- Try to delete a bunch of rows
-- Should take about 30 secs
--
USE gocowboys
GO
BEGIN TRAN
DELETE from howboutthemcowboys
GO
-- What is the log space usage
SELECT * FROM sys.dm_db_log_space_usage
go
-- Does checkpoint truncate the log?
--
CHECKPOINT
GO
SELECT * FROM sys.dm_db_log_space_usage
go
-- Try to roll it back and measure the time
ROLLBACK TRAN
GO
-- What is the log space usage
SELECT * FROM sys.dm_db_log_space_usage
go
-- Does checkpoint truncate the log?
--
CHECKPOINT
GO
SELECT * FROM sys.dm_db_log_space_usage
go
-- Now try it with ADR
--
USE master
GO
ALTER DATABASE gocowboys SET ACCELERATED_DATABASE_RECOVERY = ON
GO
-- Try to delete a bunch of rows and roll it back
--
USE gocowboys
GO
BEGIN TRAN
DELETE from howboutthemcowboys
GO
-- What is the log space usage
SELECT * FROM sys.dm_db_log_space_usage
go
-- Try to roll it back and measure the time
-- 0 secs!
ROLLBACK TRAN
GO
-- What is the log space usage
SELECT * FROM sys.dm_db_log_space_usage
go
-- Clear ADR
--
USE master
GO
ALTER DATABASE gocowboys SET ACCELERATED_DATABASE_RECOVERY = OFF
GO

-- Try to delete a bunch of rows
-- Should take about 30 secs
--
USE gocowboys
GO
BEGIN TRAN
DELETE from howboutthemcowboys
GO
-- Let's cause rollback with server startup
-- First CHECKPOINT the database
CHECKPOINT
GO
-- SHUTDOWN WITH NO WAIT
SHUTDOWN WITH NOWAIT
GO
-- Now try it with ADR
--
USE master
GO
ALTER DATABASE gocowboys SET ACCELERATED_DATABASE_RECOVERY = ON
GO
-- Try to delete a bunch of rows and roll it back
--
USE gocowboys
GO
BEGIN TRAN
DELETE from howboutthemcowboys
GO
-- Try to roll it back and measure the time
-- 0 secs!
ROLLBACK TRAN
GO
-- Clear ADR
--
USE master
GO
ALTER DATABASE gocowboys SET ACCELERATED_DATABASE_RECOVERY = OFF
GO

# Accelerated Database Recovery in SQL Server
This demo shows the Accelerated Database Recovery (ADR) feature in SQL Server 2019.
## Requirements
- Install SQL Server 2019 CTP 2.3 or higher
## Demo Steps
1. Run the statements in **setup_database.sql**
2. Follow the statements in **delete_rollback.sql**
3. Follow the steps in **delete_undo_recovery.sql**. Use the **checkpoint_and_shutdown.sql** script in another session to checkpoint and shut down SQL Server.
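While working through the scripts, you can confirm which mode the demo database is currently in; sys.databases exposes an is_accelerated_database_recovery_on column in SQL Server 2019 builds:

```sql
SELECT name, is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = 'gocowboys';
```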

USE master
GO
DROP DATABASE IF EXISTS gocowboys
GO
CREATE DATABASE gocowboys
ON PRIMARY
(NAME = N'gocowboys_primary', FILENAME = 'c:\data\gocowboys.mdf', SIZE = 10Gb , MAXSIZE = UNLIMITED, FILEGROWTH = 65536KB)
LOG ON
(NAME = N'gocowboys_Log', FILENAME = 'c:\data\gocowboys_log.ldf', SIZE = 10Gb , MAXSIZE = UNLIMITED , FILEGROWTH = 65536KB)
GO
ALTER DATABASE gocowboys SET RECOVERY SIMPLE
GO
USE gocowboys
GO
DROP TABLE IF EXISTS howboutthemcowboys
GO
CREATE TABLE howboutthemcowboys (playerid int primary key clustered, playername char(7000) not null)
GO
SET NOCOUNT ON
GO
BEGIN TRAN
DECLARE @x int
SET @x = 0
WHILE (@x < 100000)
BEGIN
INSERT INTO howboutthemcowboys VALUES (@x, 'All players are great')
SET @x = @x + 1
END
COMMIT TRAN
GO
SET NOCOUNT OFF
GO
USE master
GO

# Module 4 Activities - Mission Critical Availability
These represent demos and examples you can run to see new capabilities for mission critical availability in SQL Server 2019.
## adr
Show the capabilities of Accelerated Database Recovery in SQL Server 2019

package pkg;
//This object represents one input row
public class InputRow {
public final int id;
public final String text;
public InputRow(final int id, final String text) {
this.id = id;
this.text = text;
}
}

USE JavaTest
GO
DECLARE @myClassPath nvarchar(50)
DECLARE @n int
--This is where you store your classes or jars.
--Update this to your own classpath
SET @myClassPath = N'C:\java'
--This is the size of the ngram
SET @n = 3
EXEC sp_execute_external_script
@language = N'Java'
, @script = N'pkg.Ngram.getNGrams'
, @input_data_1 = N'SELECT id, text FROM reviews'
, @parallel = 0
, @params = N'@CLASSPATH nvarchar(50), @param1 INT'
, @CLASSPATH = @myClassPath
, @param1 = @n
with result sets ((ID int, ngram varchar(20)))
GO

//We will package our classes in a package called pkg
//Packages are optional in Java-SQL, but required for this sample.
package pkg;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
public class Ngram {
//Required: This is only required if you are passing data in @input_data_1
//from SQL Server in sp_execute_external_script
public static int[] inputDataCol1 = new int[1];
public static String[] inputDataCol2 = new String[1];
//Required: Input null map. Size just needs to be set to "1"
public static boolean[][] inputNullMap = new boolean[1][1];
//Required: Output data columns returned back to SQL Server
public static int[] outputDataCol1;
public static String[] outputDataCol2;
//Required: Output null map. Is populated with true or false values
//to indicate nulls
public static boolean[][] outputNullMap;
//Optional: This is only required if parameters are passed with @params
// from SQL Server in sp_execute_external_script
// n is giving us the size of ngram substrings
public static int param1;
//Optional: The number of rows we will be returning
public static int numberOfRows;
//Required: Number of output columns returned
public static short numberOfOutputCols;
/*Java main method - Only for testing purposes outside of SQL Server
public static void main(String... args) {
//getNGrams();
}*/
//This is the method we will be calling from SQL Server
public static void getNGrams() {
System.out.println("inputDataCol1.length= "+ inputDataCol1.length);
if (inputDataCol1.length == 0 ) {
// TODO: Set empty return
return;
}
//Using a stream to "loop" over the input data inputDataCol1.length. You can also use a for loop for this.
final List<InputRow> inputDataSet = IntStream.range(0, inputDataCol1.length)
.mapToObj(i -> new InputRow(inputDataCol1[i], inputDataCol2[i]))
.collect(Collectors.toList());
//Again, we are using a stream to loop over data
final List<OutputRow> outputDataSet = inputDataSet.stream()
// Generate ngrams of size n for each incoming string
// Each invocation of ngrams returns a list. flatMap flattens
// the resulting list-of-lists to a flat list.
.flatMap(inputRow -> ngrams(param1, inputRow.text).stream().map(s -> new OutputRow(inputRow.id, s)))
.collect(Collectors.toList());
//Print the outputDataSet
System.out.println(outputDataSet);
//Set the number of rows and columns we will be returning
numberOfOutputCols = 2;
numberOfRows = outputDataSet.size();
outputDataCol1 = new int[numberOfRows]; // ID column
outputDataCol2 = new String[numberOfRows]; //The ngram column
outputNullMap = new boolean[2][numberOfRows];// output null map
//Since we don't have any null values, we will populate all values in the outputNullMap to false
IntStream.range(0, numberOfRows).forEach(i -> {
final OutputRow outputRow = outputDataSet.get(i);
outputDataCol1[i] = outputRow.id;
outputDataCol2[i] = outputRow.ngram;
outputNullMap[0][i] = false;
outputNullMap[1][i] = false;
});
}
// Example: ngrams(3, "abcde") = ["abc", "bcd", "cde"].
private static List<String> ngrams(int n, String text) {
return IntStream.range(0, text.length() - n + 1)
.mapToObj(i -> text.substring(i, i + n))
.collect(Collectors.toList());
}
}

package pkg;
//This object represents one output row
public class OutputRow {
public final int id;
public final String ngram;
public OutputRow(final int id, final String ngram) {
this.id = id;
this.ngram = ngram;
}
@Override
public String toString() { return id + ":" + ngram; }
}

rmdir /s /q c:\java\pkg
mkdir c:\java\pkg
"C:\Program Files\Java\jdk1.8.0_181\bin\javac" Ngram.java InputRow.java OutputRow.java
copy *.class c:\java\pkg

-- Enable external scripts.
-- No restart is required in SQL Server 2019!
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
EXEC sp_configure 'external scripts enabled', 1
GO
RECONFIGURE
GO
-- Create a database and populate some data
--
DROP DATABASE IF EXISTS JavaTest
GO
CREATE DATABASE JavaTest
GO
USE JavaTest
GO
DROP TABLE IF exists reviews;
GO
CREATE TABLE reviews(
id int NOT NULL,
"text" nvarchar(30) NOT NULL)
INSERT INTO reviews(id, "text") VALUES (1, 'AAA BBB CCC DDD EEE FFF')
INSERT INTO reviews(id, "text") VALUES (2, 'GGG HHH III JJJ KKK LLL')
INSERT INTO reviews(id, "text") VALUES (3, 'MMM NNN OOO PPP QQQ RRR')
GO

# SQL Server Demo for the Java Extension
This is a demo to show the Java Extension capability in SQL Server 2019 and later. This demo shows how to run Java code on SQL Server on Windows. The same code will work with SQL Server Java on Linux. Demo for this scenario coming soon.
## Requirements
1. Install SQL Server 2019 CTP 2.0 or later on Windows, following the guidance on this page for Java.
https://docs.microsoft.com/en-us/sql/advanced-analytics/java/extension-java?view=sqlallproducts-allversions#install-on-windows
2. Follow the rest of the steps on this page to install all dependencies
https://docs.microsoft.com/en-us/sql/advanced-analytics/java/extension-java?view=sqlallproducts-allversions#install-on-windows
## Demo Steps
1. I used the sample code from https://docs.microsoft.com/en-us/sql/advanced-analytics/java/java-first-sample?view=sqlallproducts-allversions and included the 3 Java class source: **InputRow.java**, **OutputRow.java**, **Ngram.java**
2. I compiled the Java classes using the script **buildjava8.cmd** with Java 8 SDK (so it will be compatible with Linux). The compiled code is placed into C:\java\pkg. You can adjust this to whatever directory you need but you will need to make edits in the following steps.
3. You need to set up permissions on the folder that holds your compiled classes. Use the instructions per the documentation at https://docs.microsoft.com/en-us/sql/advanced-analytics/java/java-first-sample?view=sqlallproducts-allversions#6---set-permissions
4. Run the **javasetup.sql** script to create the database and objects.
5. Run the **javangramdemo.sql** to execute the Java code. This script assumes the class files are in c:\java\pkg. I recommend you keep the pkg subdirectory. If you need to have the classes in a different parent folder, modify this part of the script SET @myClassPath = N'C:\java'
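Before running the demo, you can confirm that the instance allows external scripts (javasetup.sql enables this setting); the following query should report a value_in_use of 1:

```sql
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'external scripts enabled';
```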

USE TutorialDB;
--STEP 1 - Setup model table for storing the model
DROP TABLE IF EXISTS rental_models;
GO
CREATE TABLE rental_models (
model_name VARCHAR(30) NOT NULL DEFAULT('default model'),
lang VARCHAR(30),
model VARBINARY(MAX),
native_model VARBINARY(MAX),
PRIMARY KEY (model_name, lang)
);
GO
--STEP 2 - Train model using revoscalepy rx_dtree or rx_lin_mod
DROP PROCEDURE IF EXISTS generate_rental_py_native_model;
go
CREATE PROCEDURE generate_rental_py_native_model (@model_type varchar(30), @trained_model varbinary(max) OUTPUT)
AS
BEGIN
EXECUTE sp_execute_external_script
@language = N'Python'
, @script = N'
from revoscalepy import rx_lin_mod, rx_serialize_model, rx_dtree
from pandas import Categorical
import pickle
rental_train_data["Holiday"] = rental_train_data["Holiday"].astype("category")
rental_train_data["Snow"] = rental_train_data["Snow"].astype("category")
rental_train_data["WeekDay"] = rental_train_data["WeekDay"].astype("category")
if model_type == "linear":
linmod_model = rx_lin_mod("RentalCount ~ Month + Day + WeekDay + Snow + Holiday", data = rental_train_data)
trained_model = rx_serialize_model(linmod_model, realtime_scoring_only = True);
if model_type == "dtree":
dtree_model = rx_dtree("RentalCount ~ Month + Day + WeekDay + Snow + Holiday", data = rental_train_data)
trained_model = rx_serialize_model(dtree_model, realtime_scoring_only = True);
'
, @input_data_1 = N'select "RentalCount", "Year", "Month", "Day", "WeekDay", "Snow", "Holiday" from dbo.rental_data where Year < 2015'
, @input_data_1_name = N'rental_train_data'
, @params = N'@trained_model varbinary(max) OUTPUT, @model_type varchar(30)'
, @model_type = @model_type
, @trained_model = @trained_model OUTPUT;
END;
GO
--STEP 3 - Save model to table
--Line of code to empty table with models
--TRUNCATE TABLE rental_models;
--Save Linear model to table
DECLARE @model VARBINARY(MAX);
EXEC generate_rental_py_native_model "linear", @model OUTPUT;
INSERT INTO rental_models (model_name, native_model, lang) VALUES('linear_model', @model, 'Python');
--Save DTree model to table
DECLARE @model2 VARBINARY(MAX);
EXEC generate_rental_py_native_model "dtree", @model2 OUTPUT;
INSERT INTO rental_models (model_name, native_model, lang) VALUES('dtree_model', @model2, 'Python');
-- Look at the models in the table
SELECT * FROM rental_models;
GO
--STEP 4 - Use the native PREDICT (native scoring) to predict number of rentals for both models
DECLARE @model VARBINARY(MAX) = (SELECT TOP(1) native_model FROM dbo.rental_models WHERE model_name = 'linear_model' AND lang = 'Python');
SELECT d.*, p.* FROM PREDICT(MODEL = @model, DATA = dbo.rental_data AS d) WITH(RentalCount_Pred float) AS p;
GO
--Native scoring with dtree model
DECLARE @model VARBINARY(MAX) = (SELECT TOP(1) native_model FROM dbo.rental_models WHERE model_name = 'dtree_model' AND lang = 'Python');
SELECT d.*, p.* FROM PREDICT(MODEL = @model, DATA = dbo.rental_data AS d) WITH(RentalCount_Pred float) AS p;
GO

# SQL Server demo using Python and Native Scoring
This demo shows the capability of running Python code in SQL Server 2017 and later and the Native scoring feature.
This demo pulls SQL commands from the main site for a rental prediction tutorial using Python which you can find at https://microsoft.github.io/sql-ml-tutorials/python/rentalprediction/. Use this site for the entire tutorial in case any changes or additions are made there. If you want to use Native Scoring as a feature you do not have to install the Machine Learning Services feature but this demo will require it because we train the model using Python with SQL Server.
## Requirements
- SQL Server 2017 or later for Windows installed (Developer Edition will work just fine). You must choose the Machine Learning Services feature during installation (or add this feature if you have already installed)
- You need to download the TutorialDB database backup from https://sqlchoice.blob.core.windows.net/sqlchoice/static/TutorialDB.bak
- Enable this configuration option and restart SQL Server
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE
GO
**Note:** With SQL Server 2019, Python support exists for SQL Server on Linux. A Linux version of this same demo can be achieved by installing Machine Learning services for Linux as documented at https://docs.microsoft.com/en-us/machine-learning-server/install/machine-learning-server-linux-install and then following the demo steps below.
## Demo Steps
1. Run **setup.sql** to restore the TutorialDB database backup
2. Run the statements and examine the output from **rental_prediction.sql** to see an example of a machine learning model with Python and SQL Server.
3. Run the statements and examine the output from **native_scoring.sql** to see an example of a machine learning model trained with Python but executed with native scoring in T-SQL.
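A quick, generic sanity check that Machine Learning Services can execute Python at all (this is not one of the tutorial scripts, just a smoke test before you start) is:

```sql
EXEC sp_execute_external_script
      @language = N'Python'
    , @script = N'print("Python works inside SQL Server")';
GO
```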

USE TutorialDB;
-- Table containing ski rental data
SELECT * FROM [dbo].[rental_data];
-------------------------- STEP 1 - Setup model table ----------------------------------------
DROP TABLE IF EXISTS rental_py_models;
GO
CREATE TABLE rental_py_models (
model_name VARCHAR(30) NOT NULL DEFAULT('default model') PRIMARY KEY,
model VARBINARY(MAX) NOT NULL
);
GO
-------------------------- STEP 2 - Train model ----------------------------------------
-- Stored procedure that trains and generates a Python model using the rental_data and a linear regression algorithm
DROP PROCEDURE IF EXISTS generate_rental_py_model;
go
CREATE PROCEDURE generate_rental_py_model (@trained_model varbinary(max) OUTPUT)
AS
BEGIN
EXECUTE sp_execute_external_script
@language = N'Python'
, @script = N'
df = rental_train_data
# Get all the columns from the dataframe.
columns = df.columns.tolist()
# Store the variable we will be predicting on.
target = "RentalCount"
from sklearn.linear_model import LinearRegression
# Initialize the model class.
lin_model = LinearRegression()
# Fit the model to the training data.
lin_model.fit(df[columns], df[target])
import pickle
#Before saving the model to the DB table, we need to convert it to a binary object
trained_model = pickle.dumps(lin_model)
'
, @input_data_1 = N'select "RentalCount", "Year", "Month", "Day", "WeekDay", "Snow", "Holiday" from dbo.rental_data where Year < 2015'
, @input_data_1_name = N'rental_train_data'
, @params = N'@trained_model varbinary(max) OUTPUT'
, @trained_model = @trained_model OUTPUT;
END;
GO
------------------- STEP 3 - Save model to table -------------------------------------
TRUNCATE TABLE rental_py_models;
DECLARE @model VARBINARY(MAX);
EXEC generate_rental_py_model @model OUTPUT;
INSERT INTO rental_py_models (model_name, model) VALUES('linear_model', @model);
SELECT * FROM rental_py_models;
------------------ STEP 4 - Use the model to predict number of rentals --------------------------
DROP PROCEDURE IF EXISTS py_predict_rentalcount;
GO
CREATE PROCEDURE py_predict_rentalcount (@model varchar(100))
AS
BEGIN
DECLARE @py_model varbinary(max) = (select model from rental_py_models where model_name = @model);
EXEC sp_execute_external_script
@language = N'Python'
, @script = N'import pickle
rental_model = pickle.loads(py_model)
df = rental_score_data
#print(df)
# Get all the columns from the dataframe.
columns = df.columns.tolist()
# Filter the columns to remove ones we do not want.
# columns = [c for c in columns if c not in ["Year"]]
# Store the variable we will be predicting on.
target = "RentalCount"
# Generate our predictions for the test set.
lin_predictions = rental_model.predict(df[columns])
print(lin_predictions)
# Import the scikit-learn function to compute error.
from sklearn.metrics import mean_squared_error
# Compute error between our test predictions and the actual values.
lin_mse = mean_squared_error(lin_predictions, df[target])
#print(lin_mse)
import pandas as pd
predictions_df = pd.DataFrame(lin_predictions)
OutputDataSet = pd.concat([predictions_df, df["RentalCount"], df["Month"], df["Day"], df["WeekDay"], df["Snow"], df["Holiday"], df["Year"]], axis=1)
'
, @input_data_1 = N'Select "RentalCount", "Year" ,"Month", "Day", "WeekDay", "Snow", "Holiday" from rental_data where Year = 2015'
, @input_data_1_name = N'rental_score_data'
, @params = N'@py_model varbinary(max)'
, @py_model = @py_model
with result sets (("RentalCount_Predicted" float, "RentalCount" float, "Month" float,"Day" float,"WeekDay" float,"Snow" float,"Holiday" float, "Year" float));
END;
GO
---------------- STEP 5 - Create DB table to store predictions -----------------------
DROP TABLE IF EXISTS [dbo].[py_rental_predictions];
GO
--Create a table to store the predictions in
CREATE TABLE [dbo].[py_rental_predictions](
[RentalCount_Predicted] [int] NULL,
[RentalCount_Actual] [int] NULL,
[Month] [int] NULL,
[Day] [int] NULL,
[WeekDay] [int] NULL,
[Snow] [int] NULL,
[Holiday] [int] NULL,
[Year] [int] NULL
) ON [PRIMARY]
GO
---------------- STEP 6 - Save the predictions in a DB table -----------------------
TRUNCATE TABLE py_rental_predictions;
--Insert the results of the predictions for test set into a table
INSERT INTO py_rental_predictions
EXEC py_predict_rentalcount 'linear_model';
-- Select contents of the table
SELECT * FROM py_rental_predictions;

USE master
GO
RESTORE DATABASE TutorialDB
FROM DISK = 'c:\demos\sql2019\python\TutorialDB.bak'
WITH
MOVE 'TutorialDB' TO 'c:\demos\sql2019\python\TutorialDB.mdf',
MOVE 'TutorialDB_log' TO 'c:\demos\sql2019\python\TutorialDB.ldf'
GO

# Module 5 Activities - Modern Development Platform
These represent demos and examples you can run to see new capabilities for the modern development platform integrated with SQL Server 2019
## python
Learn how to execute a machine learning model with Python and built-in native scoring with SQL Server.
## java
Learn how to execute a Java program with the extensibility architecture of SQL Server 2019 using T-SQL.

# Deploy SQL Server on Linux
This demo is used to show the basic steps to deploy SQL Server on Linux using RedHat Enterprise Linux. While this exercise shows you how to deploy SQL Server on RedHat, Ubuntu and SUSE are also supported. See our installation guidance for SQL Server 2017 https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup?view=sql-server-2017 and SQL Server 2019 https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup?view=sql-server-2017#sqlvnext for more information on how to install SQL Server on those Linux distributions.
## Requirements
- A RedHat Linux installation. This can be either a VM or a bare metal machine. Look at the Prerequisites in our documentation page at https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-red-hat?view=sql-server-2017#prerequisites
- Ensure your machine or VM meets the system requirements at https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup?view=sql-server-2017#system.
- Use either a built-in ssh client (e.g. Linux, macOS, or the bash shell for Windows at https://docs.microsoft.com/en-us/windows/wsl/install-win10). As a Windows user, I like to use MobaXterm (https://mobaxterm.mobatek.net/).
- You also need to establish connectivity with your Linux server or VM by having the IP address available and port 22 for ssh open for connectivity.
## How to deploy SQL Server on Linux
Run all of the following commands from your ssh session with the bash shell. This lab assumes your Linux Server is connected to the internet. You can do an offline installation of SQL Server on Linux. See the documentation at <https://docs.microsoft.com/sql/linux/sql-server-linux-setup#offline>
For this exercise we will be using SQL Server 2019 Preview but you can also do the same set of instructions for SQL Server 2017 as documented at https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-red-hat?view=sql-server-2017.
1. Copy the repository configuration file using the following command
`sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-preview.repo`
The repository configuration file is a text file that contains the location of the SQL Server package for RHEL. This repo file will point to the latest preview build of SQL Server 2019 on Linux. See our documentation for how to use a repository file for other branches <https://docs.microsoft.com/sql/linux/sql-server-linux-change-repo>
With a good internet connection this should take a few seconds.
2. Use the yum package manager to kick off the installation with the following command (-y means don't prompt)
`sudo yum install -y mssql-server`
This involves downloading a ~220MB file and performing the first phase of the install. With a good internet connection this should only take a few minutes.
3. Now you must complete the installation by executing a bash shell script we install called **mssql-conf** (which uses a series of python scripts). We will also use mssql-conf later to perform a configuration task for SQL Server. Execute the following command
`sudo /opt/mssql/bin/mssql-conf setup`
Go through the prompts to pick Edition (choose Developer or Enterprise Core for these labs), accept the EULA, and put in the sa password (must meet strong password requirements like SQL Server on Windows). Remember the sa password as you will use it often in the labs.
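If you would rather script this step than answer the prompts, mssql-conf can also take its answers from environment variables. A minimal sketch, assuming the Developer edition and a placeholder sa password you would replace (the -n switch suppresses prompts on current builds - check `mssql-conf --help` on yours):

```bash
# Unattended setup: edition, EULA acceptance, and sa password come from the environment
sudo MSSQL_PID='Developer' ACCEPT_EULA='Y' MSSQL_SA_PASSWORD='YourStrong!Passw0rd' \
  /opt/mssql/bin/mssql-conf -n setup
```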
4. Open up the firewall on Linux for the SQL Server port by running the following two commands. This is required if you plan to connect to SQL Server on a remote client.
`sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent`
`sudo firewall-cmd --reload`
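To confirm the rule took effect, you can list the ports the public zone now allows; 1433/tcp should appear:

```bash
# Verify that port 1433/tcp is open in the public zone
sudo firewall-cmd --zone=public --list-ports
```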
Believe it or not, that's it! You have now installed SQL Server on Linux, which includes the core database engine and SQL Server Agent.
5. Install the command line tools as documented at https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-red-hat?view=sql-server-2017#tools
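For reference, the documented steps boil down to registering the Microsoft tools repository and installing the mssql-tools package; a sketch for RHEL 7 follows (treat the repo URL and package names as assumptions and prefer the documentation link above if they differ):

```bash
# Register the Microsoft repository that carries the SQL Server command line tools (RHEL 7)
sudo curl -o /etc/yum.repos.d/msprod.repo https://packages.microsoft.com/config/rhel/7/prod.repo
# Install sqlcmd/bcp and the ODBC driver they depend on
sudo yum install -y mssql-tools unixODBC-devel
# Add the tools to PATH for future bash sessions
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc
```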
6. Install the mssql-cli tool as documented at https://github.com/dbcli/mssql-cli/blob/master/doc/installation/linux.md#red-hat-enterprise-linux-rhel-7. This is a new command line tool which you will use in other exercises.

sudo cp WideWorldImporters-Full.bak /var/opt/mssql
sudo chown mssql:mssql /var/opt/mssql/WideWorldImporters-Full.bak
# Explore SQL Server on Linux
In this exercise you will learn how to explore SQL Server on Linux after you have deployed it, including:
- Explore the SQL Server installation
- Common Linux commands
- How to connect to SQL Server
- How to restore a backup and run queries
- How to configure SQL Server
## Requirements
Many of these requirements are met if you follow the **deploy** exercises in this Module.
- SQL Server deployed on Linux
- SQL Server command line tools deployed for Linux
- SQL Server Management Studio (SSMS) installed on a Windows client that can connect to your Linux Server or VM.
- Azure Data Studio deployed on Windows, Linux, or macOS on a client that can connect to your Linux Server or VM. You can read more about how to install Azure Data Studio at https://docs.microsoft.com/en-us/sql/azure-data-studio/download?view=sql-server-2017.
**Tip**: For many of my demos, I set up a resource group in Azure and put all of the VMs that I use for connectivity in the same resource group. This puts them all in the same virtual network. Now I can put the private IP addresses of all my VMs in the /etc/hosts file on either Windows or Linux and use those names to connect between VMs.
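An /etc/hosts entry is just the private IP followed by the name you want to use; the address and name below are placeholders for your own VMs:

```bash
# Map a friendly name to the Linux SQL Server VM (hypothetical IP and name)
echo '10.0.0.5  bwsql2019' | sudo tee -a /etc/hosts
```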
## Explore the SQL Server installation
In this exercise, you will explore the SQL Server installation post deployment.
1. Run the following command to see the state of SQL Server
`sudo systemctl status mssql-server`
Using sudo allows you to see the tail of the SQL Server ERRORLOG file.
2. Run the following commands to stop, start, and restart SQL Server.
`sudo systemctl stop mssql-server`
`sudo systemctl status mssql-server`
`sudo systemctl start mssql-server`
`sudo systemctl status mssql-server`
`sudo systemctl restart mssql-server`
`sudo systemctl status mssql-server`
Note that there are no return values when starting, stopping, or restarting. You must run systemctl status to check on the status of SQL Server. With each start of SQL Server, you should see different PID values (for new processes).
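One way to see the new process IDs for yourself is to ask systemd for the main PID (or list the sqlservr PIDs directly) before and after a restart:

```bash
# Show the main sqlservr PID tracked by systemd
systemctl show -p MainPID mssql-server
# Or list all sqlservr process IDs
pidof sqlservr
```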
3. Let's see where everything is installed. Run the following command to see where the binaries are installed
`sudo ls -l /opt/mssql/bin`
This directory contains the sqlservr executable, mssql-conf script, and other files to support crash dumps. There is no method today to change the location of these files.
4. Run these commands to see where the default directories for databases and ERRORLOG log (and other log files) are stored
`sudo ls -l /var/opt/mssql/data`
`sudo ls -l /var/opt/mssql/log`
Note from the results that the owner and group of these files is mssql:mssql. mssql is a non-interactive user (with a group of the same name) and is the context under which sqlservr executes. Any time sqlservr needs to read or write a file, that file or directory must have mssql:mssql permissions. There is no method to change this today. You can change the default locations of database files, backups, transaction log files, and the ERRORLOG directory using the mssql-conf script.
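As an example of the latter, the sketch below moves the default data directory for new databases to a hypothetical /sqldata path; the directory must already exist and be owned by mssql:mssql, and SQL Server must be restarted:

```bash
# Create the new directory and give the mssql user and group ownership (hypothetical path)
sudo mkdir -p /sqldata
sudo chown mssql:mssql /sqldata
# Point the default data directory at it, then restart SQL Server
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /sqldata
sudo systemctl restart mssql-server
```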
5. Let's dump out the current ERRORLOG file using a command on Linux called **cat** (and another variation using **more** so you can page the results)
`sudo cat /var/opt/mssql/log/errorlog`
`sudo more /var/opt/mssql/log/errorlog`
## Common Linux commands
Now that you have deployed SQL Server on Linux here are a few common commands you may need for Linux while working with SQL Server.
1. Find out information about the computer running Linux by running the following command
`sudo dmidecode -t 1`
2. Find out information about the Linux distribution by running the following command
`cat /etc/*-release`
3. Find out information about memory configured on the Linux Server by running the following command
`cat /proc/meminfo`
The **MemTotal** is the total amount of physical memory on the Linux Server
The /proc directory is known as the *proc filesystem* and there is other interesting information exposed in files in this directory.
4. Find out about the number of cores, sockets, NUMA nodes, and chip architecture by running the following command
`lscpu`
5. The **ps** command is used to view all processes on the Linux Server. Use this command to scroll through all processes including parent/child process relationships
`ps axjf | more`
6. Run the following command to see a list of disks and mounted file systems on these disks including disk sizes
`df -H`
The disks starting with /dev are the physical disks for the server.
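For a device-oriented view that complements df, lsblk lists the block devices, their sizes, and where they are mounted:

```bash
# List block devices, sizes, and mount points
lsblk
```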
7. To see basic performance information by process run the following command
`top`
**top** will sort the results with the process using the most CPU at the top, which, since nothing else is running, should be sqlservr.
The **KiB Mem** values show physical total, free, and used memory.
The **RES** column is the amount of physical memory used by a process like sqlservr.
**top** is interactive so type in "q" to quit the program
8. **iotop** is a utility to monitor I/O statistics per process. However, it is not installed by default. Run the following command to first install iotop
`sudo yum install -y iotop`
Now run the following command to execute iotop
`sudo iotop`
This shows the overall I/O on the system plus I/O per process. Type in "q" to exit the program. Run this version of the command to only view I/O for processes actually using I/O. This program is interactive and refreshes itself every few seconds
`sudo iotop -o`
There are many other options with iotop. Execute the command `man iotop` to experiment with all iotop options.
9. **htop** is an interactive program to see process utilization information across processors and processes. However, it is not installed by default so run the following commands first to install htop.
`sudo wget dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-11.noarch.rpm`
`sudo rpm -ihv epel-release-7-11.noarch.rpm`
`sudo yum install -y htop`
Now run the interactive htop command to observe its display
`htop`
Type "q" to exit the tool
10. You will likely need a good editor while using Linux. While the editor vi is installed by default, I recommend you use the **nano** editor. It may be already installed but if not run the following command to install it
`sudo yum install -y nano`
Let's use nano to create a shell script to run on Linux
`nano dumperrorlog.sh`
nano is a full screen editor. Type in the following in the editor window
`sudo cat /var/opt/mssql/log/errorlog`
Type Ctrl+X to exit. You will get prompted to save the file
Run the following command to make the script executable
`chmod u+x dumperrorlog.sh`
Now execute the script
`./dumperrorlog.sh`
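As a small extension, you could let the script take an optional line count and show just the tail of the ERRORLOG; a sketch:

```bash
#!/bin/bash
# dumperrorlog.sh - show the last N lines of the ERRORLOG (default 50)
LINES=${1:-50}
sudo tail -n "$LINES" /var/opt/mssql/log/errorlog
```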
## How to connect to SQL Server
Here are a few examples of how to do basic connectivity to SQL Server on Linux. These exercises assume you have installed the command line tools for Linux, mssql-cli, and have a machine or VM running Windows that can connect to the Linux server or VM.
1. Run a quick test to connect with sqlcmd by executing the following
`sqlcmd -Usa -Slocalhost`
Put in your sa password. At the sqlcmd prompt, run the following T-SQL statement
```sql
SELECT @@VERSION
GO
```
Type in "exit" to quit sqlcmd
Note: You can connect with any sqlcmd tool from Windows, Linux, or macOS in the same way, provided you have connectivity to your Linux Server or VM. The -S<server> parameter would be the hostname or IP address of the Linux Server or VM.
2. Now test mssql-cli like we did for sqlcmd by running the following command
`mssql-cli -Usa -Slocalhost`
You should get a new prompt like sqlcmd. At this prompt type in the following T-SQL command and hit Enter
```sql
SELECT @@VERSION
```
Notice that as you start typing you see IntelliSense functionality kick in, which is one of the differences from sqlcmd.
If you are not put back into the mssql-cli prompt, type "q" to get back to the prompt.
mssql-cli does not recognize the "GO" keyword as sqlcmd does. Use a ";" to separate batches. You can also hit F3 to type statements in multiple lines but they will all be in one batch.
Type in "exit" to quit mssql-cli
Note: You can connect with any mssql-cli tool from Windows, Linux, or macOS in the same way, provided you have connectivity to your Linux Server or VM. The -S<server> parameter would be the hostname or IP address of the Linux Server or VM.
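mssql-cli can also run a single query non-interactively, which is handy for scripts. The -Q option below mirrors sqlcmd's behavior; treat it as an assumption and check `mssql-cli --help` on your build:

```bash
# Run one query and exit (you will be prompted for the sa password)
mssql-cli -Usa -Slocalhost -Q "SELECT name FROM sys.databases;"
```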
3. Connect with SQL Server Management Studio (SSMS) using SQL Authentication with the sa account and the server name or IP address:port for your Linux Server. Notice how SSMS works "as is" against the Linux Server and looks almost like a SQL Server on Windows deployment.
Use Object Explorer and the Query Editor just like you would a normal SQL Server instance. Go through some of the steps in the SSMS tutorial in our documentation at <https://docs.microsoft.com/sql/ssms/tutorials/tutorial-sql-server-management-studio>
4. Go through the quickstart tutorial for connecting to SQL Server from Azure Data Studio with the SQL Server on Linux deployment at https://docs.microsoft.com/en-us/sql/azure-data-studio/quickstart-sql-server?view=sql-server-2017.
## How to restore a backup and run queries
In this exercise, you will learn how to restore a backup of a database to SQL Server on Linux, and run queries against the database.
Now you will learn the great compatibility story of SQL Server on Linux by restoring a backup from SQL Server on Windows to SQL Server on Linux. And you will interact with this database using sqlcmd and mssql-cli. This section of the lab assumes your Linux Server is connected to the internet. If you are not connected to the internet, you can download the database to restore from <https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak> and then copy it to your Linux Server (MobaXterm drag and drop is really nice for this)
1. From your Linux ssh session, run the following command from the bash shell
`wget https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak`
Depending on your network speed this should take no more than a few minutes
2. Copy and restore the WideWorldImporters database. Copy the **cpwwi.sh**, **restorewwi.sh**, and **restorewwi_linux.sql** files from the downloaded zip of the GitHub repo into your home directory on Linux. MobaXterm provides drag and drop capabilities to do this. Copy these files and drop them into the "explorer" pane in MobaXterm on the left-hand side of your ssh session.
Note: You can skip this step if you have already cloned the git repo in the prelab. If you have done this, the scripts in this lab are in the **sqllinuxlab** subdirectory. You can copy them into your home directory or edit them to ensure you have the right path for the WideWorldImporters backup file.
3. Run the following commands from the bash shell to make the scripts executable (supply the root password if prompted)
`sudo chmod u+x cpwwi.sh`
`sudo chmod u+x restorewwi.sh`
4. Copy the backup file to the SQL Server directory so it can access the file and change permissions on the backup file by executing the following command in the bash shell
`./cpwwi.sh`
5. Now restore the database by executing the following command from the bash shell
`./restorewwi.sh`
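Before connecting interactively, you can quickly confirm the restore succeeded with a one-line sqlcmd query (enter the sa password when prompted); the database should report ONLINE:

```bash
# Check that WideWorldImporters restored and is ONLINE
sqlcmd -Usa -Slocalhost -Q "SELECT name, state_desc FROM sys.databases WHERE name = 'WideWorldImporters'"
```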
6. Connect with sa to run a query against this database. Run sqlcmd first to connect. Type in the sa password when prompted
`sqlcmd -Usa -Slocalhost`
7. From the sqlcmd prompt run these commands
```sql
USE WideWorldImporters
GO
SELECT * FROM [Sales].[Customers]
GO
```
Type in "exit" to quit sqlcmd
8. Now run the same set of commands using mssql-cli. Connect to SQL Server with mssql-cli. Type in the sa password when prompted
`mssql-cli -Usa -Slocalhost`
9. Run the following T-SQL commands from the mssql-cli prompt (BONUS: Use IntelliSense to complete these queries)
`USE WideWorldImporters;SELECT * FROM Sales.Customers;`
See how mssql-cli by default will present rows in a vertical record format. Hit Enter or Space to keep paging as many rows as you like.
Type in "q" at any time to get back to the prompt and "exit" to quit mssql-cli
## How to configure SQL Server
In this exercise, you will learn how to configure SQL Server on Linux with the mssql-conf tool.
There may be situations where you need to enable a trace flag globally and at SQL Server startup time. For Windows, this is done through the SQL Server Configuration Manager. For SQL Server on Linux, you will use the mssql-conf script. A list of all documented trace flags can be found at <https://docs.microsoft.com/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql>.
Let's say you wanted to enable trace flag 1222 for deadlock details to be reported in the ERRORLOG.
1. Run the following command from an ssh session with the bash shell
`sudo /opt/mssql/bin/mssql-conf traceflag 1222 on`
2. Per these instructions, restart SQL Server with the following command:
`sudo systemctl restart mssql-server`
Note: If this is successful, the command just returns to the shell prompt
3. Verify the trace flag was properly set by looking at the ERRORLOG with the following command
`sudo more /var/opt/mssql/log/errorlog`
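If you prefer not to page through the whole file, a quick filter works too; the exact wording of the startup message can vary by build, so this simply searches for the trace flag number:

```bash
# Search the ERRORLOG for references to trace flag 1222
sudo grep -n "1222" /var/opt/mssql/log/errorlog
```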
4. Use sqlcmd or mssql-cli to verify this trace flag is set by running the following T-SQL statement
```sql
DBCC TRACESTATUS(-1)
```
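When you are finished with the demo, the same mssql-conf script turns the trace flag back off; a restart is again required for the change to take effect:

```bash
# Disable trace flag 1222 and restart SQL Server
sudo /opt/mssql/bin/mssql-conf traceflag 1222 off
sudo systemctl restart mssql-server
```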

/opt/mssql-tools/bin/sqlcmd -Usa -irestorewwi_linux.sql
restore database WideWorldImporters from disk = '/var/opt/mssql/WideWorldImporters-Full.bak' with
move 'WWI_Primary' to '/var/opt/mssql/data/WideWorldImporters.mdf',
move 'WWI_UserData' to '/var/opt/mssql/data/WideWorldImporters_UserData.ndf',
move 'WWI_Log' to '/var/opt/mssql/data/WideWorldImporters.ldf',
move 'WWI_InMemory_Data_1' to '/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1'
go
# Module 6 Activities - SQL Server on Linux
These represent demos and examples you can run to deploy and explore and use SQL Server on Linux
## deploy
Learn the basics of how to deploy SQL Server on Linux.
## explore
Learn how to explore SQL Server on Linux including common Linux commands, connecting to SQL Server, restoring a backup, querying, and configuring SQL Server.

# Module 7 Activities - SQL Server Containers
These represent demos and examples to show you the basics of SQL Server containers and how to update and upgrade SQL Server using containers.
## sqlcontainers
Learn the basics of SQL Server with containers
## sqlcontainerupdate
Learn more about storage volumes and updating and upgrading SQL Server using containers.

sudo docker stop sql2017cu10
sudo docker stop sql2
sudo docker rm sql2017cu10
sudo docker rm sql2
sudo docker volume rm sqlvolume sqlvolume2

## SQL Server Containers Fundamentals
In this exercise you will be exploring the fundamentals for SQL Server containers.
## Requirements
- These exercises were built to run using Docker for Linux on RedHat Enterprise Linux. However, you can review all the details of the scripts provided and use them with Docker for Windows or Docker for macOS.
- If using a RedHat Linux Server or VM and Docker is not installed, you can use these steps to install the Docker Community Edition for CentOS
`sudo yum install -y yum-utils device-mapper-persistent-data lvm2`
`sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo`
`sudo yum install http://mirror.centos.org/centos/7/extras/x86_64/Packages/pigz-2.3.3-1.el7.centos.x86_64.rpm`
`sudo yum install docker-ce`
Then make sure the Docker engine is running with these commands
`sudo systemctl status docker`
`sudo systemctl start docker`
- Make sure all scripts are executable by running the following command
`chmod u+x *.sh`
- Download the WideWorldImporters backup by using the **pullwwi.sh** script (or see the sketch below). This requires internet connectivity. You can also manually copy the backup into the current directory where you run the exercises.
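If you do not have **pullwwi.sh** handy, you can get the same result by downloading the backup into the current directory yourself, using the same release URL used elsewhere in these labs:

```bash
# Download the WideWorldImporters full backup into the current directory
wget https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/WideWorldImporters-Full.bak
```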
## Learning the basics of SQL Server containers
1. Run **step1_dockerruncu10.sh** to start a container with SQL Server 2017 CU10. This container is called sql2017cu10
2. Run **step2_dockercopy.sh** to copy the WWI backup into the container.
3. Run **step3_docker_restorewwi.sh** to restore the backup. This uses docker exec to **run sqlcmd inside the container**. Since this takes a few minutes it will run in the background using the -d parameter for docker exec.
4. Run **step4_dockerrun2.sh** to start another SQL container with the latest SQL 2017 update. This container is called sql2. Notice a different volume is used along with port 1402 instead of 1401.
5. Run **step5_containers.sh** to see both containers running. You now have two SQL Servers running on the same Linux machine using containers.
6. Run **step6_procs.sh** to see the processes for the Linux host, which include the docker daemon. Note the sqlservr processes as children underneath that process.
7. Run **step7_namespaces.sh** to see the different namespaces for the SQL Server containers
8. The restore should be finished from Step 3. Run **step8_dockerquery.sh** to run a query for the database by connecting **using sqlcmd outside of the container**.
9. Use docker exec to interact with the sql2017cu10 container through a shell by executing **step9_dockerexec.sh**. Notice the shell has the hostname used to run the container at the prompt.
- Run ps -axf to see the isolation of containers and that sqlservr is just about the only process running.
- Go look at the ERRORLOG file in /var/opt/mssql/log.
- Exit the shell inside the container by typing **exit**.
10. Leave all containers running as they will be used in the next set of exercises in the **sqlcontainerupdate** folder.
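Because sqlservr in the container writes its log output to stdout, docker logs gives you another quick way to peek at the ERRORLOG of a running container without exec'ing into it:

```bash
# Show the most recent SQL Server log output from the sql2017cu10 container
sudo docker logs --tail 20 sql2017cu10
```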

sudo docker run -e\
'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Sql2017isfast'\
--hostname sql2017cu10\
-p 1401:1433\
-v sqlvolume:/var/opt/mssql\
--name sql2017cu10\
-d\
mcr.microsoft.com/mssql/server:2017-CU10-ubuntu

sudo docker cp WideWorldImporters-Full.bak sql2017cu10:/var/opt/mssql

sudo docker exec -d sql2017cu10\
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Sql2017isfast' -Q 'RESTORE DATABASE WideWorldImp$

sudo docker run\
-e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Sql2017isfast'\
--hostname sql2\
-p 1402:1433\
-v sqlvolume2:/var/opt/mssql\
--name sql2\
-d\
mcr.microsoft.com/mssql/server:2017-latest-ubuntu

sudo ps -axf

sudo lsns

sqlcmd -Usa -Slocalhost,1401 -Q'USE WideWorldImporters;SELECT * FROM [Application].[People];' -PSql2017isfast

sudo docker exec -it sql2017cu10 bash

sudo docker stop sql2019
sudo docker rm sql2019
sudo docker stop sql2017latest
sudo docker rm sql2017latest

sudo docker exec -it sql2017latest bash

# SQL Containers exercises showing volume storage and update/upgrade of containers
This is a set of exercises to learn more about storage volumes for containers and see how to update, rollback, and upgrade SQL Server using containers.
## Requirements
- Complete the exercises in the sqlcontainers folder in this Module.
- Make sure all scripts are executable by running the following command
`chmod u+x *.sh`
## Looking at volume storage, updating, rolling back, and upgrading SQL containers
1. Let's update the sql2017cu10 container with the latest CU by running **step1_dockerupdate.sh**. This has to run a few upgrade scripts so takes a few minutes. While this is running, let's look at volume storage.
2. See details of the volumes used by the containers from the sqlcontainers exercise volumes by running **step2_inspectvols.sh**.
3. See what files are stored in the host folders used to provide volume storage by running **step3_volstorage.sh**.
4. Let's see if the container is updated by running **step4_dockerquery.sh**. If the query cannot run because script upgrades are still running, use **execintodocker.sh** to see the status in the ERRORLOG file of the container.
5. If time permits, you can execute **step5_dockerrollback.sh** to go back to the SQL 2017 CU10 build of that container.
6. Upgrade the container to SQL Server 2019 preview by executing **step6_dockerupgrade.sh**.
7. Run **cleanup.sh** to stop and remove containers from this exercise. There is a cleanup.sh script in the **sqlcontainers** folder to cleanup containers created for that exercise.
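After step 6 you can confirm the upgraded container is on a SQL Server 2019 build by running sqlcmd inside it; the container name, password, and tools path below match the scripts in this folder:

```bash
# Confirm the upgraded sql2019 container reports a SQL Server 2019 build
sudo docker exec -it sql2019 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Sql2017isfast' -Q 'SELECT @@VERSION'
```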

sudo docker stop sql2017cu10
sudo docker run\
-e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Sql2017isfast'\
-p 1401:1433\
-v sqlvolume:/var/opt/mssql\
--hostname sql2017latest\
--name\
sql2017latest\
-d\
mcr.microsoft.com/mssql/server:2017-latest

sudo ls /var/lib/docker/volumes/sqlvolume/_data
sudo ls /var/lib/docker/volumes/sqlvolume2/_data

sqlcmd -Usa -Slocalhost,1401 -Q'USE WideWorldImporters;SELECT * FROM [Application].[People];' -PSql2017isfast

sudo docker stop sql2017latest
sudo docker start sql2017cu10

sudo docker stop sql2017cu10
sudo docker stop sql2017latest
sudo docker run\
-e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Sql2017isfast'\
-p 1401:1433\
-v sqlvolume:/var/opt/mssql\
--name sql2019\
-d\
mcr.microsoft.com/mssql/rhel/server:2019-CTP2.2

USE [master]
GO
-- Enable PB connectivity to a Hadoop HDFS source which in this case is just Azure Blob Storage
--
sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
GO
RECONFIGURE
GO
-- Enable PB export to be able to ingest data into the HDFS target
--
sp_configure 'allow polybase export', 1
GO
RECONFIGURE
GO
-- STOP: SQL Server must be restarted for this to take effect
--
USE [WideWorldImporters]
GO
-- Only run this if you have not already created a master key in the db
--
--CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo'
--GO
-- IDENTITY: any string (this is not used for authentication to Azure storage).
-- SECRET: your Azure storage account key.
DROP DATABASE SCOPED CREDENTIAL AzureStorageCredential
GO
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'user', Secret = 'C5aFpK587sIDFIMSEqXwA08xlhDM34/rfOz2g+sVq/hcKReo6agvT9JZcWGe9NtEyHEypK095WZtDdE/gkKZNQ=='
GO
-- LOCATION: Azure account storage account name and blob container name.
-- CREDENTIAL: The database scoped credential created above.
DROP EXTERNAL DATA SOURCE bwdatalake
GO
CREATE EXTERNAL DATA SOURCE bwdatalake with (
TYPE = HADOOP,
LOCATION ='wasbs://wwi@bwdatalake.blob.core.windows.net',
CREDENTIAL = AzureStorageCredential
)
GO
-- FORMAT TYPE: Type of format in Hadoop (DELIMITEDTEXT, RCFILE, ORC, PARQUET).
CREATE EXTERNAL FILE FORMAT TextFileFormat WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS (FIELD_TERMINATOR ='|',
USE_TYPE_DEFAULT = TRUE))
GO
-- Create a schema called hdfs
--
DROP SCHEMA hdfs
GO
CREATE SCHEMA hdfs
GO
-- LOCATION: path to file or directory that contains the data (relative to HDFS root).
DROP EXTERNAL TABLE [hdfs].[WWI_Order_Reviews]
GO
CREATE EXTERNAL TABLE [hdfs].[WWI_Order_Reviews] (
[OrderID] int NOT NULL,
[CustomerID] int NOT NULL,
[Rating] int NULL,
[Review_Comments] nvarchar(1000) NOT NULL
)
WITH (LOCATION='/WWI/',
DATA_SOURCE = bwdatalake,
FILE_FORMAT = TextFileFormat
)
GO
-- Ingest some data
--
INSERT INTO [hdfs].[WWI_Order_Reviews] VALUES (1, 832, 10, 'I had a great experience with my order')
GO
CREATE STATISTICS StatsforReviews on [hdfs].[WWI_Order_Reviews](OrderID, CustomerID)
GO
-- Now query the external table
--
SELECT * FROM [hdfs].[WWI_Order_Reviews]
GO
-- Let's do a filter to enable pushdown
--
SELECT * FROM [hdfs].[WWI_Order_Reviews]
WHERE OrderID = 1
GO
-- Let's join the review with our order and customer data
--
SELECT o.OrderDate, c.CustomerName, p.FullName as SalesPerson, wor.Rating, wor.Review_Comments
FROM [Sales].[Orders] o
JOIN [hdfs].[WWI_Order_Reviews] wor
ON o.OrderID = wor.OrderID
JOIN [Application].[People] p
ON p.PersonID = o.SalespersonPersonID
JOIN [Sales].[Customers] c
ON c.CustomerID = wor.CustomerID
GO

-- List out the nodes in the scale out group
--
SELECT * FROM sys.dm_exec_compute_nodes
GO
-- Get more details about the status of the nodes
--
SELECT * FROM sys.dm_exec_compute_node_status
GO
-- List out detailed errors from the nodes
--
SELECT * FROM sys.dm_exec_compute_node_errors
GO

# SQL Server 2019 Polybase Fundamentals
This folder contains demo scripts to show the basic functionality of Polybase by examining the configuration of nodes through DMVs, creating an external table over HDFS, and monitoring execution details through DMVs.
## Requirements - Install and Configure Polybase
These demos require that you install SQL Server 2019 on Windows Server and configure a head node and at least one compute node (i.e. a scale out group). This demo currently requires SQL Server 2019 CTP 2.3 or higher.
I used the installation instructions in the documentation from:
https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-installation?view=sql-server-ver15
and this to set up the scale out group
https://docs.microsoft.com/en-us/sql/relational-databases/polybase/configure-scale-out-groups-windows?view=sql-server-ver15
For my demos, I used the following deployment to set up a head node and 2 compute nodes using Azure (Note: I used the same resource group for all of these servers so they were part of the same virtual network):
- 1 Windows Server 2019 server which I configured using Server Manager as a domain controller (with a domain name of bobsql.com)
- 1 Windows Server 2019 server (bwpolybase) which I joined to the bobsql.com domain. I installed SQL Server 2019 and chose the Polybase feature (including Java, which required me to stop the install and install JRE 8 from the web). I chose the option for a scale out group, which required me to use the domain admin from bobsql.com for the services during install.
- 2 other Windows Server 2019 servers (bwpolybase2 and bwpolybase3) with the same process to join the domain bobsql.com and install SQL Server 2019 with Polybase.
I had to enable Polybase on all 3 SQL Servers per the documentation using sp_configure 'polybase enabled', 1 and this required a restart of SQL Server. You must do this step before setting up the scale out group.
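For reference, here is a sketch of that configuration change run through sqlcmd on each node; the -E switch assumes Windows authentication against the local default instance, and SQL Server (and the Polybase services) must be restarted afterwards:

```bash
# Enable Polybase on the local default instance (restart SQL Server afterwards)
sqlcmd -S . -E -Q "EXEC sp_configure 'polybase enabled', 1; RECONFIGURE;"
```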
I also first ensured that the Windows Firewall was configured for SQL Server and Polybase to open up the firewall ports. The rules are already installed; you just have to make sure they are enabled:
- SQL Server PolyBase - Database Engine - <SQLServerInstanceName> (TCP-In)
- SQL Server PolyBase - PolyBase Services - <SQLServerInstanceName> (TCP-In)
- SQL Server PolyBase - SQL Browser - (UDP-In)
I then used the sp_polybase_join_group procedure per the documentation on bwpolybase2 and bwpolybase3 to join the scale out group. This required restarting the Polybase services on each machine.
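The join itself is one procedure call on each compute node. The sketch below is an assumption-heavy illustration: bwpolybase is the head node name from this setup, 16450 is the default head node control channel port cited in the documentation, and MSSQLSERVER is the default instance name; verify the exact parameters against the configure-scale-out-groups article linked above.

```bash
# Join this compute node to the scale out group headed by bwpolybase (default instance)
sqlcmd -S . -E -Q "EXEC sp_polybase_join_group 'bwpolybase', 16450, 'MSSQLSERVER';"
```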
## Demo Steps
### Check the Polybase configuration
1. Run the T-SQL commands in the script **polybase_status.sql** to see configuration of the scale out group and details of the head and compute nodes
2. Use SSMS to browse tables in the DWConfiguration, DWDiagnostics, and DWQueue databases which are installed on all nodes.
### Create an external table and track query and polybase execution
1. Download and restore the WideWorldImporters backup from https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers (a restore sketch follows these steps).
2. For my demo, I simply set up an Azure storage container using the instructions found at https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal (note: I did not create a blob, just the storage account and a container, which in my case I named **wwi**).
3. Run all the T-SQL commands in **hdfs_external_table.sql**. You will need to edit the appropriate details to point to your Azure storage container, including the credential and location for the data source.
4. Run the T-SQL commands in **trace_pb_query_execution.sql** to trace the execution of a query in Polybase.
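If you have not restored WideWorldImporters before, the restore in step 1 looks roughly like this; the paths below are placeholders for your own backup and data/log directories, and you should verify the logical file names against the FILELISTONLY output:

```sql
-- Check the logical file names contained in the backup
RESTORE FILELISTONLY FROM DISK = N'C:\backups\WideWorldImporters-Full.bak';
GO
-- Restore, moving each file to a local path
RESTORE DATABASE WideWorldImporters
FROM DISK = N'C:\backups\WideWorldImporters-Full.bak'
WITH MOVE N'WWI_Primary' TO N'C:\sqldata\WideWorldImporters.mdf',
     MOVE N'WWI_UserData' TO N'C:\sqldata\WideWorldImporters_UserData.ndf',
     MOVE N'WWI_Log' TO N'C:\sqldata\WideWorldImporters.ldf',
     MOVE N'WWI_InMemory_Data_1' TO N'C:\sqldata\WideWorldImporters_InMemory_Data_1',
     REPLACE;
GO
```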

Просмотреть файл

-- Find out queries against external tables
--
SELECT er.execution_id, t.*, er.*
FROM sys.dm_exec_distributed_requests er
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS t
ORDER BY end_time DESC
go
-- Find your execution_id and use this for the next query
--
SELECT execution_id, step_index, operation_type, distribution_type, location_type, status, total_elapsed_time, command
FROM sys.dm_exec_distributed_request_steps
WHERE execution_id = 'QID1285'
GO
-- Get more details on each step
--
SELECT execution_id, compute_node_id, spid, step_index, distribution_id, status, total_elapsed_time, row_count
FROM sys.dm_exec_distributed_sql_requests
WHERE execution_id = 'QID1285'
GO
-- Get more details from the compute nodes
--
SELECT *
FROM sys.dm_exec_dms_workers
WHERE execution_id = 'QID1285'
ORDER BY step_index, dms_step_index, distribution_id
go
-- Look more at external operations
--
SELECT *
FROM sys.dm_exec_external_work
WHERE execution_id = 'QID1285'
GO

Просмотреть файл

# Module 8 Activities - Data Virtualization
These represent demos and examples to show you the basics of data virtualization with Polybase in SQL Server 2019.
## polybase
Learn the basics of Polybase with SQL Server 2019
## sqldatahub
Learn how to connect to many different data sources with SQL Server 2019 to create your own data hub connecting to SQL Server 2008R2, Azure SQL Database, CosmosDB, Oracle, HDFS, and SAP HANA.

Просмотреть файл

USE [WideWorldImporters]
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo'
GO
/* specify credentials to external data source
* IDENTITY: user name for external source.
* SECRET: password for external source.
*/
DROP DATABASE SCOPED CREDENTIAL AzureSQLDatabaseCredentials
GO
CREATE DATABASE SCOPED CREDENTIAL AzureSQLDatabaseCredentials
WITH IDENTITY = 'thewandog', Secret = '$cprsqlserver2019'
GO
/* LOCATION: Location string should be of format '<vendor>://<server>[:<port>]'.
* PUSHDOWN: specify whether computation should be pushed down to the source. ON by default.
* CREDENTIAL: the database scoped credential, created above.
*/
DROP EXTERNAL DATA SOURCE AzureSQLDatabase
GO
CREATE EXTERNAL DATA SOURCE AzureSQLDatabase
WITH (
LOCATION = 'sqlserver://bwazuredb.database.windows.net',
PUSHDOWN = ON,
CREDENTIAL = AzureSQLDatabaseCredentials
)
GO
DROP SCHEMA azuresqldb
go
CREATE SCHEMA azuresqldb
GO
-- WWI was created with Latin1_General_100_CI_AS collation so I need to make my columns match that
-- collation if I want to support UNION.
--
DROP EXTERNAL TABLE azuresqldb.ModernStockItems
GO
CREATE EXTERNAL TABLE azuresqldb.ModernStockItems
(
[StockItemID] [int] NOT NULL,
[StockItemName] [nvarchar](100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[SupplierID] [int] NOT NULL,
[ColorID] [int] NULL,
[UnitPackageID] [int] NOT NULL,
[OuterPackageID] [int] NOT NULL,
[Brand] [nvarchar](50) COLLATE Latin1_General_100_CI_AS NULL,
[Size] [nvarchar](20) COLLATE Latin1_General_100_CI_AS NULL,
[LeadTimeDays] [int] NOT NULL,
[QuantityPerOuter] [int] NOT NULL,
[IsChillerStock] [bit] NOT NULL,
[Barcode] [nvarchar](50) COLLATE Latin1_General_100_CI_AS NULL,
[TaxRate] [decimal](18, 3) NOT NULL,
[UnitPrice] [decimal](18, 2) NOT NULL,
[RecommendedRetailPrice] [decimal](18, 2) NULL,
[TypicalWeightPerUnit] [decimal](18, 3) NOT NULL,
--[MarketingComments] [nvarchar](max) NULL,
--[InternalComments] [nvarchar](max) NULL,
--[Photo] [varbinary](max) NULL,
--[CustomFields] [nvarchar](max) NULL,
--[Tags] AS (json_query([CustomFields],N'$.Tags')),
--[SearchDetails] AS (concat([StockItemName],N' ',[MarketingComments])),
[LastEditedBy] [int] NOT NULL
)
WITH (
LOCATION='wwiazure.dbo.ModernStockItems',
DATA_SOURCE=AzureSQLDatabase
)
GO
CREATE STATISTICS ModernStockItemsStats ON azuresqldb.ModernStockItems ([StockItemID]) WITH FULLSCAN
GO
-- Let's scan the table first to make sure it works
--
SELECT * FROM azuresqldb.ModernStockItems
GO
-- Now try to filter on just the stockitemid
--
SELECT * FROM azuresqldb.ModernStockItems WHERE StockItemID = 100000
GO
-- Find all stockitems from the Graphic Design Institute supplier
--
SELECT msi.StockItemName, msi.Brand, c.ColorName
FROM azuresqldb.ModernStockItems msi
JOIN [Purchasing].[Suppliers] s
ON msi.SupplierID = s.SupplierID
and s.SupplierName = 'Graphic Design Institute'
JOIN [Warehouse].[Colors] c
ON msi.ColorID = c.ColorID
UNION
SELECT si.StockItemName, si.Brand, c.ColorName
FROM [Warehouse].[StockItems] si
JOIN [Purchasing].[Suppliers] s
ON si.SupplierID = s.SupplierID
and s.SupplierName = 'Graphic Design Institute'
JOIN [Warehouse].[Colors] c
ON si.ColorID = c.ColorID
GO

Просмотреть файл

-- Database created in Azure is called wwiazure
-- This is not a managed instance so you can't execute a USE [database] statement
-- Create a new database called wwiazure (server tier doesn't matter for this demo)
--
-- This table is supposed to mimic the [Warehouse].[StockItems] table in the WWI database
-- in SQL Server. I need to use Latin1_General_100_CI_AS collation for the columns because that
-- is how WWI was created so if I want to UNION data together with WWI I must use that collation
DROP TABLE IF EXISTS [ModernStockItems]
GO
CREATE TABLE [ModernStockItems](
[StockItemID] [int] NOT NULL,
[StockItemName] [nvarchar](100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[SupplierID] [int] NOT NULL,
[ColorID] [int] NULL,
[UnitPackageID] [int] NOT NULL,
[OuterPackageID] [int] NOT NULL,
[Brand] [nvarchar](50) COLLATE Latin1_General_100_CI_AS NULL,
[Size] [nvarchar](20) COLLATE Latin1_General_100_CI_AS NULL,
[LeadTimeDays] [int] NOT NULL,
[QuantityPerOuter] [int] NOT NULL,
[IsChillerStock] [bit] NOT NULL,
[Barcode] [nvarchar](50) COLLATE Latin1_General_100_CI_AS NULL,
[TaxRate] [decimal](18, 3) NOT NULL,
[UnitPrice] [decimal](18, 2) NOT NULL,
[RecommendedRetailPrice] [decimal](18, 2) NULL,
[TypicalWeightPerUnit] [decimal](18, 3) NOT NULL,
--[MarketingComments] [nvarchar](max) NULL, -- Not allowed for an external table
--[InternalComments] [nvarchar](max) NULL, -- Not allowed for an external table
--[Photo] [varbinary](max) NULL, -- Not allowed for an external table
--[CustomFields] [nvarchar](max) NULL, -- Not allowed for an external table
--[Tags] AS (json_query([CustomFields],N'$.Tags')), -- Not allowed for an external table
--[SearchDetails] AS (concat([StockItemName],N' ',[MarketingComments])), -- Not allowed for an external table
[LastEditedBy] [int] NOT NULL,
CONSTRAINT [PK_Warehouse_StockItems] PRIMARY KEY CLUSTERED
(
[StockItemID] ASC
)
)
GO
-- Now insert some data. We don't coordinate with unique keys in WWI on SQL Server,
-- so pick numbers far larger than any in the current StockItems table in WWI (which only has 227 rows)
INSERT INTO ModernStockItems VALUES
(100000,
'Dallas Cowboys Jersey',
5,
4, -- Blue
4, -- Box
4, -- Bob
'Under Armour',
'L',
30,
1,
0,
'123456789',
2.0,
50,
75,
2.0,
1
)
GO

Просмотреть файл

# SQL Server 2019 Polybase example connecting to Azure SQL Database
This demo shows you how to set up an Azure SQL Database external data source and table with examples of how to query the data source and join data to local SQL Server 2019 data. The demo assumes you have installed a SQL Server 2019 Polybase scale out group as documented in the **fundamentals** folder of this overall demo.
## Requirements
1. Create a new database in Azure. For purposes of this demo it doesn't matter whether the database is a Managed Instance or any tier of Azure SQL Database; I called my database **wwiazure**. To make connectivity easier, I created a new virtual network for my Azure SQL server and included the polybase head node server, bwpolybase, in the same virtual network. You can read more about how to do this at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-vnet-service-endpoint-rule-overview
2. Connecting to the Azure SQL server hosting your database, I ran the script **createazuredbtable.sql** to create the table and insert some data. Notice the COLLATE clauses I needed to use to match the collation of the WideWorldImporters sample database (a quick way to check it is shown below). The table created for this demo mimics the **Warehouse.StockItems** table in the WideWorldImporters database.
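Because the UNION examples only work when both sides use the same collation, it is worth confirming what your restored WideWorldImporters database actually uses before creating the Azure table; a quick check:

```sql
-- Returns Latin1_General_100_CI_AS for the standard WideWorldImporters sample
SELECT DATABASEPROPERTYEX(N'WideWorldImporters', 'Collation') AS database_collation;
```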
## Demo Steps
1. On my SQL Server 2019 head node (bwpolybase), I used the **azuredb_external_table.sql** script to create the database scoped credential, external data source, external table, and sample SELECT statements to query the external table and join it with local SQL Server 2019 tables in the WideWorldImporters database. Take note of the COLLATE clauses required to match WWI and the syntax for the external data source that points to the Azure SQL server.
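After running the script, one quick way to confirm the external objects exist before querying them (standard catalog views, shown here only as a sketch):

```sql
USE WideWorldImporters;
GO
-- External data sources defined in this database
SELECT name, location, credential_id
FROM sys.external_data_sources;
GO
-- External tables and the data source each one points at
SELECT t.name AS external_table, t.location, ds.name AS data_source
FROM sys.external_tables AS t
JOIN sys.external_data_sources AS ds
    ON t.data_source_id = ds.data_source_id;
GO
```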

Просмотреть файл

USE [WideWorldImporters]
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo'
GO
/* specify credentials to external data source
* IDENTITY: user name for external source.
* SECRET: password for external source.
*/
DROP DATABASE SCOPED CREDENTIAL CosmosDBCredentials
GO
-- You can get the IDENTITY (user) and secret (password) from the Connection String option in the
-- Azure portal
CREATE DATABASE SCOPED CREDENTIAL CosmosDBCredentials
WITH IDENTITY = 'wwi', Secret = 'hSoxMUeEgNjeeWh4FTz5jmGRlSN4Ko6HoYqiJsbleFzewe86EEXJrvwkAqBgitypJdjUbeJqnTVNBO6NUa0DZQ=='
GO
DROP EXTERNAL DATA SOURCE CosmosDB
GO
-- The LOCATION is built from <HOST>:<PORT> from the Connection String in the Azure Portal
CREATE EXTERNAL DATA SOURCE CosmosDB
WITH (
LOCATION = 'mongodb://wwi.documents.azure.com:10255',
PUSHDOWN = ON,
CREDENTIAL = CosmosDBCredentials
)
GO
DROP SCHEMA cosmosdb
go
CREATE SCHEMA cosmosdb
GO
/* LOCATION: sql server table/view in 'database_name.schema_name.object_name' format
* DATA_SOURCE: the external data source, created above.
*/
DROP EXTERNAL TABLE cosmosdb.Orders
GO
CREATE EXTERNAL TABLE cosmosdb.Orders
(
[_id] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[id] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[OrderID] int NOT NULL,
[SalesPersonPersonID] int NOT NULL,
[CustomerName] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[CustomerContact] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[OrderDate] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NOT NULL,
[CustomerPO] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NULL,
[ExpectedDeliverDate] NVARCHAR(100) COLLATE Latin1_General_100_CI_AS NOT NULL
)
WITH (
LOCATION='WideWorldImporters.Orders',
DATA_SOURCE=CosmosDB
)
GO
CREATE STATISTICS CosmosDBOrderSalesPersonStats ON cosmosdb.Orders ([SalesPersonPersonID]) WITH FULLSCAN
GO
-- Scan the external table just to make sure it works
--
SELECT * FROM cosmosdb.Orders
GO
-- Filter on a specific SalesPersonPersonID
--
SELECT * FROM cosmosdb.Orders WHERE SalesPersonPersonID = 2
GO
-- Find out the name of the salesperson and which customer they worked with
-- to test out the new mobile app experience.
SELECT FullName, o.CustomerName, o.CustomerContact, o.OrderDate
FROM cosmosdb.Orders o
JOIN [Application].[People] p
ON o.SalesPersonPersonID = p.PersonID
GO

Просмотреть файл

# SQL Server 2019 Polybase example connecting to CosmosDB
This demo shows you how to set up a CosmosDB external data source and table with examples of how to query the data source and join data to local SQL Server 2019 data. The demo assumes you have installed a SQL Server 2019 Polybase scale out group as documented in the **fundamentals** folder of this overall demo.
## Requirements
1. Create a new database, collection, and document with CosmosDB in Azure. I used the Azure portal to create a new CosmosDB instance in the same resource group as my polybase head node (bwpolybase), choosing the Azure CosmosDB for Mongo API option for the API selection. I used the Data Explorer tool from the portal to create my database called WideWorldImporters with a collection called Orders. Then I created a new document with field names and values like the following (Note: the _id field was created by Data Explorer and the id field was a default value already provided by the tool):
{
"_id" : ObjectId("5c54aa72dd13c70f445745bf"),
"id" : "1",
"OrderID" : 1,
"SalesPersonPersonID" : 2,
"CustomerName" : "Vandelay Industries",
"CustomerContact" : "Art Vandelay",
"OrderDate" : "2018-05-14",
"CustomerPO" : "20180514",
"ExpectedDeliveryDate" : "2018-05-21"
}
## Demo Steps
1. On my SQL Server 2019 head node (bwpolybase), I used the **cosmosdb_external_table.sql** script to create the database scoped credential, external data source, external table, and sample SELECT statements to query the external table and join it with local SQL Server 2019 tables in the WideWorldImporters database. The **Connection String** option in the portal for the instance shows you the username and password to use. It also has HOST and PORT fields which are used to build the LOCATION syntax for the data source.

Просмотреть файл

USE [master]
GO
-- Enable PB connectivity to a Hadoop HDFS source which in this case is just Azure Blob Storage
--
sp_configure @configname = 'hadoop connectivity', @configvalue = 7;
GO
RECONFIGURE
GO
-- Enable PB export to be able to ingest data into the HDFS target
--
sp_configure 'allow polybase export', 1
GO
RECONFIGURE
GO
-- STOP: SQL Server must be restarted for this to take effect
--
USE [WideWorldImporters]
GO
-- Only run this if you have not already created a master key in the db
--
--CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo'
--GO
-- IDENTITY: any string (this is not used for authentication to Azure storage).
-- SECRET: your Azure storage account key.
DROP DATABASE SCOPED CREDENTIAL AzureStorageCredential
GO
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'user', Secret = 'C5aFpK587sIDFIMSEqXwA08xlhDM34/rfOz2g+sVq/hcKReo6agvT9JZcWGe9NtEyHEypK095WZtDdE/gkKZNQ=='
GO
-- LOCATION: Azure account storage account name and blob container name.
-- CREDENTIAL: The database scoped credential created above.
DROP EXTERNAL DATA SOURCE bwdatalake
GO
CREATE EXTERNAL DATA SOURCE bwdatalake with (
TYPE = HADOOP,
LOCATION ='wasbs://wwi@bwdatalake.blob.core.windows.net',
CREDENTIAL = AzureStorageCredential
)
GO
-- FORMAT TYPE: Type of format in Hadoop (DELIMITEDTEXT, RCFILE, ORC, PARQUET).
CREATE EXTERNAL FILE FORMAT TextFileFormat WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS (FIELD_TERMINATOR ='|',
USE_TYPE_DEFAULT = TRUE))
GO
-- Create a schema called hdfs
--
DROP SCHEMA hdfs
GO
CREATE SCHEMA hdfs
GO
-- LOCATION: path to file or directory that contains the data (relative to HDFS root).
DROP EXTERNAL TABLE [hdfs].[WWI_Order_Reviews]
GO
CREATE EXTERNAL TABLE [hdfs].[WWI_Order_Reviews] (
[OrderID] int NOT NULL,
[CustomerID] int NOT NULL,
[Rating] int NULL,
[Review_Comments] nvarchar(1000) NOT NULL
)
WITH (LOCATION='/WWI/',
DATA_SOURCE = bwdatalake,
FILE_FORMAT = TextFileFormat
)
GO
-- Ingest some data
--
INSERT INTO [hdfs].[WWI_Order_Reviews] VALUES (1, 832, 10, 'I had a great experience with my order')
GO
CREATE STATISTICS StatsforReviews on [hdfs].[WWI_Order_Reviews](OrderID, CustomerID)
GO
-- Now query the external table
--
SELECT * FROM [hdfs].[WWI_Order_Reviews]
GO
-- Let's do a filter to enable pushdown
--
SELECT * FROM [hdfs].[WWI_Order_Reviews]
WHERE OrderID = 1
GO
-- Let's join the review with our order and customer data
--
SELECT o.OrderDate, c.CustomerName, p.FullName as SalesPerson, wor.Rating, wor.Review_Comments
FROM [Sales].[Orders] o
JOIN [hdfs].[WWI_Order_Reviews] wor
ON o.OrderID = wor.OrderID
JOIN [Application].[People] p
ON p.PersonID = o.SalespersonPersonID
JOIN [Sales].[Customers] c
ON c.CustomerID = wor.CustomerID
GO

Просмотреть файл

# SQL Server demo with Polybase for HDFS
## Requirements
Follow all the instructions in the fundamentals folder which is at the same level as the sqldatahub folder.
## Demo Steps
1. Run all the T-SQL commands in **hdfs_external_table.sql**. You will need to edit the appropriate details to point to your Azure storage container including the credential and location for the data source.
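Because **hdfs_external_table.sql** changes two instance-level settings ('hadoop connectivity' and 'allow polybase export') that only take effect after a restart, it can help to confirm their current values first; a minimal check:

```sql
-- value is the configured setting, value_in_use is what the running instance is using;
-- if they differ, the instance still needs a restart
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('hadoop connectivity', 'allow polybase export');
```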

Просмотреть файл

-- Oracle table used for the accounts receivable demo; run this as the gl user (see createuser.sql)
CREATE TABLE gl.accountsreceivable (
arid int primary key,
ardate date,
ardesc varchar2(100),
arref int,
aramt number(10,2)
);

Просмотреть файл

-- Create the gl demo user and grant the privileges needed to build the accounts receivable objects
CREATE USER gl IDENTIFIED BY glpwd DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
GRANT CREATE SESSION TO gl;
GRANT CREATE TABLE TO gl;
GRANT CREATE VIEW TO gl;
GRANT CREATE ANY TRIGGER TO gl;
GRANT CREATE ANY PROCEDURE TO gl;
GRANT CREATE SEQUENCE TO gl;
GRANT CREATE SYNONYM TO gl;

Просмотреть файл

USE [WideWorldImporters]
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'S0me!nfo'
GO
/* specify credentials to external data source
* IDENTITY: user name for external source.
* SECRET: password for external source.
*/
DROP DATABASE SCOPED CREDENTIAL OracleCredentials
GO
CREATE DATABASE SCOPED CREDENTIAL OracleCredentials
WITH IDENTITY = 'gl', Secret = 'glpwd'
GO
/* LOCATION: Location string should be of format '<vendor>://<server>[:<port>]'.
* PUSHDOWN: specify whether computation should be pushed down to the source. ON by default.
* CREDENTIAL: the database scoped credential, created above.
*/
DROP EXTERNAL DATA SOURCE OracleServer
GO
CREATE EXTERNAL DATA SOURCE OracleServer
WITH (
LOCATION = 'oracle://bworacle:49161',
PUSHDOWN = ON,
CREDENTIAL = OracleCredentials
)
GO
DROP SCHEMA oracle
go
CREATE SCHEMA oracle
GO
/* LOCATION: oracle table/view in 'database_name.schema_name.object_name' format
* DATA_SOURCE: the external data source, created above.
*/
DROP EXTERNAL TABLE oracle.accountsreceivable
GO
CREATE EXTERNAL TABLE oracle.accountsreceivable
(
arid int,
ardate date,
ardesc nvarchar(100) COLLATE Latin1_General_100_CI_AS,
arref int,
aramt decimal(10,2)
)
WITH (
LOCATION='[XE].[GL].[ACCOUNTSRECEIVABLE]',
DATA_SOURCE=OracleServer
)
GO
CREATE STATISTICS arrefstats ON oracle.accountsreceivable ([arref]) WITH FULLSCAN
GO
-- Let's scan the table to make sure it works
SELECT * FROM oracle.accountsreceivable
GO
-- Try a simple filter
SELECT * FROM oracle.accountsreceivable
WHERE arref = 336252
GO
-- Join with a local table
--
SELECT ct.*, oa.arid, oa.ardesc
FROM oracle.accountsreceivable oa
JOIN [Sales].[CustomerTransactions] ct
ON oa.arref = ct.CustomerTransactionID
GO

Просмотреть файл

sqlplus64 system/oracle@localhost:49161/xe

Просмотреть файл

# SQL Server 2019 Polybase example connecting to Oracle
This demo shows you how to set up an Oracle external data source and table with examples of how to query the data source and join data to local SQL Server 2019 data. The demo assumes you have installed a SQL Server 2019 Polybase scale out group as documented in the **fundamentals** folder of this overall demo.
## Requirements - Installing and setting up Oracle
SQL Server external tables should work with most current Oracle versions (11g+), so for this demo you can choose any Oracle installation or platform you like. For my demo, I used Oracle Express 11g in a docker container on Red Hat Enterprise Linux in an Azure Virtual Machine. The following are the steps and scripts I used to install an Oracle instance using a docker container and create a table to be used for the demo. I created my RHEL VM in Azure in the same resource group (bwsql2019demos) as the polybase head node running SQL Server 2019 on Windows Server. On that head node server I then added an entry in the hosts file for the RHEL Azure VM private IP address with the name bworacle so I can use this name when creating an external data source.
1. Install Docker CE for CentOS using these instructions at https://docs.docker.com/install/linux/docker-ce/centos/
2. I used these instructions to pull a docker container image for Oracle: https://github.com/wnameless/docker-oracle-xe-11g. This site has instructions for running the container, the instance ID, and the password for SYSTEM.
3. I installed the OCI client and SQLPlus RPM packages from http://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/x86_64/index.htm
4. I had to configure SQLPLUS (sqlplus64 is actually the program to use) by setting the following environment variables:
- ORACLE_SID=xe
- LD_LIBRARY_PATH=/usr/lib/oracle/18.3/client64/lib
- ORACLE_HOME=/usr/lib/oracle/18.3/client64
5. I was then able to connect to ORACLE XE on this machine using syntax like:
sqlplus64 system/oracle@localhost:49161/xe
- 49161 is the port number from running the docker image for XE
- oracle is the password for SYSTEM
I've included a script called **oraconnect.sh** as an example.
6. I wanted a user other than SYSTEM, so I used the script **createuser.sql** to create a new user called gl.
7. Using this new login, I ran the script **createtab.sql** to create a new table in the instance. You can run this script using sqlplus64 like the following:
sqlplus64 gl/glpwd@localhost:49161/xe @createtab.sql
8. I then executed the **insertdata.sql** script, finding a valid CustomerTransactionID from the Sales.CustomerTransactions table in the WideWorldImporters database. This ID becomes the arref field in the accountsreceivable table (a sketch of such a script follows these steps).
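**insertdata.sql** is not reproduced here; as a rough sketch of what it does (the description and amount below are made up, and 336252 is simply the transaction ID the later demo queries filter on - pick an ID that exists in your own Sales.CustomerTransactions table):

```sql
-- On SQL Server: find a valid CustomerTransactionID to line up with
SELECT TOP (1) CustomerTransactionID, TransactionAmount
FROM WideWorldImporters.Sales.CustomerTransactions
ORDER BY CustomerTransactionID;

-- In sqlplus64, as the gl user: insert a matching accounts receivable row
-- (hypothetical values except for the arref of 336252)
INSERT INTO gl.accountsreceivable (arid, ardate, ardesc, arref, aramt)
VALUES (1, SYSDATE, 'Payment due from customer', 336252, 984.50);
COMMIT;
```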
## Demo Steps
1. With everything in hand on my Oracle server, I can now use the **oracle_external_table.sql** script to create the data source and external table.
Note the syntax of the LOCATION string for the external table: I was required to use UPPERCASE even though I didn't create these objects in uppercase using sqlplus64.
LOCATION='[XE].[GL].[ACCOUNTSRECEIVABLE]'
This script also includes examples to query the table and join together with the [Sales].[CustomerTransactions] table.

Просмотреть файл

# SQL Server Data Hub Polybase demos
In this demo I show you how to use SQL Server as a hub for data virtualization. Consider the example company WideWorldImporters (read more at https://docs.microsoft.com/en-us/sql/samples/wide-world-importers-what-is?view=sql-server-2017 )
This demo will cover scenarios where this company has data in other sources but would like to avoid building complex and expensive ETL programs to move the data into SQL Server 2019. In some cases, they are going to migrate their data but they would first like access to the data so applications and reports can seamlessly run while just connecting to SQL Server 2019.
They have identified the following data sources and business scenarios:
**SQL Server 2008R2** - This is the legacy SQL Server which contains a list of Suppliers the company no longer uses but wants to access for historical reasons.
**Azure SQL Database** - A new cloud-based application is prototyping data for StockItems in Azure.
**CosmosDB** - A research team is experimenting with a mobile-based application to take orders from customers using a NoSQL data store like Azure CosmosDB.
**Oracle** - The company's accounting system is in Oracle but will be migrated soon to SQL Server. For now, the company wants to be able to access accounts receivable data which lines up with transactions in the WideWorldImporters database.
**Hadoop** - The company's website ordering system now has a new feature for customers to review the ordering process. The developers find it very convenient to stream a large amount of data for these reviews in the form of files in Hadoop. The system today just streams this into Azure Blob Storage.
**SAPHana** - The company just acquired a new company and would like to start reviewing the customer profiles the newly acquired company brings. The new company has a data warehouse stored in SAP HANA that can be queried.
All of these data sources will become external data sources and tables. For the purposes of this demo, all of the examples will use resources in Azure. The hub for SQL Server 2019 will be based on the Polybase configuration installed and configured in the **Fundamentals** folder.
1. If you have not restored the backup already, download and restore the WideWorldImporters backup from https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers
2. First, run the scenario for SQL2008r2 by going to the **SQL2008r2** folder and following the instructions in the readme.md file.
3.
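The per-source folders contain the full scripts; as a rough illustration of the external table pattern the SQL Server 2008R2 scenario follows (the server name, credential, and column list below are placeholders, not the ones used in the demo folder):

```sql
USE WideWorldImporters;
GO
-- Placeholder credential for the legacy SQL Server 2008R2 instance
CREATE DATABASE SCOPED CREDENTIAL Sql2008R2Credentials
WITH IDENTITY = 'sqluser', SECRET = 'StrongPassword!1';
GO
-- The generic sqlserver:// connector also works against down-level versions such as 2008R2
CREATE EXTERNAL DATA SOURCE SqlServer2008R2
WITH (
    LOCATION = 'sqlserver://bwsql2008r2',
    PUSHDOWN = ON,
    CREDENTIAL = Sql2008R2Credentials
);
GO
-- External table over the retired supplier list (illustrative columns only)
CREATE EXTERNAL TABLE dbo.LegacySuppliers
(
    SupplierID int NOT NULL,
    SupplierName nvarchar(100) COLLATE Latin1_General_100_CI_AS NOT NULL
)
WITH (
    LOCATION = 'LegacySuppliersDB.dbo.Suppliers',
    DATA_SOURCE = SqlServer2008R2
);
GO
```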
