Format changes and fixes
More formatting changes and fixes for typos, grammar, etc.
Parent: 404672daa8
Commit: 19ea26e712

@@ -9,7 +9,15 @@
<img style="float: left; margin: 0px 15px 15px 0px;" src="graphics/textbubble.png"> <h2>About this Workshop</h2>

Welcome to this Microsoft solutions workshop on *SQL Server 2019 on OpenShift*.

Red Hat OpenShift Container Platform brings together Docker and Kubernetes, and provides an API to manage these services. OpenShift Container Platform allows you to create and manage containers. From the perspective of SQL Server, OpenShift provides:

- A scalable architecture to deploy containerized applications and data platforms such as SQL Server
- Persistent storage for stateful containers like SQL Server
- Built-in load balancers to abstract application connections to SQL Server
- Built-in high availability for stateful containers like SQL Server
- An ecosystem for Operators to simplify application deployment and manage high availability

In this course you will learn the basics of deployment, connection, query execution, performance, high availability, operators, and Always On Availability Groups in SQL Server 2019 with OpenShift.

@@ -85,6 +93,8 @@ To complete this workshop you will need the following:

- Access to an OpenShift 3.11 cluster
- Access to all the scripts provided for this workshop from the GitHub repo

The Prerequisites module in this workshop provides all the details of the tools and software required to take this workshop.

You might be taking this workshop from an instructor who will provide access to an OpenShift cluster and possibly a client workstation with all the tools and files installed.

<img style="float: left; margin: 0px 15px 15px 0px;" src="./graphics/bulletlist.png">

@@ -112,7 +122,7 @@ This workshop uses OpenShift, SQL Server 2019, Azure Data Studio, SQL Command Li

<img style="float: left; margin: 0px 15px 15px 0px;" src="./graphics/pinmap.png"> <h2>Related Workshops</h2><br>

- [Modernize your Database with SQL Server 2019](https://github.com/Microsoft/sqlworkshops/tree/master/ModernizeYourDatabases2019)
- [The OpenShift Interactive Learning Portal](https://learn.openshift.com/)

<br>

@@ -33,10 +33,10 @@ In order to go through the activities of this workshop you will need the followi

- The OpenShift CLI (oc.exe)
- Azure Data Studio (minimum version 1.5.2). Install from https://docs.microsoft.com/en-us/sql/azure-data-studio/download
- SQL Command Line Tools (sqlcmd). Check the **For Further Study** section for links to install these tools.
- **git** client (only needed if you do not have the latest version of the workshop provided to you by the instructor)
- In addition, the client computer must be able to connect to the Internet to download a sample file (WideWorldImporters-Full.bak), or your instructor must provide it for you

The workshop currently supports a single-node OpenShift cluster but can be run in a multi-node cluster environment. For cluster administrators building a cluster for this workshop, each user will need ~8GB of RAM to run the containers in the workshop.

**Note**: *If you are attending this course in-person, the instructor may provide you with a client environment and full access to an OpenShift cluster, including login credentials.*

@@ -56,7 +56,7 @@ Navigate to your home directory `~` and enter the following command:

**NOTE**: *If you have used `git clone` to pull down the repo of the workshops in the past, run `git pull` in the sqlworkshops directory to get the latest version.*

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Login to your OpenShift cluster, via a web browser, using the URL provided to you for the <b>openshiftConsoleUrl</b>.</p>

**NOTE**: *You may get warnings from the web page saying "This site is not secure". Click Details and then "Go on to the webpage".*

@@ -45,13 +45,13 @@ Proceed to the **Activity** below to learn these deployment steps.

Follow these steps to deploy SQL Server on OpenShift:

**NOTE**: *At any point in this Module, if you need to "start over", use the script **cleanup.sh** to delete the project and go back to the first step of the Activity.*

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Change directories to the <b>sqlworkshops/SQLonOpenShift/sqlonopenshift/01_deploy</b> folder.</p>

Open a shell and use the `cd` command.

**NOTE**: *You must log into the OpenShift cluster first, using the instructions from the Prerequisites.*

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Ensure your scripts are executable</p>

@@ -61,7 +61,9 @@ Run the following command (depending on your Linux shell and client you may need

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Create a new Project</p>

If you are running this workshop as a cluster admin and the instructor did not create a new project, then create a new project called **mssql** with the following command or execute the **step1_create_project.sh** script:

**NOTE**: *This activity assumes a project named **mssql**, so if a cluster administrator creates a project for workshop users it must be called mssql.*

`oc new-project mssql`

@@ -82,6 +84,8 @@ Use the following command or execute the **step2_create_secret.sh** script:

`oc create secret generic mssql --from-literal=SA_PASSWORD="Sql2019isfast"`

**NOTE**: *If you choose a different sa password than the one supplied in this activity, you will need to make changes to future steps which assume the password used in this step.*

When this completes, you should see the following message and be placed back at the shell prompt:

<pre>secret/mssql created</pre>

@@ -90,6 +94,8 @@ When this completes you should see the following message and be placed back at t

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Create a <b>PersistentVolumeClaim</b> to store SQL Server databases and files</p>

A PersistentVolumeClaim allows you to persist SQL Server database files even if the container for SQL Server is stopped or moved by OpenShift.

Use the following command or execute the **step3_storage.sh** script:

`oc apply -f storage.yaml`

@@ -114,7 +120,7 @@ deployment.apps/mssql-deployment created

service/mssql-service created
</pre>

Deployment is an asynchronous operation. Completion of this command does not mean the deployment is complete.

You have now submitted a deployment, which is a logical collection of objects including a *pod*, a *container*, and a **LoadBalancer** service. OpenShift will schedule a SQL Server container in a *pod* on a *node* in the cluster.

@@ -130,6 +136,8 @@ When the value of **AVAILABLE** becomes **1**, the deployment was successful and

**NOTE**: *Depending on the load of your cluster and whether the container image of SQL Server is already present, the deployment may take several minutes.*

Take a minute to browse the **sqldeployment.yaml** file to see key pieces of how SQL Server was deployed, including details of the container image, arguments, the label used to "tag" the deployment, which **PersistentVolumeClaim** to use (from the previous step), and the **LoadBalancer** service that is attached to this pod.

You can run the following command to check on the status of the pod and LoadBalancer service:

`oc get all`

@@ -26,7 +26,7 @@ You'll cover the following topics in this Module:

SQL Server provides several tools to connect and execute queries. Applications can use a variety of languages including C++, .NET, node.js, and Java. To see examples of how to write applications that connect to SQL Server, visit https://aka.ms/sqldev.

The simplest method to connect to SQL Server deployed on OpenShift is to use the command line tool **sqlcmd**, which is available on the Windows, Linux, and macOS operating systems. The *Prerequisites* module for this workshop provides instructions for installing the SQL Command Line tools, including **sqlcmd**. In some deliveries of this workshop, **sqlcmd** may already be installed.

To connect to SQL Server, you need:

@@ -94,7 +94,7 @@ Follow these steps to restore a database backup to SQL Server deployed on OpenSh

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Locate the Database Backup</p>

If your workshop does not already include a copy of the backup of the WideWorldImporters database (a file called **WideWorldImporters-Full.bak**), execute the script **getwwi.sh** to download the backup. This script assumes connectivity to the Internet.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Copy the Database Backup to the SQL Server 2019 Container</p>

@@ -123,13 +123,13 @@ Execute the following commands using the sqlcmd tool or execute the script **ste

In this example, you used the `-i` parameter for **sqlcmd** to execute a *script* with the `RESTORE DATABASE` command. You can examine the contents of the **restorewwi.sql** T-SQL script to see the example syntax using `cat restorewwi.sql` from the shell.
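
For reference, the following is only an illustrative sketch of the kind of T-SQL a script like **restorewwi.sql** contains; the logical file names and target paths here are assumptions, so use `cat restorewwi.sql` to see the exact syntax used in the workshop:

```sql
-- Illustrative sketch only: logical file names and paths are assumptions,
-- not the exact contents of restorewwi.sql.
RESTORE DATABASE WideWorldImporters
FROM DISK = '/var/opt/mssql/data/WideWorldImporters-Full.bak'
WITH MOVE 'WWI_Primary' TO '/var/opt/mssql/data/WideWorldImporters.mdf',
     MOVE 'WWI_UserData' TO '/var/opt/mssql/data/WideWorldImporters_UserData.ndf',
     MOVE 'WWI_Log' TO '/var/opt/mssql/data/WideWorldImporters.ldf',
     MOVE 'WWI_InMemory_Data_1' TO '/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1';
GO
```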

The WideWorldImporters backup you downloaded was created on SQL Server 2016 on Windows. One of the great stories for SQL Server is compatibility across operating systems: database backups are interoperable between SQL Server on Windows and Linux. SQL Server 2019 will automatically detect the older version and upgrade the database, which is why the RESTORE command can take a few minutes to execute. When the command completes, the output at the shell prompt will scroll across several lines but end with something similar to the following:

<pre>
Database 'WideWorldImporters' running the upgrade step from version 895 to version 896<br>Database 'WideWorldImporters' running the upgrade step from version 896 to version 897<br>RESTORE DATABASE successfully processed 58455 pages in 30.797 seconds (14.828 MB/sec).
</pre>

Notice that the end of the restore output displays how many database pages were restored (SQL Server stores data in 8KB pages) and how long the restore took. The database has now been restored, brought online, and is available to run queries.

<p style="border-bottom: 1px solid lightgrey;"></p>

@@ -139,7 +139,7 @@ The T-SQL language allows all types of queries to be executed against your data

If you are given a database backup to restore, one of the first things you want to do is explore what is in the database. SQL Server provides a rich set of metadata about the database through *catalog views*. This allows you to find out what tables, columns, and other objects exist in a database.
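
For example, a quick way to explore a restored database is to query catalog views such as **sys.tables** and **sys.columns**. The following illustrative query (not one of the workshop scripts) could be run with **sqlcmd** or Azure Data Studio against the WideWorldImporters database:

```sql
-- Illustrative example: list user tables and their column counts
-- in the restored WideWorldImporters database.
USE WideWorldImporters;
GO
SELECT s.name AS schema_name,
       t.name AS table_name,
       COUNT(c.column_id) AS column_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id
GROUP BY s.name, t.name
ORDER BY s.name, t.name;
GO
```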

In addition, to find out what data exists within tables in the database, you will use the most frequently used T-SQL command, **SELECT**, against tables you have permissions to query.

SQL Server also provides a robust set of *dynamic management views* (DMV) through SELECT statements to query the state of the database engine.
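
As a small illustration (not one of the workshop scripts), the following queries use common DMVs to look at active sessions, running requests, and basic server resources, similar to the output shown later in this Module:

```sql
-- Illustrative DMV queries: current sessions, active requests, and CPU/memory info.
SELECT session_id, login_time, host_name, program_name, reads, writes, cpu_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
GO
SELECT session_id, start_time, status, command
FROM sys.dm_exec_requests;
GO
SELECT cpu_count, committed_kb
FROM sys.dm_os_sys_info;
GO
```
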
@@ -252,7 +252,7 @@ Abhoy Prabhupda (423) 555-0100 abhoy@tailspintoys.com

(10 rows affected)
</pre>

In this example, you used the `TOP 10` option of a `SELECT` statement to retrieve only the first 10 rows in the People table, and the `ORDER BY` clause to sort the results by name (ascending by default).
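
A query of the shape described above looks roughly like the following sketch; the exact column list in the workshop script may differ:

```sql
-- Illustrative sketch of the TOP 10 query pattern; column names are assumptions.
SELECT TOP 10 FullName, PhoneNumber, EmailAddress
FROM Application.People
ORDER BY FullName;
GO
```
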
These results contain privacy information. You can review a feature of SQL Server called Dynamic Data Masking to mask privacy information from application users. See more at [https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking](https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking).
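
As an illustration of that feature (not part of the workshop scripts), a column such as an email address could be masked so that users without the UNMASK permission see obfuscated values; the table and column names below are assumptions:

```sql
-- Illustrative only: mask an email column so non-privileged users see values like 'aXXX@XXXX.com'.
ALTER TABLE Application.People
ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');
GO
```
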
@@ -283,15 +283,18 @@ GO

The output should look similar to this:

<pre>
session_id login_time              host_name       program_name  reads  writes  cpu_time
---------- ----------------------- --------------- ------------- ------ ------- --------
51         2019-04-12 15:04:50.513 mssql-deploymen SQLServerCEIP 0      0       50
52         2019-04-12 15:08:21.147 troyryanwin10   SQLCMD        0      0       0

(2 rows affected)

session_id start_time              status  command
---------- ----------------------- ------- -------
52         2019-04-12 15:08:21.317 running SELECT

(1 rows affected)

cpu_count committed_kb
--------- ------------
2         405008

@@ -23,7 +23,7 @@ You'll cover the following topics in this Module:

<h2><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/pencil2.png"><a name="3-0">3.0 SQL Server Intelligent Query Processing</a></h2>

In this module you will learn about the Intelligent Query Processing capabilities in SQL Server 2019. You will use various Activities to understand these concepts using the SQL Server container you deployed in OpenShift. This demonstrates the compatibility of the database engine in SQL Server 2019 running on Windows, Linux, and containers.

Intelligent Query processing is a suite of features built into the query processor for SQL Server 2019 allowing developers and data professionals to accelerate database performance automatically without any application changes. T-SQL queries simply need to be run with a database compatibility level of 150 to take advantage of these enhancements.
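
For example (a minimal sketch, not one of the workshop scripts), you can check and set the compatibility level of the WideWorldImporters database like this:

```sql
-- Check the current compatibility level of the database.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'WideWorldImporters';
GO
-- Set compatibility level 150 so Intelligent Query Processing features are enabled.
ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 150;
GO
```
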
@@ -83,7 +83,7 @@ The first time you launch Azure Data Studio, you may see the following choices.

<p><img style="margin: 0px 30px 15px 0px;" src="../graphics/ADS_initial_prompts.jpg" width="250" height="150">

You will now be presented with the following screen to enter your connection details for SQL Server. For Server, put in the values for **EXTERNAL IP, PORT** from step 1 above. Change the **Authentication** type to **SQL Login**, then put in a user name of **sa** with the password you used for the **secret** in Module 01 when you deployed SQL Server (the default password for this workshop is Sql2019isfast). Click the checkbox for **Remember Password** so you will not have to enter this information again for future connections.

Now click the **Connect** button to connect. An example of a connection looks similar to this graphic:

@@ -138,7 +138,7 @@ This procedure uses a table variable populated from a user table and then joins

**NOTE**: *In this example the TOP 1 T-SQL syntax is used so that the procedure produces only 1 row. This is done only to make the output easier to read in this workshop and demo, since the procedure will be executed multiple times. Normal execution of this procedure may not include `TOP`.*

Now click the **Run** button to execute the script. You will be prompted to pick the connection on which to execute the script. Select the connection you created in previous steps of this Activity.

When you execute this script, the results should look similar to this graphic:

@@ -146,11 +146,11 @@ When you execute this script the results should look similar to this graphic:

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Change the Compatibility Level to see the previous performance</p>

You observe that this stored procedure executes quickly with a single execution, but over several iterations the total duration increases to over 20 seconds, which is not acceptable for the needs of the WideWorldImporters company.

Open the script **repro130.sql** by using the **File | Open File** option of Azure Data Studio. The file can be found in the **sqlworkshops/SQLonOpenShift/sqlonopenshift/03_performance/iqp** folder.

The script looks similar to the following:

```sql
@@ -171,7 +171,7 @@ GO

The script ensures the database is in a compatibility mode lower than 150, so Intelligent Query Processing will NOT be enabled. The script also turns off rowcount messages returned to the client, to reduce network traffic for this test. Then the script executes the stored procedure. Notice the syntax **GO 25**. This is a client tool directive that runs the batch 25 times (and avoids having to construct a loop).
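
The overall pattern described above looks roughly like the following sketch; the stored procedure name here is a placeholder assumption, so refer to **repro130.sql** for the actual procedure used in the workshop:

```sql
-- Sketch of the repro130.sql pattern; the procedure name is a placeholder.
ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 130;
GO
USE WideWorldImporters;
GO
SET NOCOUNT ON;
GO
EXEC [dbo].[MyTableVariableProc];  -- placeholder name for the workshop's stored procedure
GO 25
SET NOCOUNT OFF;
GO
```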

Click the **Run** button to execute the script and observe the results. Choose the connection by clicking on the **IP, PORT** you created for the SQL Server container and click **Connect**.

While the query is executing, the bottom status bar shows the current elapsed execution time, the server connection details, a status of **Executing Query**, and the number of rows being returned to the client.

@@ -189,7 +189,7 @@ You can scroll in the **RESULTS** or **MESSAGES** pane. If you scroll down to th

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Change the Compatibility Level to see the performance improvements in SQL Server 2019</p>

You will now run the same code, but with database compatibility level 150, which enables Intelligent Query Processing.

Open the script **repro150.sql** by using the **File | Open File** option of Azure Data Studio. The file can be found in the **sqlworkshops/SQLonOpenShift/sqlonopenshift/03_performance/iqp** folder.

@@ -210,13 +210,13 @@ SET NOCOUNT OFF

GO
```

Notice this is the same script, except database compatibility level 150 is used. This time, the query processor in SQL Server will enable table variable deferred compilation, which allows for a possibly improved query plan choice.

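As a brief illustration of what table variable deferred compilation addresses (this is not the workshop's procedure, and the table names are assumptions): before compatibility level 150, statements referencing a table variable were compiled assuming it held one row; under level 150, compilation is deferred until the variable is populated, so the optimizer sees realistic row counts.

```sql
-- Illustrative pattern only; table and column names are assumptions.
DECLARE @orders TABLE (OrderID int PRIMARY KEY);

INSERT INTO @orders (OrderID)
SELECT OrderID FROM Sales.Orders;   -- may load many thousands of rows

-- Under compatibility level 150, the join below is compiled after @orders is populated,
-- so the optimizer can pick a join strategy suited to the real row count.
SELECT TOP 1 o.OrderID, ol.StockItemID
FROM @orders AS o
JOIN Sales.OrderLines AS ol ON ol.OrderID = o.OrderID
ORDER BY o.OrderID;
```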

Run the script and choose the SQL Server container connection. Go through the same process as in the previous steps to analyze the results. The script should execute far faster than before. Your speeds can vary, but should be 15 seconds or less.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Post-Workshop example</p>

As a **post-workshop exercise** you can go through this activity in a Jupyter Notebook in Azure Data Studio, which contains a SQL kernel. Use the **File** menu of Azure Data Studio (**Open File** option) to open the **03_IQP_Table_Variable.ipynb** Jupyter Notebook in the **sqlworkshops/SQLOnOpenShift/sqlonopenshift/03_performance/iqp** folder. Follow the steps provided in the notebook to complete the activity.

The SQL notebook experience looks similar to the following:

@@ -305,7 +305,7 @@ The following output is an example for the stored procedure execution with `comp

In this example SQL Server has recognized that the table variable has more than 1 row and has chosen a different join method called a hash join. Furthermore, it has injected into the plan the concept of an **Adaptive Join**, so that if the rowset in the table variable is small enough it can dynamically and automatically choose a **Nested Loops Join** instead. What is not obvious from this diagram (but which you can see from the *properties* detail in the XML plan) is that the query processor is also using a third concept called **batch mode processing on rowstore** (a rowstore is a normal table as opposed to a columnstore).

As a **post-workshop exercise** you can go through this activity in a SQL notebook. Use the **File | Open** menu in Azure Data Studio to open the **03_Query_Store.ipynb** notebook in the **sqlworkshops/SQLOnOpenShift/sqlonopenshift/03_performance/iqp** folder. Follow the steps provided in the notebook to complete the activity.

In this activity you have seen how to use the Query Store for performance insights, including the ability to see the different query plans captured for the same query text, such as plans that benefit from Intelligent Query Processing.
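
For instance (an illustrative query, not one of the workshop scripts), the Query Store catalog views can be queried directly to compare the plans and average duration captured for each query:

```sql
-- Illustrative Query Store query: captured queries, their plans, and average duration.
SELECT q.query_id,
       qt.query_sql_text,
       p.plan_id,
       rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;
GO
```
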
@@ -81,7 +81,7 @@ Run the following command (depending on your Linux shell and client you may need

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Show the Pods for SQL Server</p>

One of the simplest methods of high availability with OpenShift is at the container level. Since SQL Server is the main program in the container, if SQL Server is shut down or crashes, OpenShift will automatically restart the container. In most cases, the container will be started in the same pod on the same node, but OpenShift may schedule the pod on the best node available. Run the following command to see the status of the current pod deployed for the project you created in Module 01:

`oc get pods -o wide`

@@ -301,8 +301,8 @@ Developer Edition (64-bit) on Linux (Red Hat Enterprise Linux Server 7.6 (Maipo)

If you are going to proceed and complete Module 05, you need to clean up resources. This can be done by running the following commands. You can also execute the script **step6_cleanup.sh**:

`oc delete project mssql`<br>
`oc project default`<br>

When this completes, you should see the following output and be placed back at the shell prompt:

@@ -58,7 +58,7 @@ Follow these steps to deploy an Always On Availability Group on OpenShift using

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Ensure your scripts are executable</p>

Run the following command (depending on your Linux shell and client, you may need to preface this with `sudo`):

`chmod u+x *.sh`

@@ -151,7 +151,7 @@ service/mssql3 LoadBalancer 172.30.6.212 23.96.53.245 1433:30611/TCP

Run the `oc get all` command until the pods and LoadBalancer services are in this state.

**NOTE**: *You will see some pods whose names start with mssql-initialize. You can ignore these. They are used to deploy the SQL Server Availability Group but may not be needed in the final design of the operator for SQL Server 2019.*

In addition, notice that there are three objects in the `oc get all` output:

@@ -203,7 +203,7 @@ Proceed to the activity to see how this works.

In this activity you will learn how to connect to replicas in an availability group deployed in OpenShift, add databases, add data, and query data.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Check Replica Status</p>

T-SQL provides capabilities so you can see which SQL Server instance is currently the primary replica and which are the secondary replicas.

@@ -218,6 +218,7 @@ ON hars.replica_id = ar.replica_id

GO
```

Open a shell prompt and change directories to the <b>sqlworkshops/SQLonOpenShift/sqlonopenshift/05_operator</b> folder.

Run the following command or execute the script **step6_check_replicas.sh** to see the replica status of the Availability Group deployed:

@@ -235,7 +236,7 @@ mssql2-0 SECONDARY NULL

mssql3-0 SECONDARY NULL
</pre>

It is possible that the replica_server_name for your deployment is any of these replicas. In most cases, it will be **mssql1-0**. You will use this same command later to see the status of the Availability Group after a failover.

Now it is time to create a new database, back up the database, and then add the database to the Availability Group. Examine the contents of the script **setupag.sql** to see the T-SQL commands. Run the following command or execute the script **step7_setupag.sh**:
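
For orientation, the T-SQL in a script like **setupag.sql** typically follows the pattern sketched below. The database name **testag** comes from the output shown later in this Activity, while the Availability Group name and backup path are assumptions, so check the actual script for the exact commands:

```sql
-- Illustrative sketch of the setupag.sql pattern; the AG name and backup path are assumptions.
CREATE DATABASE testag;
GO
-- A full backup is required before a database can join an Availability Group.
BACKUP DATABASE testag TO DISK = '/var/opt/mssql/data/testag.bak';
GO
-- Add the database to the Availability Group; direct seeding copies it to the secondaries.
ALTER AVAILABILITY GROUP [ag1] ADD DATABASE testag;
GO
```
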
@@ -253,7 +254,7 @@ Processed 2 pages for database 'testag', file 'testag_log' on file 1.

BACKUP DATABASE successfully processed 330 pages in 1.239 seconds (2.077 MB/sec).
</pre>

Direct seeding should happen almost instantly because there is no user data in the database.

<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png">Create Data for Replication</p>

@@ -322,7 +323,7 @@ col1 col2

(1 rows affected)
</pre>

Now that you have successfully created a database, added it to the Availability Group, and synchronized data, you can proceed to the next section to test how a failover works.

<p style="border-bottom: 1px solid lightgrey;"></p>