# Accelerate real-time big data analytics with Spark connector for Azure SQL Database and SQL Server

The Spark connector for Azure SQL Database and SQL Server enables SQL databases, including Azure SQL Database and SQL Server, to act as an input data source or output data sink for Spark jobs. It allows you to use real-time transactional data in big data analytics and to persist results for ad hoc queries or reporting. Compared to the built-in JDBC connector, this connector can bulk insert data into SQL databases, outperforming row-by-row insertion by 10x to 20x. The Spark connector for Azure SQL Database and SQL Server also supports Azure Active Directory (AAD) authentication, so you can connect securely to your Azure SQL database from Azure Databricks using your AAD account. Because it provides interfaces similar to those of the built-in JDBC connector, it is easy to migrate your existing Spark jobs to this new connector.
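
To give a sense of how similar the two interfaces are, here is a hypothetical side-by-side sketch of the same read written against the built-in JDBC connector and against this connector; the server, database, table, and credential values are placeholders, and the `Config` options mirror the read example later in this article.

```scala
// Reading with the built-in JDBC connector (an existing job)
val jdbcDF = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:sqlserver://mysqlserver.database.windows.net;databaseName=MyDatabase")
  .option("dbtable", "dbo.Clients")
  .option("user", "username")
  .option("password", "*********")
  .load()

// The same read after migrating to the Spark connector for Azure SQL Database and SQL Server
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients",
  "user"         -> "username",
  "password"     -> "*********"
))
val connectorDF = sqlContext.read.sqlDB(config)
```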
## Download
To get started, download the Spark to SQL DB connector from the [azure-sqldb-spark repository](https://github.com/Azure/azure-sqldb-spark) on GitHub.

| Component | Versions Supported |
| --- | --- |
| Scala | 2.10 or later |
| Microsoft JDBC Driver for SQL Server | 6.2 or later |
| Microsoft SQL Server | SQL Server 2008 or later |
| Azure SQL Database | Supported |

The Spark connector for Azure SQL Database and SQL Server uses the Microsoft JDBC Driver for SQL Server to move data between Spark worker nodes and SQL databases. The data flow is as follows (a short end-to-end sketch appears after the list):

1. The Spark master node connects to SQL Server or Azure SQL Database and loads data from a specific table or by using a specific SQL query.
2. The Spark master node distributes the data to worker nodes for transformation.
3. Worker nodes connect to SQL Server or Azure SQL Database and write the data to the database. Users can choose row-by-row insertion or bulk insert.
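
A rough end-to-end sketch of this flow, assuming a `sqlContext` is already available; the connection values, the table names, and the `IsActive` filter column are illustrative placeholders, not part of the connector's API:

```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SaveMode

// Step 1: the master node loads data from a source table (placeholder connection values).
val readConfig = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients",
  "user"         -> "username",
  "password"     -> "*********"
))
val clients = sqlContext.read.sqlDB(readConfig)

// Step 2: the data is transformed on the worker nodes.
val activeClients = clients.filter("IsActive = 1")

// Step 3: the worker nodes write the result back to the database.
val writeConfig = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.ActiveClients",
  "user"         -> "username",
  "password"     -> "*********"
))
activeClients.write.mode(SaveMode.Append).sqlDB(writeConfig)
```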
### Build the Spark to SQL DB connector
Currently, the connector project uses Maven. To build the connector without dependencies, you can run:
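With a standard Maven setup, this is typically `mvn clean package` from the repository root; the exact goal is an assumption here, so check the repository's build instructions if your setup differs.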

You can also download the latest versions of the JAR from the release folder. Then include the SQL DB Spark JAR in your project.
## Connect Spark to SQL DB using the connector
You can connect to Azure SQL Database or SQL Server from Spark jobs to read or write data. You can also run a DML or DDL query in an Azure SQL database or SQL Server database.
### Read data from Azure SQL Database or SQL Server
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Connection options; the server, database, table, and credential values are placeholders.
val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients",
  "user"         -> "username",
  "password"     -> "*********"
))

val collection = sqlContext.read.sqlDB(config)
collection.show()
```
### Read data from Azure SQL Database or SQL Server with a specified SQL query
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Connection options with a custom query; all values below are placeholders.
val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "queryCustom"  -> "SELECT TOP 100 * FROM dbo.Clients WHERE PostalCode = 98074",
  "user"         -> "username",
  "password"     -> "*********"
))

val collection = sqlContext.read.sqlDB(config)
collection.show()
```
### Write data to Azure SQL Database or SQL Server
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import org.apache.spark.sql.SaveMode

// Assumes `collection` is an existing DataFrame to be written.
// Connection options; all values below are placeholders.
val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "dbTable"      -> "dbo.Clients",
  "user"         -> "username",
  "password"     -> "*********"
))

collection.write.mode(SaveMode.Append).sqlDB(config)
```
### Run DML or DDL query in Azure SQL Database or SQL Server
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.query._

// The DML statement and connection values below are placeholders.
val query = "UPDATE dbo.Clients SET IsActive = 0 WHERE LastLogin < '2018-01-01'"

val config = Config(Map(
  "url"          -> "mysqlserver.database.windows.net",
  "databaseName" -> "MyDatabase",
  "user"         -> "username",
  "password"     -> "*********",
  "queryCustom"  -> query
))

sqlContext.sqlDBQuery(config)
```
## Connect Spark to Azure SQL Database using AAD authentication

You can connect to Azure SQL Database using Azure Active Directory (AAD) authentication. Use AAD authentication to centrally manage identities of database users and as an alternative to SQL Server authentication.
### Connecting using ActiveDirectoryPassword Authentication Mode
#### Setup Requirement

If you are using the ActiveDirectoryPassword authentication mode, you need to download [azure-activedirectory-library-for-java](https://github.com/AzureAD/azure-activedirectory-library-for-java) and its dependencies, and include them in the Java build path.
```scala
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Connection options for the ActiveDirectoryPassword authentication mode;
// all values below are placeholders.
val config = Config(Map(
  "url"            -> "mysqlserver.database.windows.net",
  "databaseName"   -> "MyDatabase",
  "user"           -> "username@mydomain.onmicrosoft.com",
  "password"       -> "*********",
  "authentication" -> "ActiveDirectoryPassword",
  "encrypt"        -> "true"
))

val collection = sqlContext.read.sqlDB(config)
collection.show()
```
## Write data to Azure SQL Database or SQL Server using Bulk Insert

The traditional JDBC connector writes data into Azure SQL Database or SQL Server using row-by-row insertion. You can use the Spark to SQL DB connector to write data to a SQL database using bulk insert, which significantly improves write performance when loading large data sets or loading data into tables that use a columnstore index.
```scala
import com.microsoft.azure.sqldb.spark.bulkcopy.BulkCopyMetadata
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Assumes `df` is an existing DataFrame to be bulk copied.
// Column metadata for the destination table; names, types, and sizes are placeholders.
var bulkCopyMetadata = new BulkCopyMetadata
bulkCopyMetadata.addColumnMetadata(1, "Title", java.sql.Types.NVARCHAR, 128, 0)
bulkCopyMetadata.addColumnMetadata(2, "FirstName", java.sql.Types.NVARCHAR, 50, 0)
bulkCopyMetadata.addColumnMetadata(3, "LastName", java.sql.Types.NVARCHAR, 50, 0)

// Connection and bulk copy options; all values below are placeholders.
val bulkCopyConfig = Config(Map(
  "url"               -> "mysqlserver.database.windows.net",
  "databaseName"      -> "MyDatabase",
  "dbTable"           -> "dbo.Clients",
  "user"              -> "username",
  "password"          -> "*********",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

df.bulkCopyToSqlDB(bulkCopyConfig, bulkCopyMetadata)
```
## Next steps

If you haven't already, download the Spark connector for Azure SQL Database and SQL Server from the [azure-sqldb-spark GitHub repository](https://github.com/Azure/azure-sqldb-spark) and explore the additional resources in the repo:
- [Sample Azure Databricks notebooks](https://github.com/Azure/azure-sqldb-spark/tree/master/samples/notebooks)
- [Sample scripts (Scala)](https://github.com/Azure/azure-sqldb-spark/tree/master/samples/scripts)