A Spark connector for the Azure Common Data Model

OBSOLETE

A new Spark CDM Connector is available at https://github.com/Azure/spark-cdm-connector. No further updates or tracking of issues or requests will take place in this repo. Many thanks to those who tried out this original connector and provided feedback. Please try out the new connector and let us know how we can improve it.

spark-cdm

A prototype Spark data source for the Azure "Common Data Model". Reading and writing are supported, but spark-cdm is definitely a work in progress. Please file issues for any bugs that you find. For more information about the Azure Common Data Model, see the official documentation.

Example

  1. Create an AAD app and give the service principal the "Storage Blob Data Contributor" role on the ADLSgen2 storage account used for your CDM data.
  2. Install the JAR from the release directory of this repo on your Spark cluster.
  3. See the code below for basic read and write examples.
val df = spark.read.format("com.microsoft.cdm")
                .option("cdmModel", "https://YOURADLSACCOUNT.dfs.core.windows.net/FILESYSTEM/path/to/model.json")
                .option("entity", "Query")
                .option("appId", "YOURAPPID")
                .option("appKey", "YOURAPPKEY")
                .option("tenantId", "YOURTENANTID")
                .load()

// Do whatever spark transformations you want
val transformedDf = df.filter(...)

// entity: name of the entity to write.
// cdmFolder: the output directory; a model.json file is written at its root. Note: if a model.json already exists in this directory, the entity is appended to it.
// cdmModelName: name of the model to write. Ignored in the append case for now.
transformedDf.write.format("com.microsoft.cdm")
            .option("entity", "FilteredQueries")
            .option("appId", "YOURAPPID")
            .option("appKey", "YOURAPPKEY")
            .option("tenantId", "YOURTENANTID")
            .option("cdmFolder", "https://YOURADLSACCOUNT.dfs.core.windows.net/FILESYSTEM/path/to/output/directory/")
            .option("cdmModelName", "MyData")
            .save()
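
As a quick sanity check, the written entity can be read back with the same data source. This is a sketch, not part of the original examples: it assumes the write above succeeded and that the resulting model.json sits at the root of the cdmFolder used in the write (the URL below is constructed on that assumption).

```scala
// Read the entity back from the model.json created by the write above.
// The cdmModel URL is an assumption: cdmFolder root + "model.json".
val roundTripDf = spark.read.format("com.microsoft.cdm")
  .option("cdmModel", "https://YOURADLSACCOUNT.dfs.core.windows.net/FILESYSTEM/path/to/output/directory/model.json")
  .option("entity", "FilteredQueries")
  .option("appId", "YOURAPPID")
  .option("appKey", "YOURAPPKEY")
  .option("tenantId", "YOURTENANTID")
  .load()

// Inspect the inferred schema and row count to confirm the round trip.
roundTripDf.printSchema()
```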

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.