.NET for Apache Spark provides high performance APIs for using [Apache Spark](https://spark.apache.org/) from C# and F#. With these .NET APIs, you can access the most popular Dataframe and SparkSQL aspects of Apache Spark, for working with structured data, and Spark Structured Streaming, for working with streaming data.
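As a minimal sketch of what the DataFrame API looks like from C#, the snippet below counts word occurrences in a text file. It assumes the Microsoft.Spark NuGet package is installed and a Spark installation is available; the app name and `input.txt` path are placeholders.

```csharp
// Minimal word-count sketch using the .NET for Apache Spark DataFrame API.
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

class WordCountSketch
{
    static void Main()
    {
        SparkSession spark = SparkSession
            .Builder()
            .AppName("word_count_sketch")   // hypothetical app name
            .GetOrCreate();

        // "input.txt" is a placeholder path to any text file.
        DataFrame lines = spark.Read().Text("input.txt");

        // Split each line into words, then count occurrences of each word.
        DataFrame counts = lines
            .Select(Explode(Split(lines["value"], " ")).Alias("word"))
            .GroupBy("word")
            .Count();

        counts.Show();

        spark.Stop();
    }
}
```

Submit the compiled app with `spark-submit` as described in the getting-started docs; the same query can be written in F# against the identical API surface.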
.NET for Apache Spark is compliant with .NET Standard - a formal specification of .NET APIs that are common across .NET implementations. This means you can use .NET for Apache Spark anywhere you write .NET code, allowing you to reuse all the knowledge, skills, code, and libraries you already have as a .NET developer.
.NET for Apache Spark runs on Windows, Linux, and macOS using .NET Core, or Windows using .NET Framework. It also runs on all major cloud providers including [Azure HDInsight Spark](deployment/README.md#azure-hdinsight-spark), [Amazon EMR Spark](deployment/README.md#amazon-emr-spark), [AWS](deployment/README.md#databricks) & [Azure](deployment/README.md#databricks) Databricks.
**Note**: We currently have a Spark Project Improvement Proposal JIRA at [SPIP: .NET bindings for Apache Spark](https://issues.apache.org/jira/browse/SPARK-27006) to work with the community towards getting .NET support by default into Apache Spark. We highly encourage you to participate in the discussion.
There are two types of samples/apps in the .NET for Apache Spark repo:
* ![Icon](docs/img/app-type-getting-started.png) Getting Started - .NET for Apache Spark code focused on simple and minimalistic scenarios.
* ![Icon](docs/img/app-type-e2e.png) End-to-End apps/scenarios - Real-world examples of industry-standard benchmarks, use cases, and business applications implemented using .NET for Apache Spark.
We welcome contributions to both categories!
<table>
<tr>
<td width="25%">
<h4><b>Analytics Scenario</b></h4>
</td>
<td width="35%">
<h4><b>Description</b></h4>
</td>
<td>
<h4><b>Scenarios</b></h4>
</td>
</tr>
<tr>
<td width="25%">
<h5>Dataframes and SparkSQL</h5>
</td>
<td width="35%">
Simple code snippets to help you get familiar with the programmability experience of .NET for Apache Spark.
</td>
<td>
<h5>Basic
<a href="examples/Microsoft.Spark.CSharp.Examples/Sql/Basic.cs">C#</a> <a href="examples/Microsoft.Spark.FSharp.Examples/Sql/Basic.fs">F#</a> <a href="#"><img src="docs/img/app-type-getting-started.png" alt="Getting started icon"></a></h5>
</td>
</tr>
<tr>
<td width="25%">
<h5>Structured Streaming</h5>
</td>
<td width="35%">
Code snippets to show you how to utilize Apache Spark's Structured Streaming (<a href="https://spark.apache.org/docs/2.3.1/structured-streaming-programming-guide.html">2.3.1</a>, <a href="https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html">2.3.2</a>, <a href="https://spark.apache.org/docs/2.4.1/structured-streaming-programming-guide.html">2.4.1</a>, <a href="https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html">Latest</a>)
</td>
<td>
<h5>Word Count
<a href="examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredNetworkWordCount.cs">C#</a> <a href="examples/Microsoft.Spark.FSharp.Examples/Sql/Streaming/StructuredNetworkWordCount.fs">F#</a> <a href="#"><img src="docs/img/app-type-getting-started.png" alt="Getting started icon"></a></h5>
<h5>Windowed Word Count <a href="examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredNetworkWordCountWindowed.cs">C#</a> <a href="examples/Microsoft.Spark.FSharp.Examples/Sql/Streaming/StructuredNetworkWordCountWindowed.fs">F#</a> <a href="#"><img src="docs/img/app-type-getting-started.png" alt="Getting started icon"></a></h5>
<h5>Word Count on data from <a href="https://kafka.apache.org/">Kafka</a> <a href="examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredKafkaWordCount.cs">C#</a> <a href="examples/Microsoft.Spark.FSharp.Examples/Sql/Streaming/StructuredKafkaWordCount.fs">F#</a> <a href="#"><img src="docs/img/app-type-getting-started.png" alt="Getting started icon"></a></h5>
</td>
</tr>
<tr>
<td width="25%">
<h5>TPC-H Queries</h5>
</td>
<td width="35%">
Code to show you how to author complex queries using .NET for Apache Spark.
</td>
</tr>
</table>
.NET for Apache Spark builds on, and draws inspiration from, the following projects:

- [Apache Spark](https://spark.apache.org/): Unified Analytics Engine for Big Data, and the underlying backend execution engine for .NET for Apache Spark.
- [Mobius](https://github.com/Microsoft/Mobius): C# and F# language bindings and extensions to Apache Spark, a precursor project to .NET for Apache Spark from the same Microsoft group.
- [PySpark](https://spark.apache.org/docs/latest/api/python/index.html): Python bindings for Apache Spark, one of the implementations .NET for Apache Spark derives inspiration from.
- [SparkR](https://spark.apache.org/docs/latest/sparkr.html): R bindings for Apache Spark, another implementation .NET for Apache Spark derives inspiration from.
- [Apache Arrow](https://arrow.apache.org/): A cross-language development platform for in-memory data. This library provides .NET for Apache Spark with efficient ways to transfer column major data between the JVM and .NET CLR.
- [Pyrolite](https://github.com/irmen/Pyrolite): Java and .NET interface to Python's pickle and Pyro protocols. This library provides .NET for Apache Spark with efficient ways to transfer row major data between the JVM and .NET CLR.
- [Databricks](https://databricks.com/): Unified analytics platform. Many thanks for all their suggestions toward making .NET for Apache Spark run on Azure and AWS Databricks.
The .NET for Apache Spark team welcomes [contributions](docs/contributing.md), both issues and PRs. The first step is to find an [existing issue](https://github.com/dotnet/spark/issues) you want to contribute to; if you cannot find one, [open an issue](https://github.com/dotnet/spark/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+).