# SparkCLR
SparkCLR (pronounced Sparkler) adds C# language binding to Apache Spark, enabling the implementation of Spark driver code and data processing operations in C#.
For example, the word count sample in Apache Spark can be implemented in C# as follows:
```csharp
var lines = sparkContext.TextFile(@"hdfs://path/to/input.txt");
var words = lines.FlatMap(s => s.Split(new[] { " " }, StringSplitOptions.None));
var wordCounts = words.Map(w => new KeyValuePair<string, int>(w.Trim(), 1))
                      .ReduceByKey((x, y) => x + y);
var wordCountCollection = wordCounts.Collect();
wordCounts.SaveAsTextFile(@"hdfs://path/to/wordcount.txt");
```
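The snippet above assumes a `sparkContext` has already been created in the C# driver. A minimal sketch of that setup is shown below, assuming the SparkCLR `SparkConf`/`SparkContext` types in `Microsoft.Spark.CSharp.Core` mirror their Scala counterparts; the namespace and member names here are assumptions, not a definitive API reference:

```csharp
// Hedged sketch of the driver-side setup assumed by the word count sample.
// Type, namespace and method names are assumed to mirror the Scala API.
using Microsoft.Spark.CSharp.Core;

class WordCountDriver
{
    static void Main(string[] args)
    {
        // Create the SparkContext used by the word count operations above
        var conf = new SparkConf().SetAppName("SparkCLRWordCount");
        var sparkContext = new SparkContext(conf);

        // ... word count operations as shown above ...

        sparkContext.Stop();
    }
}
```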
A simple DataFrame application using TempTable may look like the following:
```csharp
var reqDataFrame = sqlContext.TextFile(@"hdfs://path/to/requests.csv");
var metricDataFrame = sqlContext.TextFile(@"hdfs://path/to/metrics.csv");
reqDataFrame.RegisterTempTable("requests");
metricDataFrame.RegisterTempTable("metrics");
// C0 - guid in requests DataFrame, C3 - guid in metrics DataFrame
var joinDataFrame = sqlContext.Sql(
    "SELECT joinedtable.datacenter" +
    ", MAX(joinedtable.latency) maxlatency" +
    ", AVG(joinedtable.latency) avglatency " +
    "FROM (" +
        "SELECT a.C1 as datacenter, b.C6 as latency " +
        "FROM requests a JOIN metrics b ON a.C0 = b.C3) joinedtable " +
    "GROUP BY datacenter");
joinDataFrame.ShowSchema();
joinDataFrame.Show();
```
A simple DataFrame application using DataFrame DSL may look like the following:
```csharp
// C0 - guid, C1 - datacenter
var reqDataFrame = sqlContext.TextFile(@"hdfs://path/to/requests.csv")
                             .Select("C0", "C1");
// C3 - guid, C6 - latency
var metricDataFrame = sqlContext.TextFile(@"hdfs://path/to/metrics.csv", ",", false, true)
                                .Select("C3", "C6"); // override delimiter, hasHeader & inferSchema
var joinDataFrame = reqDataFrame.Join(metricDataFrame, reqDataFrame["C0"] == metricDataFrame["C3"])
                                .GroupBy("C1");
var maxLatencyByDcDataFrame = joinDataFrame.Agg(new Dictionary<string, string> { { "C6", "max" } });
maxLatencyByDcDataFrame.ShowSchema();
maxLatencyByDcDataFrame.Show();
```
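Both DataFrame samples assume a `sqlContext` built on top of the `SparkContext`. A minimal sketch of that setup, assuming a `SqlContext` type in `Microsoft.Spark.CSharp.Sql` that wraps a `SparkContext` (an assumption modeled on the Scala `SQLContext` API that SparkCLR mirrors):

```csharp
// Hedged sketch: constructing the sqlContext used by the DataFrame samples.
// The namespace and constructor signature are assumptions based on the
// Scala SQLContext API; adjust to the actual SparkCLR types as needed.
using Microsoft.Spark.CSharp.Core;
using Microsoft.Spark.CSharp.Sql;

var sparkContext = new SparkContext(new SparkConf().SetAppName("SparkCLRDataFrameSample"));
var sqlContext = new SqlContext(sparkContext);
```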
Refer to the SparkCLR\csharp\Samples directory for complete samples.
## Documents

Refer to the docs folder.
## Build Status

| Ubuntu 14.04.3 LTS | Windows |
| --- | --- |
| (build badge) | (build badge) |
## Building, Running and Debugging SparkCLR

(Note: Tested only with Spark 1.4.1)

Refer to windows-instructions.md for Windows and linux-instructions.md for Linux instructions on building, running and debugging SparkCLR.
## License

SparkCLR is licensed under the MIT license. See the LICENSE file for full license information.
## Contribution

We welcome contributions. To contribute, follow the instructions in CONTRIBUTING.md.