---
layout: global
title: Spark Overview
---
{% comment %}
TODO(andyk): Rewrite to make the Java API a first class part of the story.
{% endcomment %}

Spark is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an
interpreter. It provides clean, language-integrated APIs in Scala and Java, with a rich array of parallel operators. Spark can
run on top of the [Apache Mesos](http://incubator.apache.org/mesos/) cluster manager,
[Hadoop YARN](http://hadoop.apache.org/docs/r2.0.1-alpha/hadoop-yarn/hadoop-yarn-site/YARN.html),
Amazon EC2, or without an independent resource manager ("standalone mode").
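
As a quick taste of the Scala API, here is a minimal sketch (the input file and job name below are illustrative, not part of any fixed setup):

    import spark.SparkContext
    import SparkContext._

    // "local" runs Spark with one thread; "Overview" is just a job name.
    val sc = new SparkContext("local", "Overview")

    // Count the lines of a text file that mention "error".
    // README.md is a placeholder; point this at your own input.
    val lines = sc.textFile("README.md")
    val errors = lines.filter(_.contains("error"))
    println("Lines with 'error': " + errors.count())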

# Downloading

Get Spark by checking out the master branch of the Git repository, using `git clone git://github.com/mesos/spark.git`.

# Building

Spark requires [Scala 2.9.2](http://www.scala-lang.org/). You will need to have Scala's `bin` directory in your `PATH`,
or you will need to set the `SCALA_HOME` environment variable to point
to where you've installed Scala. Scala must also be accessible through one
of these methods on the slave nodes of your cluster.
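
For example, on a Unix machine you might add something like the following to your shell profile (the install path here is hypothetical):

    # Example only -- point SCALA_HOME at your own Scala 2.9.2 install.
    export SCALA_HOME=/usr/local/scala-2.9.2
    export PATH=$PATH:$SCALA_HOME/bin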

Spark uses [Simple Build Tool](https://github.com/harrah/xsbt/wiki), which is bundled with it. To compile the code, go into the top-level Spark directory and run

    sbt/sbt package

# Testing the Build
Spark comes with a number of sample programs in the `examples` directory.
To run one of the samples, use `./run <class> <params>` in the top-level Spark directory
(the `run` script sets up the appropriate paths and launches that program).
For example, `./run spark.examples.SparkPi` will run a sample program that estimates Pi. Each of the
examples prints usage help if no params are given.

Note that all of the sample programs take a `<master>` parameter specifying the cluster URL
to connect to. This can be a [URL for a distributed cluster]({{HOME_PATH}}scala-programming-guide.html#master-urls),
or `local` to run locally with one thread, or `local[N]` to run locally with N threads. You should start by using
`local` for testing.
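
For example, the following command runs the Pi example locally with two threads:

    ./run spark.examples.SparkPi local[2]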

Finally, Spark can be used interactively from a modified version of the Scala interpreter that you can start through
`./spark-shell`. This is a great way to learn Spark.
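
A first session might look something like this (the shell pre-defines a Spark context named `sc`; the data here is arbitrary):

    scala> val data = sc.parallelize(1 to 1000)
    scala> data.filter(_ % 2 == 0).count()
    res0: Long = 500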

# A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported
storage systems. Because the HDFS protocol has changed in different versions of
Hadoop, you must build Spark against the same version that your cluster runs.
You can change the version by setting the `HADOOP_VERSION` variable at the top
of `project/SparkBuild.scala`, then rebuilding Spark (`sbt/sbt clean compile`).
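
The relevant line looks something like this (the version string is only an example; set it to match your cluster):

    // Near the top of project/SparkBuild.scala -- example value only.
    val HADOOP_VERSION = "1.0.3"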

# Where to Go from Here

**Programming guides:**

* [Spark Programming Guide]({{HOME_PATH}}scala-programming-guide.html): how to get started using Spark, and details on the Scala API
* [Java Programming Guide]({{HOME_PATH}}java-programming-guide.html): using Spark from Java

**Deployment guides:**

* [Running Spark on Amazon EC2]({{HOME_PATH}}ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
* [Standalone Deploy Mode]({{HOME_PATH}}spark-standalone.html): launch a standalone cluster quickly without Mesos
* [Running Spark on Mesos]({{HOME_PATH}}running-on-mesos.html): deploy a private cluster using
  [Apache Mesos](http://incubator.apache.org/mesos)
* [Running Spark on YARN]({{HOME_PATH}}running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)

**Other documents:**

* [Configuration]({{HOME_PATH}}configuration.html): customize Spark via its configuration system
* [Tuning Guide]({{HOME_PATH}}tuning.html): best practices to optimize performance and memory use
* [API Docs (Scaladoc)]({{HOME_PATH}}api/core/index.html)
* [Bagel]({{HOME_PATH}}bagel-programming-guide.html): an implementation of Google's Pregel on Spark
* [Contributing to Spark]({{HOME_PATH}}contributing-to-spark.html)

**External resources:**

* [Spark Homepage](http://www.spark-project.org)
* [AMP Camp](http://ampcamp.berkeley.edu/): a two-day training camp at UC Berkeley that featured talks and exercises
  about Spark, Shark, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/agenda-2012),
  [slides](http://ampcamp.berkeley.edu/agenda-2012) and [exercises](http://ampcamp.berkeley.edu/exercises-2012) are
  available online for free.
* [Code Examples](http://spark-project.org/examples.html): more are also available in the [examples subfolder](https://github.com/mesos/spark/tree/master/examples/src/main/scala/spark/examples) of Spark
* [Paper describing the Spark system](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
* [Mailing List](http://groups.google.com/group/spark-users)

# Community

To get help using Spark or keep up with Spark development, sign up for the [spark-users mailing list](http://groups.google.com/group/spark-users).

If you're in the San Francisco Bay Area, there's a regular [Spark meetup](http://www.meetup.com/spark-users/) every few weeks. Come by to meet the developers and other users.

Finally, if you'd like to contribute code to Spark, read [how to contribute]({{HOME_PATH}}contributing-to-spark.html).