# Apache Spark

Lightning-Fast Cluster Computing - <http://spark.incubator.apache.org/>

## Online Documentation

You can find the latest Spark documentation, including a programming guide, on the project webpage at http://spark.incubator.apache.org/documentation.html. This README file only contains basic setup instructions.

## Building

Spark requires Scala 2.9.3 (Scala 2.10 is not yet supported). The project is built using Simple Build Tool (SBT), which is packaged with it. To build Spark and its example programs, run:

    sbt/sbt assembly

Once you've built Spark, the easiest way to start using it is the shell:

    ./spark-shell

Or, for the Python API, the Python shell (`./pyspark`).
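
As a quick smoke test once the shell is up, you can run a small job against a local file. A minimal sketch, assuming you launched `./spark-shell` from the top-level Spark directory (so `README.md` resolves) and using the `sc` context the shell creates for you:

    scala> val lines = sc.textFile("README.md")
    scala> lines.filter(_.contains("Spark")).count()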

Spark also comes with several sample programs in the `examples` directory. To run one of them, use `./run-example <class> <params>`. For example:

    ./run-example spark.examples.SparkLR local[2]

will run the Logistic Regression example locally on 2 CPUs.

Each of the example programs prints usage help if no params are given.

All of the Spark samples take a `<master>` parameter that is the cluster URL to connect to. This can be a `mesos://` or `spark://` URL, or `"local"` to run locally with one thread, or `"local[N]"` to run locally with N threads.
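
The same URL is what a standalone driver program passes when it constructs its `SparkContext`. A minimal sketch, assuming the pre-rename `spark` package used by the example class names above (newer releases use `org.apache.spark`) and an arbitrary placeholder application name:

    import spark.SparkContext

    // "local[4]" runs locally with four threads; a spark:// or mesos://
    // cluster URL would connect to a running cluster instead.
    val sc = new SparkContext("local[4]", "MasterUrlExample")
    println(sc.textFile("README.md").count())
    sc.stop()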

## A Note About Hadoop Versions

Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs. You can change the version by setting the `SPARK_HADOOP_VERSION` environment variable when building Spark.

For Apache Hadoop versions 1.x, Cloudera CDH MRv1, and other Hadoop versions without YARN, use:

    # Apache Hadoop 1.2.1
    $ SPARK_HADOOP_VERSION=1.2.1 sbt/sbt assembly

    # Cloudera CDH 4.2.0 with MapReduce v1
    $ SPARK_HADOOP_VERSION=2.0.0-mr1-cdh4.2.0 sbt/sbt assembly

For Apache Hadoop 2.x, 0.23.x, Cloudera CDH MRv2, and other Hadoop versions with YARN, also set `SPARK_YARN=true`:

    # Apache Hadoop 2.0.5-alpha
    $ SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly

    # Cloudera CDH 4.2.0 with MapReduce v2
    $ SPARK_HADOOP_VERSION=2.0.0-cdh4.2.0 SPARK_YARN=true sbt/sbt assembly

For convenience, these variables may also be set through the `conf/spark-env.sh` file described below.

When developing a Spark application, specify the Hadoop version by adding the `hadoop-client` artifact to your project's dependencies. For example, if you're using Hadoop 1.2.1 and build your application using SBT, add this entry to `libraryDependencies`:


"org.apache.hadoop" % "hadoop-client" % "1.2.1"

If your project is built with Maven, add this to your POM file's `<dependencies>` section:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>1.2.1</version>
    </dependency>

## Configuration

Please refer to the Configuration guide in the online documentation for an overview of how to configure Spark.

## Contributing to Spark

Contributions via GitHub pull requests are gladly accepted from their original author. Along with any pull requests, please state that the contribution is your original work and that you license the work to the project under the project's open source license. Whether or not you state this explicitly, by submitting any copyrighted material via pull request, email, or other means you agree to license the material under the project's open source license and warrant that you have the legal authority to do so.