Matei Zaharia 2012-09-26 23:53:38 -07:00
Parent a4093f7563
Commit bf18e0994e
2 changed files with 3 additions and 3 deletions

View File

@@ -33,7 +33,7 @@ There are a few key differences between the Java and Scala APIs:
 * RDD methods like `collect()` and `countByKey()` return Java collections types,
   such as `java.util.List` and `java.util.Map`.
 * Key-value pairs, which are simply written as `(key, value)` in Scala, are represented
-  by the `scala.Tuple2` class, and need to be created using `new Tuple2<K, V>(key, value)`
+  by the `scala.Tuple2` class, and need to be created using `new Tuple2<K, V>(key, value)`.
 ## RDD Classes
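
The hunk above is from the Java programming guide. As a rough illustration of the `Tuple2` convention it touches, here is a minimal Java sketch of working with key-value pairs; it is not part of this commit, and it assumes the Spark 0.6-era Java API (the `spark.api.java` package and a `JavaSparkContext.parallelizePairs` helper), so treat any class or method name outside the quoted guide text as an assumption.

    // Minimal sketch (not from this patch): key-value pairs in the Java API.
    // Package and method names below assume the Spark 0.6-era Java API layout.
    import java.util.Arrays;
    import java.util.List;

    import scala.Tuple2;
    import spark.api.java.JavaPairRDD;
    import spark.api.java.JavaSparkContext;

    public class PairExample {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local", "PairExample");

        // In Java, a (key, value) pair is written explicitly as a scala.Tuple2.
        List<Tuple2<String, Integer>> pairs = Arrays.asList(
            new Tuple2<String, Integer>("apple", 1),
            new Tuple2<String, Integer>("apple", 1),
            new Tuple2<String, Integer>("pear", 1));

        // parallelizePairs is assumed here as the way to turn the list into a JavaPairRDD.
        JavaPairRDD<String, Integer> rdd = sc.parallelizePairs(pairs);

        // collect() returns a plain java.util.List of Tuple2 objects...
        List<Tuple2<String, Integer>> collected = rdd.collect();
        System.out.println(collected.get(0)._1() + " -> " + collected.get(0)._2());

        // ...and countByKey() returns a java.util.Map, e.g. {apple=2, pear=1}.
        System.out.println(rdd.countByKey());

        sc.stop();
      }
    }

The explicit `new Tuple2<K, V>(key, value)` construction is the Java-side spelling of Scala's `(key, value)`, and both `collect()` and `countByKey()` hand back ordinary `java.util` collections, as the guide states.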

View File

@@ -18,7 +18,7 @@ separate branch of Spark, called `yarn`, which you can do as follows:
 - In order to distribute Spark within the cluster, it must be packaged into a single JAR file. This can be done by running `sbt/sbt assembly`
 - Your application code must be packaged into a separate JAR file.
-If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_2.9.1-0.6.0-SNAPSHOT.jar` file can be generated by running `sbt/sbt package`.
+If you want to test out the YARN deployment mode, you can use the current Spark examples. A `spark-examples_2.9.2-0.6.0-SNAPSHOT.jar` file can be generated by running `sbt/sbt package`.
 # Launching Spark on YARN
@@ -35,7 +35,7 @@ The command to launch the YARN Client is as follows:
 For example:
 SPARK_JAR=./core/target/spark-core-assembly-0.6.0-SNAPSHOT.jar ./run spark.deploy.yarn.Client \
-  --jar examples/target/scala-2.9.1/spark-examples_2.9.1-0.6.0-SNAPSHOT.jar \
+  --jar examples/target/scala-2.9.2/spark-examples_2.9.2-0.6.0-SNAPSHOT.jar \
   --class spark.examples.SparkPi \
   --args standalone \
   --num-workers 3 \