Dmitriy Lyubimov 2013-07-27 23:08:00 -07:00
Parent 6a47cee721
Commit f5067abe85
2 changed files with 9 additions and 1 deletion

View file

@@ -67,7 +67,7 @@ private[spark] class ClusterTaskSetManager(
   val CPUS_PER_TASK = System.getProperty("spark.task.cpus", "1").toDouble
   // Maximum times a task is allowed to fail before failing the job
-  val MAX_TASK_FAILURES = System.getProperty("spark.task.max.fail", "4").toInt
+  val MAX_TASK_FAILURES = System.getProperty("spark.task.maxFailures", "4").toInt
   // Quantile of tasks at which to start speculation
   val SPECULATION_QUANTILE = System.getProperty("spark.speculation.quantile", "0.75").toDouble

View file

@@ -260,6 +260,14 @@ Apart from these, the following properties are also available, and may be useful
   applications). Note that any RDD that persists in memory for more than this duration will be cleared as well.
   </td>
 </tr>
+<tr>
+  <td>spark.task.maxFailures</td>
+  <td>4</td>
+  <td>
+    Number of individual task failures before giving up on the job.
+    Should be greater than or equal to 1. Number of allowed retries = this value - 1.
+  </td>
+</tr>
 </table>
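
For reference, a minimal sketch of how an application of that era could override the renamed property. It assumes the pre-SparkConf, System-property-based configuration that the first hunk reads via System.getProperty, and the root `spark` package implied by `private[spark]`; the master URL, app name, and job below are illustrative only:

import spark.SparkContext

object MaxFailuresExample {
  def main(args: Array[String]) {
    // Hypothetical override: allow each task to fail up to 8 times
    // (7 retries) before the job is aborted. Must be set before the
    // SparkContext is created, because ClusterTaskSetManager reads
    // spark.task.maxFailures via System.getProperty at construction time.
    System.setProperty("spark.task.maxFailures", "8")

    val sc = new SparkContext("spark://master:7077", "MaxFailuresExample")

    // A task that keeps throwing is retried up to
    // (spark.task.maxFailures - 1) times before the whole job fails.
    val total = sc.parallelize(1 to 1000, 10).map(_ * 2).count()
    println("Processed " + total + " elements")

    sc.stop()
  }
}

Setting the property to 1 disables retries entirely, consistent with the documented rule that the number of allowed retries is the configured value minus 1.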