Mirror of https://github.com/microsoft/spark.git
changes per comments.
This commit is contained in:
Parent: 6a47cee721
Commit: f5067abe85
```diff
@@ -67,7 +67,7 @@ private[spark] class ClusterTaskSetManager(
   val CPUS_PER_TASK = System.getProperty("spark.task.cpus", "1").toDouble

   // Maximum times a task is allowed to fail before failing the job
-  val MAX_TASK_FAILURES = System.getProperty("spark.task.max.fail", "4").toInt
+  val MAX_TASK_FAILURES = System.getProperty("spark.task.maxFailures", "4").toInt

   // Quantile of tasks at which to start speculation
   val SPECULATION_QUANTILE = System.getProperty("spark.speculation.quantile", "0.75").toDouble
```
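The renamed property is read through the standard JVM `System.getProperty(key, default)` call, so the hard-coded `"4"` is the fallback used whenever `spark.task.maxFailures` is not set. A minimal Java sketch of the same lookup pattern (the class name `MaxFailuresDemo` is illustrative, not part of the commit; the Scala source is identical except that it uses `.toInt` instead of `Integer.parseInt`):

```java
public class MaxFailuresDemo {
    public static void main(String[] args) {
        // Read the renamed property with a default, as the
        // ClusterTaskSetManager line does: fall back to "4" when unset.
        int maxTaskFailures = Integer.parseInt(
                System.getProperty("spark.task.maxFailures", "4"));

        // Per the doc row added in this commit, the number of allowed
        // retries is this value minus one.
        int allowedRetries = maxTaskFailures - 1;

        System.out.println(maxTaskFailures + " failures allowed, "
                + allowedRetries + " retries");
    }
}
```

Because the old key `spark.task.max.fail` is no longer read, any deployment that set it would silently fall back to the default of 4 after this change.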
```diff
@@ -260,6 +260,14 @@ Apart from these, the following properties are also available, and may be useful
   applications). Note that any RDD that persists in memory for more than this duration will be cleared as well.
 </td>
 </tr>
+<tr>
+  <td>spark.task.maxFailures</td>
+  <td>4</td>
+  <td>
+    Number of individual task failures before giving up on the job.
+    Should be greater than or equal to 1. Number of allowed retries = this value - 1.
+  </td>
+</tr>

 </table>
```