Stop evaluating the model with a 200 cap too

Part of #1525
This commit is contained in:
Marco Castelluccio 2020-05-03 13:23:56 +02:00
Parent 8f677de8fd
Commit 41162a9b96
1 changed file with 1 addition and 1 deletion


@@ -403,7 +403,7 @@ class TestSelectModel(Model):
             f"For confidence threshold {confidence_threshold}, with reduction {reduction_str}, and cap at {cap}: scheduled {average_scheduled} tasks on average (min {min_scheduled}, max {max_scheduled}). In {percentage_caught_one}% of pushes we caught at least one failure ({percentage_caught_one_or_some_didnt_run}% ignoring misses when some of our selected tasks didn't run). On average, we caught {average_caught_percentage}% of all seen failures."
         )
-        for cap in [None, 200, 300, 500]:
+        for cap in [None, 300, 500]:
             for reduction in reductions:
                 for confidence_threshold in [0.3, 0.5, 0.7, 0.8]:
                     do_eval(confidence_threshold, reduction, cap)
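The nested loops call `do_eval` once per `(cap, reduction, confidence_threshold)` combination, so dropping the 200 cap removes a quarter of the evaluation runs. A minimal sketch of the sweep size (the `reductions` values here are placeholders; the real list comes from the model's configuration):

```python
from itertools import product

# Hypothetical values standing in for the model's actual reduction list.
reductions = [None, 0.9]
thresholds = [0.3, 0.5, 0.7, 0.8]

old_caps = [None, 200, 300, 500]  # before this commit
new_caps = [None, 300, 500]       # after: the 200 cap is no longer evaluated

# Each combination corresponds to one do_eval call.
old_runs = len(list(product(old_caps, reductions, thresholds)))
new_runs = len(list(product(new_caps, reductions, thresholds)))

print(old_runs, new_runs)  # 32 24
```

With two reduction settings and four thresholds, the sweep shrinks from 32 to 24 `do_eval` calls.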