Benchmarks: Doc Revision - Add ReduceType into benchmarks doc. (#150)

Add ReduceType description into benchmarks doc.
This commit is contained in:
guoshzhao 2021-08-09 16:52:09 +08:00 committed by GitHub
Parent acf365a856
Commit d23ad898b0
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
1 changed file with 16 additions and 2 deletions


@@ -32,7 +32,7 @@ The structure of `benchmarks` package can be divided into layers from the bottom
2. Derived classes for all implemented benchmarks, which need to realize all the abstract interfaces. The benchmarks will be registered into `BenchmarkRegistry`.
3. `BenchmarkRegistry` provides a way of benchmark registration, maintains all the registered benchmarks, and supports benchmark launching by `BenchmarkContext`.
4. `BenchmarkContext` provides the context to launch one benchmark, including name, parameters, platform(CPU, GPU, etc.), and framework(Pytorch, TF, ONNX, etc.).
-5. `BenchmarkResult` defines the structured results for each benchmark in json format, including name, return_code, start_time, end_time, raw_data, summarized metrics, etc.
+5. `BenchmarkResult` defines the structured results for each benchmark in json format, including name, return_code, start_time, end_time, raw_data, summarized metrics, reduce type, etc.
The `Executor` on the uppermost layer is the entrance for all the benchmarks. It launches the benchmark via `BenchmarkRegistry` and fetches the `BenchmarkResult`.
@@ -181,7 +181,7 @@ This chapter will describe the interfaces with the caller (Superbench executor),
The input needed by the `benchmarks` package is simple: just the context object of the benchmark you want to run:
-### Invoke
+#### Invoke
```py
context = BenchmarkRegistry.create_benchmark_context(
@@ -219,6 +219,11 @@ result = {
...
'metricsM': List[Number],
},
+    'reduce': {
+        'metrics1': ReduceType,
+        ...
+        'metricsM': ReduceType,
+    },
}
```
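The added `reduce` section maps each metric name to the reduce type used to collapse its list of raw values. As a minimal sketch of how a consumer of `BenchmarkResult` might apply it (the reduce-type names and helper below are illustrative assumptions, not the actual superbench API):

```python
# Illustrative sketch only: the reduce-type names and the apply_reduce()
# helper are assumptions based on this doc, not the superbench implementation.
from statistics import mean

# Map reduce-type names to aggregation functions; None means "do not reduce".
REDUCE_FUNCS = {'min': min, 'max': max, 'avg': mean, 'sum': sum}

def apply_reduce(result):
    """Collapse each metric's list of raw values according to its reduce type."""
    reduced = {}
    for metric, values in result['result'].items():
        reduce_type = result.get('reduce', {}).get(metric)
        if reduce_type is None:
            reduced[metric] = values  # no reduction, keep the raw list
        else:
            reduced[metric] = REDUCE_FUNCS[reduce_type](values)
    return reduced

result = {
    'result': {'throughput-train-float32': [98.0, 102.0, 100.0]},
    'reduce': {'throughput-train-float32': 'min'},
}
print(apply_reduce(result))  # {'throughput-train-float32': 98.0}
```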
@@ -244,6 +249,12 @@ result = {
'throughput-inference-float32': [avg_throughput1, ..., avg_throughputN],
'throughput-inference-float16': [avg_throughput1, ..., avg_throughputN],
},
+    'reduce': {
+        'throughput-train-float32': 'min',
+        'throughput-train-float16': 'min',
+        'throughput-inference-float32': None,
+        'throughput-inference-float16': None,
+    },
}
```
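Using `'min'` for training throughput makes sense in a distributed setting: synchronous training steps are gated by the slowest rank, so the minimum per-rank throughput reflects effective end-to-end speed. A small sketch with assumed per-rank numbers:

```python
# Sketch with assumed data: each rank reports its own average throughput;
# the minimum reflects the slowest rank, which bounds synchronous training.
per_rank_throughput = {
    'rank0': 101.2,
    'rank1': 99.8,   # slowest rank gates every synchronous step
    'rank2': 100.5,
}
effective_throughput = min(per_rank_throughput.values())
print(effective_throughput)  # 99.8
```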
@@ -261,5 +272,8 @@ result = {
'result': { # Key is metrics
'overhead': [overhead1, ..., overheadN],
},
+    'reduce': {
+        'overhead': None,
+    },
}
```