Co-authored-by: Adam Sitnik <adam.sitnik@gmail.com>
This commit is contained in:
Dan Moseley 2022-02-09 05:27:41 -07:00 committed by GitHub
Parent a74ff5e229
Commit ef3edbc52d
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
10 changed files with 89 additions and 93 deletions

View file

@@ -34,10 +34,10 @@ For **Self-Contained Empty Console App Size On Disk** scenario, run precommand t
```cmd
cd emptyconsoletemplate
python3 pre.py publish -f net6.0 -c Release -r win-x64
python3 pre.py publish -f net7.0 -c Release -r win-x64
```
`-f net6.0` sets the new template project targeting `net6.0` framework; `-c Release` configures the publish to be in release; `-r win-x64` takes an [RID](https://docs.microsoft.com/en-us/dotnet/core/rid-catalog)(Runtime Identifier) and specifies which runtime it supports.
`-f net7.0` sets the new template project to target the `net7.0` framework; `-c Release` publishes it in the Release configuration; `-r win-x64` takes an [RID](https://docs.microsoft.com/en-us/dotnet/core/rid-catalog) (Runtime Identifier) and specifies which runtime it supports.
**Note that specifying the RID option `-r <RID>` makes the publish default to an [SCD](https://docs.microsoft.com/en-us/dotnet/core/deploying/#publish-self-contained) (Self-contained Deployment) app; without it, an [FDD](https://docs.microsoft.com/en-us/dotnet/core/deploying/#publish-framework-dependent) (Framework-dependent Deployment) app is published.**
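For illustration, the two deployment modes for the same template would be published as follows; this is a minimal sketch that reuses the command above, with `win-x64` chosen purely as an example RID:
```cmd
:: FDD (framework-dependent) - no RID specified
python3 pre.py publish -f net7.0 -c Release

:: SCD (self-contained) - RID specified
python3 pre.py publish -f net7.0 -c Release -r win-x64
```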
@@ -81,31 +81,30 @@ Same instruction of [Scenario Tests Guide - Step 4](./scenarios-workflow.md#step
## Command Matrix
- \<tfm> values:
- netcoreapp2.1
- netcoreapp3.1
- net5.0
- net6.0
- net7.0
- \<-r RID> values:
- ""(WITHOUT `-r <RID>` --> FDD app)
- `"-r <RID>"` (WITH `-r` --> SCD app, [list of RID](https://docs.microsoft.com/en-us/dotnet/core/rid-catalog))
| Scenario | Asset Directory | Precommand | Testcommand | Postcommand | Supported Framework | Supported Platform |
|-----------------------------------------------|-------------------------|-----------------------------------------------|-----------------|-------------|--------------------------------------------------|--------------------|
| Static Console Template Publish Startup | staticconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows |
| Static Console Template Publish SizeOnDisk | staticconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| Static Console Template Build SizeOnDisk | staticconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| Static VB Console Template Publish Startup | staticvbconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows |
| Static VB Console Template Publish SizeOnDisk | staticvbconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| Static VB Console Template Build SizeOnDisk | staticvbconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| Static Winforms Template Publish Startup | staticwinformstemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp2.1;netcoreapp3.1 | Windows |
| Static Winforms Template Publish SizeOnDisk | staticwinformstemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1 | Windows;Linux |
| Static Winforms Template Build SizeOnDisk | staticwinformstemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1 | Windows;Linux |
| New Console Template Publish Startup | emptyconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows |
| New Console Template Publish SizeOnDisk | emptyconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| New Console Template Build SizeOnDisk | emptyconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| New VB Console Template Publish Startup | emptyvbconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows |
| New VB Console Template Publish SizeOnDisk | emptyvbconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| New VB Console Template Build SizeOnDisk | emptyvbconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp2.1;netcoreapp3.1;net5.0;net6.0 | Windows;Linux |
| Static Console Template Publish Startup | staticconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp3.1;net6.0;net7.0 | Windows |
| Static Console Template Publish SizeOnDisk | staticconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| Static Console Template Build SizeOnDisk | staticconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| Static VB Console Template Publish Startup | staticvbconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp3.1;net6.0;net7.0 | Windows |
| Static VB Console Template Publish SizeOnDisk | staticvbconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| Static VB Console Template Build SizeOnDisk | staticvbconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| Static Winforms Template Publish Startup | staticwinformstemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp3.1 | Windows |
| Static Winforms Template Publish SizeOnDisk | staticwinformstemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp3.1 | Windows;Linux |
| Static Winforms Template Build SizeOnDisk | staticwinformstemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp3.1 | Windows;Linux |
| New Console Template Publish Startup | emptyconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp3.1;net6.0;net7.0 | Windows |
| New Console Template Publish SizeOnDisk | emptyconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| New Console Template Build SizeOnDisk | emptyconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| New VB Console Template Publish Startup | emptyvbconsoletemplate | pre.py publish -f TFM -c Release | test.py startup | post.py | netcoreapp3.1;net6.0;net7.0 | Windows |
| New VB Console Template Publish SizeOnDisk | emptyvbconsoletemplate | pre.py publish -f TFM -c Release /<-r RID> | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
| New VB Console Template Build SizeOnDisk | emptyvbconsoletemplate | pre.py build -f TFM -c Release | test.py sod | post.py | netcoreapp3.1;net6.0;net7.0 | Windows;Linux |
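For example, the "New Console Template Publish SizeOnDisk" row of the matrix above expands to roughly the following sequence (win-x64 is only an illustrative RID, and the `python3` invocation mirrors the earlier precommand example):
```cmd
cd emptyconsoletemplate
python3 pre.py publish -f net7.0 -c Release -r win-x64
python3 test.py sod
python3 post.py
```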
## Relevant Links

View file

@@ -59,7 +59,7 @@ In order to build or run the benchmarks you will need the **.NET Core command-li
### Using .NET Cli
To build the benchmarks you need to have the right `dotnet cli`. This repository allows to benchmark .NET Core 2.1, 3.1, 5.0 and 6.0 so you need to install all of them.
To build the benchmarks you need to have the right `dotnet cli`. This repository allows you to benchmark .NET Core 3.1, .NET 6.0 and .NET 7.0, so you need to install all of them.
All you need to do is run the following command:
@@ -70,8 +70,8 @@ dotnet build -c Release
If you don't want to install all of them and just run the benchmarks for selected runtime(s), you need to manually edit the [MicroBenchmarks.csproj](../src/benchmarks/micro/MicroBenchmarks.csproj) file.
```diff
-<TargetFrameworks>netcoreapp3.1;net5.0;net6.0</TargetFrameworks>
+<TargetFrameworks>net6.0</TargetFrameworks>
-<TargetFrameworks>netcoreapp3.1;net6.0;net7.0</TargetFrameworks>
+<TargetFrameworks>net7.0</TargetFrameworks>
```
The alternative is to set the `PERFLAB_TARGET_FRAMEWORKS` environment variable to the selected Target Framework Moniker.
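For example, a sketch of a Windows `cmd` session (use `export` instead of `set` in a Unix shell; this assumes the variable accepts a single TFM):
```cmd
:: Build only the net7.0 target without editing the project file
set PERFLAB_TARGET_FRAMEWORKS=net7.0
dotnet build -c Release
```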
@@ -81,7 +81,7 @@ The alternative is to set `PERFLAB_TARGET_FRAMEWORKS` environment variable to se
If you don't want to install `dotnet cli` manually, we have a Python 3 script that can do that for you. All you need to do is provide the frameworks:
```cmd
py .\scripts\benchmarks_ci.py --frameworks net6.0
py .\scripts\benchmarks_ci.py --frameworks net7.0
```
## Running the Benchmarks
@@ -91,7 +91,7 @@ py .\scripts\benchmarks_ci.py --frameworks net6.0
To run the benchmarks in interactive mode you have to execute `dotnet run -c Release -f $targetFrameworkMoniker` in the folder with benchmarks project.
```cmd
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net6.0
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net7.0
Available Benchmarks:
#0 Burgers
#1 ByteMark
@@ -122,37 +122,37 @@ The glob patterns are applied to full benchmark name: namespace.typeName.methodN
- Run all the benchmarks from BenchmarksGame namespace:
```cmd
dotnet run -c Release -f net6.0 --filter BenchmarksGame*
dotnet run -c Release -f net7.0 --filter BenchmarksGame*
```
- Run all the benchmarks with type name Richards:
```cmd
dotnet run -c Release -f net6.0 --filter *.Richards.*
dotnet run -c Release -f net7.0 --filter *.Richards.*
```
- Run all the benchmarks with method name ToStream:
```cmd
dotnet run -c Release -f net6.0 --filter *.ToStream
dotnet run -c Release -f net7.0 --filter *.ToStream
```
- Run ALL benchmarks:
```cmd
dotnet run -c Release -f net6.0 --filter *
dotnet run -c Release -f net7.0 --filter *
```
- You can provide many filters (logical disjunction):
```cmd
dotnet run -c Release -f net6.0 --filter System.Collections*.Dictionary* *.Perf_Dictionary.*
dotnet run -c Release -f net7.0 --filter System.Collections*.Dictionary* *.Perf_Dictionary.*
```
- To print a **joined summary** for all of the benchmarks (by default printed per type), use `--join`:
```cmd
dotnet run -c Release -f net6.0 --filter BenchmarksGame* --join
dotnet run -c Release -f net7.0 --filter BenchmarksGame* --join
```
Please remember that on **Unix** systems `*` is resolved to all files in the current directory, so you need to escape it: `'*'`.
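For instance, the "run ALL benchmarks" example above would be written like this on Linux/macOS:
```cmd
dotnet run -c Release -f net7.0 --filter '*'
```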
@@ -161,10 +161,10 @@ Please remember that on **Unix** systems `*` is resolved to all files in current
To print the list of all available benchmarks you need to pass `--list [tree/flat]` argument. It can also be combined with `--filter` option.
Example: Show the tree of all the benchmarks from System.Threading namespace that can be run for .NET 6.0:
Example: Show the tree of all the benchmarks from System.Threading namespace that can be run for .NET 7.0:
```cmd
dotnet run -c Release -f net6.0 --list tree --filter System.Threading*
dotnet run -c Release -f net7.0 --list tree --filter System.Threading*
```
```log
@@ -259,7 +259,7 @@ If you want to disassemble the benchmarked code, you need to use the [Disassembl
You can do that by passing `--disasm` to the app, by using the `[DisassemblyDiagnoser(printAsm: true, printSource: true)]` attribute, or by adding it to your config with `config.With(DisassemblyDiagnoser.Create(new DisassemblyDiagnoserConfig(printAsm: true, recursiveDepth: 1)))`.
Example: `dotnet run -c Release -f net6.0 -- --filter System.Memory.Span<Int32>.Reverse -d`
Example: `dotnet run -c Release -f net7.0 -- --filter System.Memory.Span<Int32>.Reverse -d`
```assembly
; System.Runtime.InteropServices.MemoryMarshal.GetReference[[System.Byte, System.Private.CoreLib]](System.Span`1<Byte>)
@@ -285,30 +285,30 @@ M00_L00:
The `--runtimes` option (or just `-r`) allows you to run the benchmarks for **multiple Runtimes**.
Available options are: Mono, CoreRT, net461, net462, net47, net471, net472, netcoreapp3.1, net5.0 and net6.0.
Available options are: Mono, CoreRT, net461, net462, net47, net471, net472, netcoreapp3.1, net6.0 and net7.0.
Example: run the benchmarks for .NET 5.0 and 6.0:
Example: run the benchmarks for .NET 6.0 and 7.0:
```cmd
dotnet run -c Release -f net5.0 --runtimes net5.0 net6.0
dotnet run -c Release -f net6.0 --runtimes net6.0 net7.0
```
**Important: The host process needs to be the lowest common API denominator of the runtimes you want to compare!** In this case, it was `net5.0`.
**Important: The host process needs to be the lowest common API denominator of the runtimes you want to compare!** In this case, it was `net6.0`.
## Regressions
To perform a Mann–Whitney U Test and display the results in a dedicated column, you need to provide the Threshold for the Statistical Test via the `--statisticalTest` argument. The value can be relative (5%) or absolute (10ms, 100ns, 1s).
Example: run Mann–Whitney U test with relative ratio of 5% for `BinaryTrees_2` for .NET 5.0 (base) vs .NET 6.0 (diff). .NET Core 5.0 will be baseline because it was first.
Example: run a Mann–Whitney U test with a relative ratio of 5% for `BinaryTrees_2` for .NET 6.0 (base) vs .NET 7.0 (diff). .NET 6.0 will be the baseline because it was specified first.
```cmd
dotnet run -c Release -f net6.0 --filter *BinaryTrees_2* --runtimes net5.0 net6.0 --statisticalTest 5%
dotnet run -c Release -f net7.0 --filter *BinaryTrees_2* --runtimes net6.0 net7.0 --statisticalTest 5%
```
| Method | Toolchain | Mean | MannWhitney(5%) |
|-------------- |-------------- |---------:|---------------- |
| BinaryTrees_2 | net5.0 | 124.4 ms | Base |
| BinaryTrees_2 | net6.0 | 153.7 ms | Slower |
| BinaryTrees_2 | net6.0 | 124.4 ms | Base |
| BinaryTrees_2 | net7.0 | 153.7 ms | Slower |
**Note:** to compare the historical results, you need to use [Results Comparer](../src/tools/ResultsComparer/README.md).
@@ -329,24 +329,21 @@ Please use this option only when you are sure that the benchmarks you want to ru
It's possible to benchmark private builds of [dotnet/runtime](https://github.com/dotnet/runtime) using CoreRun.
```cmd
dotnet run -c Release -f net6.0 --coreRun $thePath
dotnet run -c Release -f net7.0 --coreRun $thePath
```
**Note:** You can provide more than one path to CoreRun. In that case, the first path will be the baseline and all the benchmarks are going to be executed for every CoreRun you have specified.
**Note:** If `CoreRunToolchain` detects that the CoreRun folder contains older versions of the dependencies required to run the benchmarks, it's going to overwrite them with newer versions from the published app. It's going to do that in a shadow copy of the folder with CoreRun, so your configuration remains untouched.
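As a sketch of the multi-CoreRun note above (the folder names are placeholders), the invocation could look like:
```cmd
dotnet run -c Release -f net7.0 --filter $YourFilter \
    --coreRun "C:\runs\baseline\CoreRun.exe" "C:\runs\experiment\CoreRun.exe"
```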
If you are not sure which assemblies gets loaded and used you can use the following code to find out:
If you are not sure which assemblies are loaded and used, you can use the following code to find out:
```cs
[GlobalSetup]
public void PrintInfo()
{
var coreFxAssemblyInfo = FileVersionInfo.GetVersionInfo(typeof(Regex).GetTypeInfo().Assembly.Location);
var coreClrAssemblyInfo = FileVersionInfo.GetVersionInfo(typeof(object).GetTypeInfo().Assembly.Location);
Console.WriteLine($"// CoreFx version: {coreFxAssemblyInfo.FileVersion}, location {typeof(Regex).GetTypeInfo().Assembly.Location}, product version {coreFxAssemblyInfo.ProductVersion}");
Console.WriteLine($"// CoreClr version {coreClrAssemblyInfo.FileVersion}, location {typeof(object).GetTypeInfo().Assembly.Location}, product version {coreClrAssemblyInfo.ProductVersion}");
var systemPrivateCoreLib = FileVersionInfo.GetVersionInfo(typeof(object).Assembly.Location);
Console.WriteLine($"// System.Private.CoreLib version {systemPrivateCoreLib.FileVersion}, location {typeof(object).Assembly.Location}, product version {systemPrivateCoreLib.ProductVersion}");
}
```
@@ -355,10 +352,10 @@ public void PrintInfo()
You can also use any dotnet cli to build and run the benchmarks.
```cmd
dotnet run -c Release -f net6.0 --cli "C:\Projects\performance\.dotnet\dotnet.exe"
dotnet run -c Release -f net7.0 --cli "C:\Projects\performance\.dotnet\dotnet.exe"
```
This is very useful when you want to compare different builds of .NET Core SDK.
This is very useful when you want to compare different builds of .NET.
### Private CLR Build
@@ -367,7 +364,7 @@ It's possible to benchmark a private build of .NET Runtime. You just need to pas
So if you made a change in CLR and want to measure the difference, you can run the benchmarks with:
```cmd
dotnet run -c Release -f net472 -- --clrVersion $theVersion
dotnet run -c Release -f net48 -- --clrVersion $theVersion
```
More info can be found [here](https://github.com/dotnet/BenchmarkDotNet/issues/706).
@@ -377,5 +374,5 @@ More info can be found [here](https://github.com/dotnet/BenchmarkDotNet/issues/7
To run benchmarks with private CoreRT build you need to provide the `IlcPath`. Example:
```cmd
dotnet run -c Release -f net6.0 -- --ilcPath C:\Projects\corert\bin\Windows_NT.x64.Release
dotnet run -c Release -f net7.0 -- --ilcPath C:\Projects\corert\bin\Windows_NT.x64.Release
```

View file

@@ -99,7 +99,7 @@ During the port from xunit-performance to BenchmarkDotNet, the namespaces, type
Please remember that you can filter the benchmarks using a glob pattern applied to namespace.typeName.methodName ([read more](./benchmarkdotnet.md#Filtering-the-Benchmarks)):
```cmd
dotnet run -c Release -f net6.0 --filter System.Memory*
dotnet run -c Release -f net7.0 --filter System.Memory*
```
(Run the above command on `src/benchmarks/micro/MicroBenchmarks.csproj`.)
@@ -119,8 +119,8 @@ C:\Projects\runtime> build -c Release
Every time you want to run the benchmarks against a local build of [dotnet/runtime](https://github.com/dotnet/runtime) you need to provide the path to CoreRun:
```cmd
dotnet run -c Release -f net6.0 --filter $someFilter \
--coreRun C:\Projects\runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe
dotnet run -c Release -f net7.0 --filter $someFilter \
--coreRun C:\Projects\runtime\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe
```
**Note:** BenchmarkDotNet expects a path to `CoreRun.exe` file (`corerun` on Unix), not to `Core_Root` folder.
@@ -134,7 +134,7 @@ C:\Projects\runtime\src\libraries\System.Text.RegularExpressions\src> dotnet msb
**Note:** the exception to this rule is libraries that **are not part of the shared SDK**. The `build` script of the runtime repo does not copy them to the CoreRun folder, so you need to do it on your own:
```cmd
cp artifacts\bin\runtime\net6.0-Windows_NT-Release-x64\Microsoft.Extensions.Caching.Memory.dll artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\
cp artifacts\bin\runtime\net7.0-Windows_NT-Release-x64\Microsoft.Extensions.Caching.Memory.dll artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\
```
Of course, this is needed only if you want to benchmark these specific libraries. If you don't, the default versions defined in the [MicroBenchmarks.csproj](../src/benchmarks/micro/MicroBenchmarks.csproj) project file are going to get used.
@@ -146,9 +146,9 @@ Preventing regressions is a fundamental part of our performance culture. The che
**Before introducing any changes that may impact performance**, you should run the benchmarks that test the performance of the feature that you are going to work on and store the results in a **dedicated** folder.
```cmd
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net6.0 \
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net7.0 \
--artifacts "C:\results\before" \
--coreRun "C:\Projects\runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe" \
--coreRun "C:\Projects\runtime\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe" \
--filter System.IO.Pipes*
```
@@ -161,9 +161,9 @@ After you introduce the changes and rebuild the part of [dotnet/runtime](https:/
```cmd
C:\Projects\runtime\src\libraries\System.IO.Pipes\src> dotnet msbuild /p:Configuration=Release
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net6.0 \
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net7.0 \
--artifacts "C:\results\after" \
--coreRun "C:\Projects\runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe" \
--coreRun "C:\Projects\runtime\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe" \
--filter System.IO.Pipes*
```
@@ -188,7 +188,7 @@ No Slower results for the provided threshold = 2% and noise filter = 0.3ns.
To run the benchmarks against the latest .NET Core SDK you can use the [benchmarks_ci.py](../scripts/benchmarks_ci.py) script. It's going to download the latest .NET Core SDK(s) for the provided framework(s) and run the benchmarks for you. Please see [Prerequisites](./prerequisites.md#python) for more.
```cmd
C:\Projects\performance> py scripts\benchmarks_ci.py -f net6.0 \
C:\Projects\performance> py scripts\benchmarks_ci.py -f net7.0 \
--bdn-arguments="--artifacts "C:\results\latest_sdk"" \
--filter System.IO.Pipes*
```
@@ -208,7 +208,7 @@ The real performance investigation starts with profiling. We have a comprehensiv
To profile the benchmarked code and produce an ETW Trace file ([read more](./benchmarkdotnet.md#Profiling)):
```cmd
dotnet run -c Release -f net6.0 --profiler ETW --filter $YourFilter
dotnet run -c Release -f net7.0 --profiler ETW --filter $YourFilter
```
The benchmarking tool is going to print the path to the `.etl` trace file. You should open it with PerfView or Windows Performance Analyzer and start the analysis from there. If you are not familiar with PerfView, you should watch [PerfView Tutorial](https://channel9.msdn.com/Series/PerfView-Tutorial) by @vancem first. It's an investment that is going to pay off very quickly.
@@ -225,7 +225,7 @@ If profiling using the `--profiler ETW` is not enough, you should use a differen
BenchmarkDotNet has some extra features that might be useful when doing performance investigation:
- You can run the benchmarks against [multiple Runtimes](./benchmarkdotnet.md#Multiple-Runtimes). It can be very useful when the regression has been introduced between .NET Core releases, for example: between net5.0 and net6.0.
- You can run the benchmarks against [multiple Runtimes](./benchmarkdotnet.md#Multiple-Runtimes). It can be very useful when a regression has been introduced between .NET releases, for example between net6.0 and net7.0.
- You can run the benchmarks using a provided [dotnet cli](./benchmarkdotnet.md#dotnet-cli). You can download a few dotnet SDKs, unzip them, and just run the benchmarks against each one to spot the version that introduced the regression and narrow down your investigation (see the sketch after this list).
- You can run the benchmarks using a few [CoreRuns](./benchmarkdotnet.md#CoreRun). You can build the latest [dotnet/runtime](https://github.com/dotnet/runtime) in Release, create a copy of the folder with CoreRun, and use git to check out an older commit. Then rebuild [dotnet/runtime](https://github.com/dotnet/runtime) and run the benchmarks against the old and new builds. This can narrow down your investigation to the commit that introduced the bug.
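A hypothetical sketch of that `--cli` bisection (the SDK paths are made up purely for illustration):
```cmd
dotnet run -c Release -f net7.0 --filter $YourFilter --cli "C:\sdks\build-A\dotnet.exe"
dotnet run -c Release -f net7.0 --filter $YourFilter --cli "C:\sdks\build-B\dotnet.exe"
```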
@@ -276,7 +276,7 @@ Because the benchmarks are not in the [dotnet/runtime](https://github.com/dotnet
The first thing you need to do is send a PR with the new API to the [dotnet/runtime](https://github.com/dotnet/runtime) repository. Once your PR gets merged and a new NuGet package is published to the [dotnet/runtime](https://github.com/dotnet/runtime) NuGet feed, you should remove the Reference to a `.dll` and install/update the package consumed by [MicroBenchmarks](../src/benchmarks/micro/MicroBenchmarks.csproj). You can do this by running the following script locally:
```cmd
/home/adsitnik/projects/performance>python3 ./scripts/benchmarks_ci.py --filter $YourFilter -f net6.0
/home/adsitnik/projects/performance>python3 ./scripts/benchmarks_ci.py --filter $YourFilter -f net7.0
```
This script will try to pull the latest .NET Core SDK from the [dotnet/runtime](https://github.com/dotnet/runtime) nightly build, which should contain the new API that you just merged in your first PR, and use that to build the MicroBenchmarks project and then run the benchmarks that satisfy the filter you provided.

View file

@@ -26,7 +26,7 @@ and measures the sizes of the `pub\` directory and its children with [SizeOnDisk
### Prerequisites
- Python 3 or newer
- dotnet runtime 5.0 or newer
- dotnet runtime 6.0 or newer
### Step 1 Initialize Environment
@@ -84,7 +84,7 @@ Same instruction of [Step 4 in Scenario Tests Guide](scenarios-workflow.md#step-
For the purpose of quick reference, the commands can be summarized into the following matrix:
| Scenario | Asset Directory | Precommand | Testcommand | Postcommand | Supported Framework | Supported Platform |
|-------------------------------------|-----------------|-------------------------------------------------------------|-------------------------------------------------------------------|-------------|---------------------|--------------------|
| SOD - New Blazor Template - Publish | blazor | pre.py publish --msbuild "/p:_TrimmerDumpDependencies=true" | test.py sod --scenario-name "SOD - New Blazor Template - Publish" | post.py | net6.0 | Windows;Linux |
| SOD - New Blazor Template - Publish | blazor | pre.py publish --msbuild "/p:_TrimmerDumpDependencies=true" | test.py sod --scenario-name "SOD - New Blazor Template - Publish" | post.py | net7.0 | Windows;Linux |
## Relevant Links

View file

@@ -71,7 +71,7 @@ C:\Projects\runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\dotnet.exe
* `CoreRun` and all `System.XYZ.dlls` that can be used to run the code that you want to profile. Example:
```log
C:\Projects\runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe
C:\Projects\runtime\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe
```
* But the dotnet/runtime build only produces the artifacts necessary for a _runtime_, not for an _sdk_. Visual Studio will require a full SDK to be able to compile your console app from the next step. One way to convert your generated _runtime_ into a full _sdk_ is to navigate to the `runtime\.dotnet\` folder, copy the `packs` and `sdk` folders located inside, and then paste them inside `runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\`.
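A minimal sketch of that copy step, assuming it is run from the root of the runtime repo on Windows with `xcopy` available:
```cmd
xcopy /E /I .dotnet\packs artifacts\bin\testhost\net6.0-windows-Release-x64\packs
xcopy /E /I .dotnet\sdk artifacts\bin\testhost\net6.0-windows-Release-x64\sdk
```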

View file

@@ -103,7 +103,7 @@ Most (if not all) of these fields can be retrieved from your machine's _Task Man
You should have run `py . setup` already.
You can write a *benchfile* to specify tests to be run. You then run these to create *tracefiles*.
You can write a _benchfile_ to specify tests to be run. You then run these to create _tracefiles_.
You then analyze the trace files to produce a result.
The benchfiles can exist anywhere. This example will use the local directory `bench`, which is in `.gitignore`, so you can use it for scratch.
@@ -158,7 +158,7 @@ This is explained in the following sections.
### Running the Entire Suite
To run all the tests at once, you ask the infra to perform a *suite-run*. This
To run all the tests at once, you ask the infra to perform a _suite-run_. This
functionality also allows you to run as many scenarios/tests as you'd like in a
bundle, which you specify in the suite yaml file (more information on this later).
@@ -180,16 +180,16 @@ bench_files:
command_groups: {}
```
GC Benchmarking Infra will read one by one each of the specified files under *bench_files*,
GC Benchmarking Infra will read one by one each of the specified files under _bench_files_,
and run their specified executables accordingly. If any test fails, Infra will
proceed to run the next one and will display a summary of the encountered problems
at the end of the run.
The *command_groups* tag is used to store sets of other commands you might want to run in bulk,
The _command_groups_ tag is used to store sets of other commands you might want to run in bulk,
rather than individually. For simplicity, it is left empty in this example.
When *GCPerfSim* is modified, it is important to run the full suite of default
scenarios with both, the original and the modified versions of *GCPerfSim*. You
When _GCPerfSim_ is modified, it is important to run the full suite of default
scenarios with both the original and the modified versions of _GCPerfSim_. You
only need to make sure to keep a copy before rebuilding it, and then specify
both dlls under the `test_executables` group in the `yaml` file. This is to
ensure no regressions have occurred and the tool continues to work properly.
@@ -198,7 +198,7 @@ For full information regarding suites, check the full documentation [here](docs/
### Running a Single Scenario
Let's run *low_memory_container* for this example.
Let's run _low_memory_container_ for this example.
```sh
py . run bench/suite/low_memory_container.yaml
@@ -227,7 +227,7 @@ To fix either of these, specify `dotnet_path` and `dotnet_trace_path` in `option
Note that if you recently built coreclr, that probably left a `dotnet` process open that `run` will ask you to kill. Just do so and run again with `--overwrite`.
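In other words, the re-run would look something like this (assuming the flag is passed directly to the `run` command):
```cmd
py . run bench/suite/low_memory_container.yaml --overwrite
```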
This simple scenario should take under 2 minutes. Other ones require more time.
We aim for an individual test to take about 20 seconds and this does 2 iterations for each of the 2 *coreclrs*.
We aim for an individual test to take about 20 seconds and this does 2 iterations for each of the 2 _coreclrs_.
Running this produced a directory called `bench/suite/low_memory_container.yaml.out`.
This contains a trace file (and some other small files) for each of the tests. (If you had specified `collect: none` in `options:` in the benchfile, there would be no trace file and the other files would contain all information.)
@@ -386,7 +386,7 @@ In many cases, all you need to use the infra is to manually modify a benchfile,
Analysis commands are based on metrics.
A metric is the name of a measurement we might take. The 'metric' is the *name* of the measurement, not the metric itself. Length is a metric, 3 meters is a 'metric value'.
A metric is the name of a measurement we might take. The 'metric' is the _name_ of the measurement, not the measurement itself. Length is a metric; 3 meters is a 'metric value'.
A run-metric is the name of a measurement of some property of an entire run of a test. For example, `FirstToLastGCSeconds` is the metric that measures the time a test took. Another example is `PauseDurationMSec_Mean`, which is the mean pause duration of a GC. Since getting the average requires looking at every GC, it is considered a metric of the whole run, not a single-gc-metric.

View file

@@ -9,7 +9,7 @@ We're going to see how changing gen0size affects performance. We'll start by cre
```yaml
vary: config
test_executables:
defgcperfsim: /performance/artifacts/bin/GCPerfSim/release/netcoreapp5.0/GCPerfSim.dll
defgcperfsim: /performance/artifacts/bin/GCPerfSim/release/net7.0/GCPerfSim.dll
coreclrs:
a:
core_root: ./coreclr

View file

@@ -158,7 +158,7 @@ Running this yields the following output:
## Numeric Analysis
Numeric Analysis is a feature that allows you to use the `pandas` library to
analyze GC metrics from various runs of a *GCPerfSim* test. It reads all these
analyze GC metrics from various runs of a _GCPerfSim_ test. It reads all these
metric values from all the iterations of the test run and builds a list
from them, ready for `pandas` to consume and interpret. Some of the main
tasks you can do with your data include, but are not limited to:
@@ -176,10 +176,10 @@ For the full pandas documentation, you can check their [website](https://pandas.
### Requirements
First, run any *GCPerfSim* test you want to analyze multiple times. It all depends
First, run any _GCPerfSim_ test you want to analyze multiple times. It all depends
on your goal for how many, but when working with statistics, the more the merrier.
Once your tests are done running, open up `jupyter_notebook.py` in *VSCode* and
Once your tests are done running, open up `jupyter_notebook.py` in VS Code and
run the first cell for general setup. Once that is done, there is a basic
working template at the end of the notebook.

View file

@@ -42,8 +42,8 @@ Adding the new _GCPerfSim_ build, the `yaml` file would look like this:
```yml
vary: executable
test_executables:
orig_gcperfsim: C:\repos\gcperfsim-backup\GCPerfSim\release\netcoreapp5.0\GCPerfSim.dll
mod_gcperfsim: C:\repos\performance\artifacts\bin\GCPerfSim\release\netcoreapp5.0\GCPerfSim.dll
orig_gcperfsim: C:\repos\gcperfsim-backup\GCPerfSim\release\net7.0\GCPerfSim.dll
mod_gcperfsim: C:\repos\performance\artifacts\bin\GCPerfSim\release\net7.0\GCPerfSim.dll
coreclrs:
a:
core_root: C:\repos\core_root

View file

@@ -12,38 +12,38 @@ To learn more about designing benchmarks, please read [Microbenchmark Design Gui
## Quick Start
The first thing that you need to choose is the Target Framework. Available options are: `netcoreapp3.1|net5.0|net6.0|net461`. You can specify the target framework using `-f|--framework` argument. For the sake of simplicity, all examples below use `net6.0` as the target framework.
The first thing that you need to choose is the Target Framework. Available options are: `netcoreapp3.1|net6.0|net7.0|net461`. You can specify the target framework using the `-f|--framework` argument. For the sake of simplicity, all examples below use `net7.0` as the target framework.
The following commands are run from the `src/benchmarks/micro` directory.
To run the benchmarks in Interactive Mode, where you will be asked which benchmark(s) to run:
```cmd
dotnet run -c Release -f net6.0
dotnet run -c Release -f net7.0
```
To list all available benchmarks ([read more](../../../docs/benchmarkdotnet.md#Listing-the-Benchmarks)):
```cmd
dotnet run -c Release -f net6.0 --list flat|tree
dotnet run -c Release -f net7.0 --list flat|tree
```
To filter the benchmarks using a glob pattern applied to namespace.typeName.methodName ([read more](../../../docs/benchmarkdotnet.md#Filtering-the-Benchmarks)):
```cmd
dotnet run -c Release -f net6.0 --filter *Span*
dotnet run -c Release -f net7.0 --filter *Span*
```
To profile the benchmarked code and produce an ETW Trace file ([read more](../../../docs/benchmarkdotnet.md#Profiling)):
```cmd
dotnet run -c Release -f net6.0 --filter $YourFilter --profiler ETW
dotnet run -c Release -f net7.0 --filter $YourFilter --profiler ETW
```
To run the benchmarks for multiple runtimes ([read more](../../../docs/benchmarkdotnet.md#Multiple-Runtimes)):
```cmd
dotnet run -c Release -f net5.0 --filter * --runtimes net5.0 net6.0
dotnet run -c Release -f net6.0 --filter * --runtimes net6.0 net7.0
```
## Private Runtime Builds
@@ -51,19 +51,19 @@ dotnet run -c Release -f net5.0 --filter * --runtimes net5.0 net6.0
If you contribute to [dotnet/runtime](https://github.com/dotnet/runtime) and want to benchmark **local builds of .NET Core**, you need to build [dotnet/runtime](https://github.com/dotnet/runtime) in Release (including tests, so a command similar to `build clr+libs+libs.tests -rc release -lc release`) and then provide the path(s) to CoreRun(s). The provided CoreRun(s) will be used to execute every benchmark in a dedicated process:
```cmd
dotnet run -c Release -f net6.0 --filter $YourFilter \
--corerun C:\git\runtime\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe
dotnet run -c Release -f net7.0 --filter $YourFilter \
--corerun C:\git\runtime\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe
```
To make sure that your changes don't introduce any regressions, you can provide paths to CoreRuns with and without your changes and use the Statistical Test feature to detect regressions/improvements ([read more](../../../docs/benchmarkdotnet.md#Regressions)):
```cmd
dotnet run -c Release -f net6.0 \
dotnet run -c Release -f net7.0 \
--filter BenchmarksGame* \
--statisticalTest 3ms \
--coreRun \
"C:\git\runtime_upstream\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe" \
"C:\git\runtime_fork\artifacts\bin\testhost\net6.0-windows-Release-x64\shared\Microsoft.NETCore.App\6.0.0\CoreRun.exe"
"C:\git\runtime_upstream\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe" \
"C:\git\runtime_fork\artifacts\bin\testhost\net7.0-windows-Release-x64\shared\Microsoft.NETCore.App\7.0.0\CoreRun.exe"
```
If you **prefer to use the dotnet cli** instead of CoreRun, you need to pass the path to the cli via the `--cli` argument.
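For example (the path shown is illustrative; it mirrors the repo-local SDK used elsewhere in these docs):
```cmd
dotnet run -c Release -f net7.0 --filter $YourFilter --cli "C:\Projects\performance\.dotnet\dotnet.exe"
```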