* Move master -> main

* Change main back to master to reflect actual usage
Drew Scoggins 2021-03-24 11:32:40 -07:00 committed by GitHub
Parent a2292dc926
Commit 86223c2ad5
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
16 changed files with 41 additions and 38 deletions

View File

@ -19,7 +19,7 @@ Finding these benchmarks in a separate repository might be surprising. Performan
This project has adopted the code of conduct defined by the Contributor Covenant to clarify expected behavior in our community. For more information, see the [.NET Foundation Code of Conduct](https://dotnetfoundation.org/code-of-conduct).
[public_build_icon]: https://dev.azure.com/dnceng/public/_apis/build/status/dotnet/performance/performance-ci?branchName=master
[public_build_status]: https://dev.azure.com/dnceng/public/_build/latest?definitionId=271&branchName=master
[internal_build_icon]: https://dev.azure.com/dnceng/internal/_apis/build/status/dotnet/performance/dotnet-performance?branchName=master
[internal_build_status]: https://dev.azure.com/dnceng/internal/_build/latest?definitionId=306&branchName=master
[public_build_icon]: https://dev.azure.com/dnceng/public/_apis/build/status/dotnet/performance/performance-ci?branchName=main
[public_build_status]: https://dev.azure.com/dnceng/public/_build/latest?definitionId=271&branchName=main
[internal_build_icon]: https://dev.azure.com/dnceng/internal/_apis/build/status/dotnet/performance/dotnet-performance?branchName=main
[internal_build_status]: https://dev.azure.com/dnceng/internal/_build/latest?definitionId=306&branchName=main

View File

@ -3,7 +3,7 @@ resources:
- container: ubuntu_x64_build_container
image: microsoft/dotnet-buildtools-prereqs:ubuntu-16.04-c103199-20180628134544
# Trigger builds for PRs targeting master
# Trigger builds for PRs targeting main
pr:
branches:
include:

View File

@ -3,12 +3,12 @@ An introduction of how to run scenario tests can be found in [Scenarios Tests Gu
- [Basic Startup Scenarios](#basic-startup-scenarios)
- [Basic Size On Disk Scenarios](#basic-size-on-disk-scenarios)
## Basic Startup Scenarios
Startup is a performance metric that measures the time to main (from process start to the Main method) of a running application. [Startup Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/Startup) is a test harness that measures throughput in general; its "TimeToMain" parser supports this metric and is used in all of the **Basic Startup Scenarios**.
Startup is a performance metric that measures the time to main (from process start to the Main method) of a running application. [Startup Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/Startup) is a test harness that measures throughput in general; its "TimeToMain" parser supports this metric and is used in all of the **Basic Startup Scenarios**.
[Scenarios Tests Guide](./scenarios-workflow.md) already walks through **startup time of an empty console template** as an example. For other startup scenarios, refer to [Command Matrix](#command-matrix).
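To make the metric concrete, here is a rough, hypothetical Python sketch of the idea: start a process, wait for the first sign that `Main` has run, and report the elapsed time. It is only an illustration; the real Startup harness parses trace events rather than stdout, and the target command below is a placeholder.
```python
import subprocess
import time

def time_to_first_output(command):
    """Rough proxy for 'time to main': launch a process and time its first output line."""
    start = time.perf_counter()
    proc = subprocess.Popen(command, stdout=subprocess.PIPE, text=True)
    proc.stdout.readline()  # blocks until the app prints something after reaching Main
    elapsed_ms = (time.perf_counter() - start) * 1000
    proc.wait()
    return elapsed_ms

# Placeholder target; the real scenarios run a published app for many iterations.
samples = [time_to_first_output(["dotnet", "emptyconsoletemplate.dll"]) for _ in range(5)]
print(f"median: {sorted(samples)[len(samples) // 2]:.1f} ms")
```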
## Basic Size On Disk Scenarios
Size On Disk, as the name suggests, is a metric that recursively measures the sizes of a directory and its children. [4Disk Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/4Disk) is the test harness that provides this functionality and it's used in all of the **Basic Size On Disk Scenarios**.
Size On Disk, as the name suggests, is a metric that recursively measures the sizes of a directory and its children. [4Disk Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/4Disk) is the test harness that provides this functionality and it's used in all of the **Basic Size On Disk Scenarios**.
We will walk through **Self-Contained Empty Console App Size On Disk** scenario as an example.
### Step 1 Initialize Environment
@ -29,7 +29,7 @@ Now run the test:
```
python3 test.py sod
```
[Size On Disk Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/4Disk) checks the default `pub\` directory and shows the sizes of the directory and its children:
[Size On Disk Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/4Disk) checks the default `pub\` directory and shows the sizes of the directory and its children:
```
[2020/09/29 04:21:35][INFO] ----------------------------------------------
[2020/09/29 04:21:35][INFO] Initializing logger 2020-09-29 04:21:35.865708

View File

@ -12,7 +12,7 @@ and then
```
dotnet publish -c Release -o pub
```
and measures the sizes of the `pub\` directory and its children with [SizeOnDisk Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/SizeOnDisk).
and measures the sizes of the `pub\` directory and its children with [SizeOnDisk Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/SizeOnDisk).
**An introduction to running scenario tests in general can be found in the [Scenario Tests Guide](link). The current document gives specific instructions for running blazor scenario tests.**
### Prerequisites
@ -27,14 +27,14 @@ Run precommand to create and publish a new blazorwasm template:
cd blazor
python3 pre.py publish --msbuild "/p:_TrimmerDumpDependencies=true"
```
Now there should be source code of the blazorwasm project under `app\` and published output under `pub\`. The `--msbuild "/p:_TrimmerDumpDependencies=true"` argument is optional and can be added to generate [linker dump](https://github.com/mono/linker/blob/master/src/analyzer/README.md) from the build, which will be saved to `blazor\app\obj\<Configuration>\<Runtime>\linked\linker-dependencies.xml.gz`.
Now there should be source code of the blazorwasm project under `app\` and published output under `pub\`. The `--msbuild "/p:_TrimmerDumpDependencies=true"` argument is optional and can be added to generate [linker dump](https://github.com/mono/linker/blob/main/src/analyzer/README.md) from the build, which will be saved to `blazor\app\obj\<Configuration>\<Runtime>\linked\linker-dependencies.xml.gz`.
### Step 3 Run Test
Run testcommand to measure the size on disk of the published output:
```
py -3 test.py sod --scenario-name "SOD - New Blazor Template - Publish"
```
In the command, `sod` refers to the "Size On Disk" metric and [SizeOnDisk Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/SizeOnDisk) will be used for this scenario. Note that `--scenario-name` is optional and the value can be changed for your own reference.
In the command, `sod` refers to the "Size On Disk" metric and [SizeOnDisk Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/SizeOnDisk) will be used for this scenario. Note that `--scenario-name` is optional and the value can be changed for your own reference.
The test output should look like the following:
```
@ -54,7 +54,7 @@ The test output should look like the following:
|72.000 count |72.000 count |72.000 count
[2020/09/25 11:24:46][INFO] Synthetic Wire Size - .br
```
[SizeOnDisk Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/SizeOnDisk) recursively measures the size of each folder and its children under the specified directory. In addition to the folders and files (path-like counters such as `pub\wwwroot\_framework\blazor.webassembly.js.gz`), it also generates aggregate counters for each file type (such as `Aggregate - .dll`). For this **New Blazorwasm Template Size On Disk** scenario, counter names starting with `Synthetic Wire Size` are a counter type unique to blazorwasm that simulates the size of files actually transferred over the wire when the webpage loads.
[SizeOnDisk Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/SizeOnDisk) recursively measures the size of each folder and its children under the specified directory. In addition to the folders and files (path-like counters such as `pub\wwwroot\_framework\blazor.webassembly.js.gz`), it also generates aggregate counters for each file type (such as `Aggregate - .dll`). For this **New Blazorwasm Template Size On Disk** scenario, counter names starting with `Synthetic Wire Size` are a counter type unique to blazorwasm that simulates the size of files actually transferred over the wire when the webpage loads.
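As a rough illustration of what such a measurement involves (the actual SizeOnDisk tool is a C# harness; this is only a conceptual Python sketch), the following walks a published directory, attributes each file's size to its folder and all ancestors, and aggregates totals per file extension:
```python
import os
from collections import defaultdict

def size_on_disk(root):
    """Recursively sum file sizes per directory and aggregate them per file extension."""
    per_dir = defaultdict(int)
    per_ext = defaultdict(int)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            per_ext[os.path.splitext(name)[1] or "<no extension>"] += size
            # Attribute the file's size to its directory and every ancestor up to root.
            current = dirpath
            while True:
                per_dir[current] += size
                if os.path.samefile(current, root):
                    break
                current = os.path.dirname(current)
    return per_dir, per_ext

dirs, exts = size_on_disk("pub")
for ext, total in sorted(exts.items(), key=lambda kv: -kv[1]):
    print(f"Aggregate - {ext}: {total} bytes")
```
Run against `pub\`, this would print per-extension totals similar in spirit to the `Aggregate - .dll` counters shown above.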
### Step 4 Run Postcommand
Same instruction of [Step 4 in Scenario Tests Guide](scenarios-workflow.md#step-4-run-postcommand).
@ -67,5 +67,5 @@ For the purpose of quick reference, the commands can be summarized into the foll
## Relevant Links
- [Blazorwasm](https://github.com/dotnet/aspnetcore/tree/master/src/Components)
- [Blazorwasm](https://github.com/dotnet/aspnetcore/tree/main/src/Components)
- [IL Linker](https://github.com/mono/linker)

View File

@ -14,13 +14,13 @@ An introduction of how to run scenario tests can be found in [Scenarios Tests Gu
- clean state of the test machine (anti-virus scan is off and no other user programs are running -- to minimize the influence of the environment on the test)
### 1. Generate Core_Root
These performance tests use the built runtime test directory [Core_Root](https://github.com/dotnet/runtime/blob/master/docs/workflow/testing/using-corerun.md) for the crossgen tool itself and other runtime assemblies as compilation input. Core_Root is an intermediate output from the runtime build, which contains runtime assemblies and tools.
These performance tests use the built runtime test directory [Core_Root](https://github.com/dotnet/runtime/blob/main/docs/workflow/testing/using-corerun.md) for the crossgen tool itself and other runtime assemblies as compilation input. Core_Root is an intermediate output from the runtime build, which contains runtime assemblies and tools.
You can skip this step if you already have Core_Root. To generate Core_Root directory, first clone [dotnet/runtime repo](https://github.com/dotnet/runtime) and run:
```
src\tests\build.cmd Release <arch> generatelayoutonly
```
This follows [the instructions for building coreclr tests](https://github.com/dotnet/runtime/blob/master/docs/workflow/testing/coreclr/windows-test-instructions.md) and creates the Core_Root directory.
This follows [the instructions for building coreclr tests](https://github.com/dotnet/runtime/blob/main/docs/workflow/testing/coreclr/windows-test-instructions.md) and creates the Core_Root directory.
If the build is successful, you should have a Core_Root directory at a path like:
```
@ -30,7 +30,7 @@ If the build's successful, you should have Core_Root with the path like:
Same instruction of [Scenario Tests Guide - Step 1](./scenarios-workflow.md#step-1-initialize-environment).
## Crossgen Throughput Scenario
**Crossgen Throughput** is a scenario test that measures the throughput of [crossgen compilation](https://github.com/dotnet/runtime/blob/master/docs/workflow/building/coreclr/crossgen.md). To be more specific, our test *implicitly* calls
**Crossgen Throughput** is a scenario test that measures the throughput of [crossgen compilation](https://github.com/dotnet/runtime/blob/main/docs/workflow/building/coreclr/crossgen.md). To be more specific, our test *implicitly* calls
```
.\crossgen.exe <assembly to compile>
```
@ -48,7 +48,7 @@ Now run the test, in our example we use `System.Private.Xml.dll` under Core_Root
```
python3 test.py crossgen --core-root <path to core_root>\Core_Root --single System.Private.Xml.dll
```
This will run the test harness [Startup Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/Startup), which runs crossgen compilation in several iterations and measures its throughput. The result will be something like this:
This will run the test harness [Startup Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/Startup), which runs crossgen compilation in several iterations and measures its throughput. The result will be something like this:
```
[2020/09/25 09:54:48][INFO] Parsing traces\Crossgen Throughput - System.Private.Xml.etl
@ -85,13 +85,13 @@ For scenario which compiles a **single assembly**, we use `System.Private.Xml.dl
python3 test.py crossgen2 --core-root <path to core_root>\Core_Root --single System.Private.Xml.dll
```
For the scenario that does **composite compilation**, we try to compile the majority of the runtime assemblies, represented by [framework-r2r.dll.rsp](https://github.com/dotnet/performance/blob/master/src/scenarios/crossgen2/framework-r2r.dll.rsp):
For the scenario that does **composite compilation**, we try to compile the majority of the runtime assemblies, represented by [framework-r2r.dll.rsp](https://github.com/dotnet/performance/blob/main/src/scenarios/crossgen2/framework-r2r.dll.rsp):
```
python3 test.py crossgen2 --core-root <path to core_root>\Core_Root --composite <repo root>/src/scenarios/crossgen2/framework-r2r.dll.rsp
```
Note that for the composite scenario, the command line can exceed the maximum length if it takes a list of paths to assemblies, so an `.rsp` file is used to avoid that. The `--composite <rsp file>` option refers to an `.rsp` file that contains the list of assemblies to compile. A sample file, [framework-r2r.dll.rsp](https://github.com/dotnet/performance/blob/master/src/scenarios/crossgen2/framework-r2r.dll.rsp), can be found under the `crossgen2\` folder.
Note that for the composite scenario, the command line can exceed the maximum length if it takes a list of paths to assemblies, so an `.rsp` file is used to avoid that. The `--composite <rsp file>` option refers to an `.rsp` file that contains the list of assemblies to compile. A sample file, [framework-r2r.dll.rsp](https://github.com/dotnet/performance/blob/main/src/scenarios/crossgen2/framework-r2r.dll.rsp), can be found under the `crossgen2\` folder.
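To make the role of the response file concrete, here is a hypothetical Python sketch that writes one assembly path per line into an `.rsp` file; the output file name, the prefix filtering, and the example paths are illustrative, not how `framework-r2r.dll.rsp` was actually produced.
```python
import glob
import os

def write_rsp(core_root, rsp_path, prefixes=("System.", "Microsoft.")):
    """Write one assembly path per line so the compiler reads its inputs from a file
    instead of an overlong command line."""
    assemblies = [
        path for path in glob.glob(os.path.join(core_root, "*.dll"))
        if os.path.basename(path).startswith(prefixes)
    ]
    with open(rsp_path, "w") as rsp:
        rsp.write("\n".join(sorted(assemblies)) + "\n")
    return len(assemblies)

# Example call with placeholder paths:
# write_rsp(r"C:\core_root\Core_Root", "my-framework.dll.rsp")
```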
The test command runs the test harness [Startup Tool](https://github.com/dotnet/performance/tree/master/src/tools/ScenarioMeasurement/Startup), which runs crossgen2 compilation in several iterations and measures its throughput. The result should partially look like:
The test command runs the test harness [Startup Tool](https://github.com/dotnet/performance/tree/main/src/tools/ScenarioMeasurement/Startup), which runs crossgen2 compilation in several iterations and measures its throughput. The result should partially look like:
```
[2020/09/25 10:25:09][INFO] Merging traces\Crossgen2 Throughput - Single - System.Private.perflabkernel.etl,traces\Crossgen2 Throughput - Single - System.Private.perflabuser.etl...
[2020/09/25 10:25:11][INFO] Trace Saved to traces\Crossgen2 Throughput - Single - System.Private.etl
@ -157,4 +157,4 @@ For the purpose of quick reference, the commands can be summarized into the foll
| Crossgen2 Size on Disk | crossgen2 | pre.py crossgen2 --core-root \<path to Core_Root> --single \<assembly name> | test.py sod --dirs crossgen.out | post.py | N/A | Windows-x64;Linux |
## Relevant Links
[Crossgen2 Compilation Structure Enhancements](https://github.com/dotnet/runtime/blob/master/docs/design/features/crossgen2-compilation-structure-enhancements.md)
[Crossgen2 Compilation Structure Enhancements](https://github.com/dotnet/runtime/blob/main/docs/design/features/crossgen2-compilation-structure-enhancements.md)

View File

@ -363,7 +363,7 @@ Dictionary<TKey, TValue> Dictionary<TKey, TValue>(int count)
As of today, the `T` can be: `byte`, `char`, `int`, `double`, `bool` and `string`. Extending `T` to more types is very welcome!
**Note:** `ValuesGenerator` simply always creates a new instance of `Random` with a constant seed. It's a crucial component, and its correctness is verified using [Unit Tests](https://github.com/dotnet/performance/blob/master/src/tests/harness/BenchmarkDotNet.Extensions.Tests/UniqueValuesGeneratorTests.cs).
**Note:** `ValuesGenerator` simply always creates a new instance of `Random` with a constant seed. It's a crucial component, and its correctness is verified using [Unit Tests](https://github.com/dotnet/performance/blob/main/src/tests/harness/BenchmarkDotNet.Extensions.Tests/UniqueValuesGeneratorTests.cs).
**Note:** Please don't use `Random` directly in the benchmarks; use `ValuesGenerator` instead and extend it with missing features when needed.
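The point of the constant seed is reproducibility: every run, machine, and process generates exactly the same inputs, so results stay comparable. A conceptual Python analogue of that behavior (`ValuesGenerator` itself is C#, and the seed value below is made up) looks like this:
```python
import random

SEED = 12345  # illustrative constant; the real generator hard-codes its own seed

def array_of_unique_values(count):
    """Deterministic benchmark input: a fresh generator with a fixed seed on every call."""
    rng = random.Random(SEED)
    values, seen = [], set()
    while len(values) < count:
        candidate = rng.randint(0, 2**31 - 1)
        if candidate not in seen:  # keep the generated values unique
            seen.add(candidate)
            values.append(candidate)
    return values

# Two independent calls (or two separate runs) produce identical data.
assert array_of_unique_values(10) == array_of_unique_values(10)
```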

View File

@ -36,7 +36,7 @@
**This doc explains how to profile local [dotnet/runtime](https://github.com/dotnet/runtime) builds and it is targeted at [dotnet/runtime](https://github.com/dotnet/runtime) repository contributors.**
Before you start any performance investigation, you need to [build](#Build) [dotnet/runtime](https://github.com/dotnet/runtime) in **Release**, create a small [repro](#Repro) app, and change the default [project settings](#Project-Settings). If you want to profile a BenchmarkDotNet test (like those in this repo), [BenchmarkDotNet has a built-in profiling option](https://github.com/dotnet/performance/blob/master/docs/benchmarkdotnet.md#profiling) to collect a trace.
Before you start any performance investigation, you need to [build](#Build) [dotnet/runtime](https://github.com/dotnet/runtime) in **Release**, create a small [repro](#Repro) app, and change the default [project settings](#Project-Settings). If you want to profile a BenchmarkDotNet test (like those in this repo), [BenchmarkDotNet has a built-in profiling option](https://github.com/dotnet/performance/blob/main/docs/benchmarkdotnet.md#profiling) to collect a trace.
The next step is to choose the right profiler depending on the OS:
@ -44,7 +44,7 @@ The next step is to choose the right profiler depending on the OS:
* [Visual Studio Profiler](#Visual-Studio-Profiler) allows for [CPU](#CPU-Investigation) and [memory](#Allocation-Tracking) profiling. It's intuitive to use and you should **use it by default**.
* [PerfView](#PerfView) is the ultimate .NET Profiler but it has a high entry cost. If Visual Studio Profiler is not enough, you should switch to [PerfView](#PerfView).
* Linux
* [dotnet trace](https://github.com/dotnet/diagnostics/blob/master/documentation/dotnet-trace-instructions.md) works on every OS, is easy to use, and should be your **default choice** on Unix systems.
* [dotnet trace](https://github.com/dotnet/diagnostics/blob/main/documentation/dotnet-trace-instructions.md) works on every OS, is easy to use, and should be your **default choice** on Unix systems.
* [PerfCollect](#PerfCollect) is a simple, yet very powerful script that allows for profiling native parts of .NET Core. You should use it if `dotnet trace` cannot handle your case.
If you clearly need information on CPU instruction level, then depending on the hardware you should use [Intel VTune](#VTune) or [AMD uProf](https://developer.amd.com/amd-uprof/).
@ -600,7 +600,7 @@ PerfCollect is a simple, yet very powerful script that allows for profiling .NET
In contrast to `dotnet trace`, it gives you native call stacks, which are very useful when you need to profile native parts of [dotnet/runtime](https://github.com/dotnet/runtime).
It has its own excellent [documentation](https://github.com/dotnet/runtime/blob/master/docs/project/linux-performance-tracing.md) (a **highly recommended read**); the goal of this doc is not to duplicate it, but rather to show **how to profile a local [dotnet/runtime](https://github.com/dotnet/runtime) build running on a Linux VM from a Windows developer machine**. We need two OSes because, as of today, only PerfView is capable of opening a `PerfCollect` trace file.
It has its own excellent [documentation](https://github.com/dotnet/runtime/blob/main/docs/project/linux-performance-tracing.md) (a **highly recommended read**); the goal of this doc is not to duplicate it, but rather to show **how to profile a local [dotnet/runtime](https://github.com/dotnet/runtime) build running on a Linux VM from a Windows developer machine**. We need two OSes because, as of today, only PerfView is capable of opening a `PerfCollect` trace file.
## Preparing Your Machine

View File

@ -10,8 +10,8 @@ parameters:
WorkItemDirectory: '' # optional -- a payload directory to zip up and send to Helix; requires WorkItemCommand; incompatible with XUnitProjects
CorrelationPayloadDirectory: '' # optional -- a directory to zip up and send to Helix as a correlation payload
IncludeDotNetCli: false # optional -- true will download a version of the .NET CLI onto the Helix machine as a correlation payload; requires DotNetCliPackageType and DotNetCliVersion
DotNetCliPackageType: '' # optional -- either 'sdk' or 'runtime'; determines whether the sdk or runtime will be sent to Helix; see https://raw.githubusercontent.com/dotnet/core/master/release-notes/releases.json
DotNetCliVersion: '' # optional -- version of the CLI to send to Helix; based on this: https://raw.githubusercontent.com/dotnet/core/master/release-notes/releases.json
DotNetCliPackageType: '' # optional -- either 'sdk' or 'runtime'; determines whether the sdk or runtime will be sent to Helix;
DotNetCliVersion: '' # optional -- version of the CLI to send to Helix; based on this:
EnableXUnitReporter: false # optional -- true enables XUnit result reporting to Mission Control
WaitForWorkItemCompletion: true # optional -- true will make the task wait until work items have been completed and fail the build if work items fail. False is "fire and forget."
IsExternal: false # [DEPRECATED] -- doesn't do anything, jobs are external if HelixAccessToken is empty and Creator is set

View File

@ -17,7 +17,7 @@ For more information refer to: benchmarking-workflow.md
../docs/benchmarking-workflow.md
- or -
https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow.md
https://github.com/dotnet/performance/blob/main/docs/benchmarking-workflow.md
'''
from argparse import ArgumentParser, ArgumentTypeError

View File

@ -2,6 +2,10 @@ from argparse import ArgumentParser
class ChannelMap():
channel_map = {
'main': {
'tfm': 'net6.0',
'branch': 'master'
},
'master': {
'tfm': 'net6.0',
'branch': 'master'

View File

@ -67,7 +67,7 @@ class FrameworkAction(Action):
To run CoreRT benchmarks we need to run the host BDN process as the latest
.NET Core; the host process will build and run CoreRT benchmarks
'''
return ChannelMap.get_target_framework_moniker("master") if framework == 'corert' else framework
return ChannelMap.get_target_framework_moniker("main") if framework == 'corert' else framework
@staticmethod
def get_target_framework_monikers(frameworks: list) -> list:
@ -854,7 +854,7 @@ def __process_arguments(args: list):
dest='channels',
required=False,
nargs='+',
default=['master'],
default=['main'],
choices= ChannelMap.get_supported_channels(),
help='Download DotNet Cli from the Channel specified.'
)

View File

@ -8,7 +8,7 @@ parser.add_argument('--branch-name', type=str, dest='branch',
args = parser.parse_args()
if not args.branch == "master":
if not args.branch == "main":
print("##vso[task.setvariable variable=DotnetVersion;isSecret=false;isOutput=false]")
else:
if not os.path.exists('eng/Versions.props'):

View File

@ -6,8 +6,8 @@ Command examples in this document use Bash/PowerShell syntax. If using Window's
The general workflow when using the GC infra is:
* For testing your changes to coreclr, get a master branch build of coreclr, and also your own build.
(You can of course use any version of coreclr, not just master.
* For testing your changes to coreclr, get a main branch build of coreclr, and also your own build.
(You can of course use any version of coreclr, not just main.
You can also only test with a single coreclr.)
* Write a benchfile. (Or generate default ones with `suite-create` as in the tutorial.) This will reference the coreclrs and list the tests to be run.
* Run the benchfile and collect traces.
@ -61,7 +61,7 @@ _ARM NOTE_: Skip this step. Visual Studio and its build tools are not supported
### Other setup
You should have `dotnet` installed.
On non-Windows systems, you'll need [`dotnet-trace`](https://github.com/dotnet/diagnostics/blob/master/documentation/dotnet-trace-instructions.md) to generate trace files from tests.
On non-Windows systems, you'll need [`dotnet-trace`](https://github.com/dotnet/diagnostics/blob/main/documentation/dotnet-trace-instructions.md) to generate trace files from tests.
On non-Windows systems, to run container tests, you'll need `cgroup-tools` installed.
You should have builds of coreclr available for use in the next step.

View File

@ -65,6 +65,6 @@ On Windows, this process is almost the same as for any other architecture. Speci
On Linux, there are additional steps to take. You need to have a `ROOTFS_DIR` and specify the `--cross` (`-cross` for `build-test.sh`) flag when calling the build scripts.
Detailed instructions on how to generate the _ROOTFS_ on Linux and how to cross-build can be found [here](https://github.com/dotnet/runtime/blob/master/docs/workflow/building/coreclr/cross-building.md).
Detailed instructions on how to generate the _ROOTFS_ on Linux and how to cross-build can be found [here](https://github.com/dotnet/runtime/blob/main/docs/workflow/building/coreclr/cross-building.md).
Another alternative is to use Docker containers. They allow for a more straightforward and less complicated setup, and you can find the instructions to use them [here](https://github.com/dotnet/runtime/blob/master/docs/workflow/building/coreclr/linux-instructions.md). They allow for both normal and cross building.
Another alternative is to use Docker containers. They allow for a more straightforward and less complicated setup, and you can find the instructions to use them [here](https://github.com/dotnet/runtime/blob/main/docs/workflow/building/coreclr/linux-instructions.md). They allow for both normal and cross building.

View File

@ -3,7 +3,6 @@
// See the LICENSE file in the project root for more information.
// Adapted from binary-trees C# .NET Core #6 program
// https://salsa.debian.org/benchmarksgame-team/benchmarksgame/-/blob/master/public/download/benchmarksgame-sourcecode.zip
// Best-scoring C# .NET Core version as of 2020-08-12
// The Computer Language Benchmarks Game

View File

@ -10,7 +10,7 @@ function Print-Usage(){
Write-Host "Choose ONE of the following commands:"
Write-Host ".\init.ps1 # sets up PYTHONPATH only; uses default dotnet in PATH"
Write-Host ".\init.ps1 -DotnetDirectory <custom dotnet root directory; ex: 'C:\Program Files\dotnet\'> # sets up PYTHONPATH; uses the specified dotnet"
Write-Host ".\init.ps1 -Channel <channel to download new dotnet; ex: 'master'> # sets up PYTHONPATH; downloads dotnet from the specified channel or branch to <repo root>\tools\ and uses it\n For a list of channels, check <repo root>\scripts\channel_map.py"
Write-Host ".\init.ps1 -Channel <channel to download new dotnet; ex: 'master'> # sets up PYTHONPATH; downloads dotnet from the specified channel or branch to <repo root>\tools\ and uses it\n For a list of channels, check <repo root>\scripts\channel_map.py"
Exit 1
}