# ML.NET Benchmarks/Performance Tests

This project contains performance benchmarks.

## Run the Performance Tests

Pre-requisite: to fetch the dependencies that come through Git submodules, run the following command before building:

```
git submodule update --init
```

Pre-requisite: on a clean repo with initialized submodules, `build.cmd` at the root installs the right version of `dotnet.exe` and builds the solution. You need to build the solution in `Release`:

```
build.cmd -configuration Release
```

1. Navigate to the performance tests directory (`machinelearning\test\Microsoft.ML.PerformanceTests`).

2. Run the benchmarks in `Release`:

   ```
   build.cmd -configuration Release -performanceTest
   ```

## Authoring new benchmarks

1. The type that contains the benchmark(s) has to be a public, non-sealed, non-static class.
2. Put the initialization logic into a separate public method with the `[GlobalSetup]` attribute. You can use the `Target` property to make it specific to a selected benchmark. Example: `[GlobalSetup(Target = nameof(MakeIrisPredictions))]`.
3. Put the benchmarked code into a separate public method with the `[Benchmark]` attribute. If the benchmark method computes some result, return it from the benchmark; the harness will consume it to avoid dead code elimination.
4. If a given benchmark is a training benchmark, apply `[Config(typeof(TrainConfig))]` to the class. It tells BenchmarkDotNet to run the benchmark only once, in a dedicated process, to mimic the real-world scenario for training.

Examples:

```csharp
public class NonTrainingBenchmark
{
    [GlobalSetup(Target = nameof(TheBenchmark))]
    public void Setup() { /* setup logic goes here */ }

    [Benchmark]
    public SomeResult TheBenchmark() { /* benchmarked code goes here */ }
}
```

```csharp
[Config(typeof(TrainConfig))]
public class TrainingBenchmark
{
    // benchmark methods go here
}
```
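Putting the rules above together, a training benchmark might look like the following sketch. This is illustrative only: the class, the `Row` schema, and the in-memory data are hypothetical and not part of this test suite; only `TrainConfig`, `[GlobalSetup]`, `[Benchmark]`, and the ML.NET APIs shown come from the real project.

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using Microsoft.ML;

[Config(typeof(TrainConfig))]
public class IllustrativeTrainingBenchmark
{
    // Hypothetical row schema used only for this sketch.
    public class Row
    {
        public float Feature1;
        public float Feature2;
        public string Label;
    }

    private MLContext _context;
    private IDataView _data;

    [GlobalSetup(Target = nameof(TrainModel))]
    public void Setup()
    {
        _context = new MLContext(seed: 1);
        // Placeholder in-memory data; a real benchmark would load a dataset.
        var rows = new List<Row>
        {
            new Row { Feature1 = 0.1f, Feature2 = 0.2f, Label = "A" },
            new Row { Feature1 = 0.9f, Feature2 = 0.8f, Label = "B" },
        };
        _data = _context.Data.LoadFromEnumerable(rows);
    }

    [Benchmark]
    public ITransformer TrainModel()
    {
        // Returning the trained model lets the harness consume the result,
        // which prevents dead code elimination (rule 3 above).
        var pipeline = _context.Transforms.Concatenate("Features", "Feature1", "Feature2")
            .Append(_context.Transforms.Conversion.MapValueToKey("Label"))
            .Append(_context.MulticlassClassification.Trainers.SdcaMaximumEntropy());
        return pipeline.Fit(_data);
    }
}
```

Because of `[Config(typeof(TrainConfig))]`, BenchmarkDotNet runs `TrainModel` once in a dedicated process rather than iterating it many times.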

## Running the BenchmarksProjectIsNotBroken test

If your build is failing on the build machines in the `Release` configuration because the `BenchmarksProjectIsNotBroken` test fails, you can debug this test locally by:

1. Building the solution in `Release` locally:

   ```
   build.cmd -configuration Release -performanceTest
   ```

2. Changing the configuration in Visual Studio from `Debug` to `Release`.

3. Changing the annotation on `BenchmarksProjectIsNotBroken` to replace `BenchmarkTheory` with `Theory`, as below:

   ```csharp
   [Theory]
   [MemberData(nameof(GetBenchmarks))]
   public void BenchmarksProjectIsNotBroken(Type type)
   ```

4. Restarting Visual Studio.

5. Running the tests normally from the Test Explorer view.