Update RFCs to include state and comply with the markdown linter.
Create a new GitHub issue template for RFCs.
Amaury Levé 2022-10-17 17:35:18 +02:00, committed by GitHub
Parent 074aa5d8ba
Commit 08a5ee36ad
No key matching this signature was found
GPG key ID: 4AEE18F83AFDEB23
11 changed files: 302 additions and 85 deletions

.github/ISSUE_TEMPLATE/rfc.md vendored (new file, 43 lines)
View file

@ -0,0 +1,43 @@
---
name: MSTest RFC
about: Request for Comments
title: ''
labels: []
---
# RFC NNN - (Fill me in with a feature name)
- [ ] Approved in principle
- [ ] Under discussion
- [ ] Implementation
- [ ] Shipped
## Summary
One paragraph explanation of the feature.
## Motivation
Why are we doing this? What use cases does it support? What is the expected outcome?
## Detailed design
This is the bulk of the RFC. Explain the design in enough detail for somebody familiar
with testing to understand, and for somebody familiar with MSTest to implement.
This should get into specifics and corner-cases, and include examples of how the feature is used.
## Drawbacks
Why should we *not* do this?
## Alternatives
What other designs have been considered? What is the impact of not doing this?
## Compatibility
Is this a breaking change?
## Unresolved questions
What parts of the design are still TBD?

View file

@ -1,21 +1,31 @@
# RFC 001 - Framework Extensibility for Trait Attributes
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This details the MSTest V2 framework extensibility for attributes that are traits for a test.
## Motivation
It is a requirement for teams to have trait attributes which are strongly typed over a test method as opposed to just a `KeyValuePair<string, string>`. This allows teams to have a standard across the team which is less error prone and more natural to specify. Users can also filter tests based on the values of these attributes.
## Detailed Design
### Requirements
1. One should be able to have a strongly typed trait attribute on a test method.
2. One should be able to filter in VS IDE based on this trait attribute.
3. One should be able to see this information in Test Reporting.
4. This extensibility should also guarantee that attributes brought from MSTest V1 into MSTest V2 with this extensibility model remain source compatible.
### Proposed solution
The test framework currently has a TestProperty attribute which can be used to define custom traits as a `KeyValuePair<string, string>`. The definition of this attribute is as below:
```csharp
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public sealed class TestPropertyAttribute : Attribute
@ -45,11 +55,12 @@ public sealed class TestPropertyAttribute : Attribute
/// </summary>
public string Value { get; }
}
```
This TestProperty is also filled into the TestPlatform's TestCase object which makes it available for reporting in the various loggers that can be plugged into the TestPlatform.
To provide extension writers with the ability to have strongly typed attributes to achieve what TestProperty above achieves, the proposal is to make TestProperty a non-sealed class allowing classes to extend it like below:
```csharp
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class CustomTraitPropertyAttribute : TestPropertyAttribute
@ -66,7 +77,9 @@ public class CustomTraitPropertyAttribute : TestPropertyAttribute
}
}
```
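Because the attribute body above is truncated by the diff, here is a minimal self-contained sketch of what such an extension could look like, assuming `TestPropertyAttribute` is unsealed as proposed; the class and trait names are only illustrative.
```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Illustrative strongly typed trait: it stores its value through the usual
// name/value pair so existing filtering and reporting keep working.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class CustomTraitPropertyAttribute : TestPropertyAttribute
{
    public CustomTraitPropertyAttribute(int value)
        : base("CustomTraitProperty", value.ToString())
    {
    }
}
```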
And test methods would be decorated in a much more convenient form as below:
```csharp
[TestMethod]
[CustomTraitProperty(234)]
@ -75,16 +88,19 @@ public void TestMethod()
}
```
Users can then filter tests in VS IDE Test Explorer based on this Test Property. The query string that would filter the test above would look like
```shell
Trait:"CustomTraitProperty" Trait:"234"
```
This would provide extension writers with the same level of functionality that TestProperty already has with filtering and reporting plus the added ease of having a strongly typed trait.
### Requirements from Test Platform
1. Custom attributes should also show up in the trx logger. This is a bigger change since it might require changes in the trx schema.
## Unresolved questions
1. Should this attribute be opened up to be at a class and assembly level like TestCategory Attribute?
2. Is filtering needed at the console level for these extension attributes? Currently the TestCaseFilter switch at the console level only supports - TestCategory, Priority, FullyQualifiedName, Name, ClassName. This is a finite list. In order to support filtering for extended attributes this list needs to be made dynamic.

View file

@ -1,24 +1,35 @@
# RFC 002 - Framework Extensibility for Custom Assertions
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This details the MSTest V2 framework extensibility for extending the Assert class to add custom assertions.
## Motivation
Often times, the default set of assertion APIs are not sufficient to satisfy a wide range of requirements for unit test writers. In most of these situations users end up having utility methods to address this need reducing discoverability in a test suite. If the test framework provides an extensibility in the assertion infrastructure itself, custom assertion functionality can be
- Easily accessed
- Easily organized and
- Possibly shared with the community.
## Detailed Design
### Requirements
1. Custom assertions should be easily pluggable into the test framework's assertion infrastructure.
2. Users of custom assertions should be able to acquire and use them with ease.
### Proposed solution
Here is a solution that is both easily pluggable and acquirable:
The test framework's Assertion class should be a non-static singleton with a C# property ('That') for accessing the instance:
```csharp
public class Assert
{
@ -33,6 +44,7 @@ public class Assert
```
Extension writers can then add C# extension methods for the Assertion class like below:
```csharp
public static class SampleAssertExtensions
{
@ -48,6 +60,7 @@ public static class SampleAssertExtensions
```
And consumers of this extension can consume it in their test code with the below simple syntax:
```csharp
using SampleAssertExtensionsNamespace;
@ -59,21 +72,27 @@ public void TestMethod
```
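Since the extension and consumption snippets above are truncated by the diff, here is a compact end-to-end sketch under the proposed design; `IsOfLength` and the test class name are only illustrative.
```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class SampleAssertExtensions
{
    // A custom assertion hangs off Assert.That, so it is discoverable via IntelliSense.
    public static void IsOfLength(this Assert assert, int expectedLength, string actual)
    {
        if (actual == null || actual.Length != expectedLength)
        {
            throw new AssertFailedException(
                $"Expected a string of length {expectedLength} but got '{actual}'.");
        }
    }
}

[TestClass]
public class AssertExtensionTests
{
    [TestMethod]
    public void TestMethod()
    {
        Assert.That.IsOfLength(5, "hello");
    }
}
```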
#### Benefits for custom assertion writers
1. Leverages the default C# constructs - No new interfaces/objects to understand and extend.
2. Extensions can be organized under a verb. For instance assertions expecting exceptions can be organized under the Throws verb like
```csharp
Assert.That.Throws.InnerException
Assert.That.Throws.SystemException
Assert.That.Throws.ExceptionWithMessage
```
3. Ability to create a chain of assertions in a single assert. For instance
```csharp
Assert.That.IsNotNull(animal).And.IsOfType<Cat>(animal)
```
#### Benefits for custom assertion consumers
1. Easily discoverable - Intellisense shows up in most IDEs ensuring discoverability for these custom assertions since they are all rooted under the in-box Assert class.
2. Readable - Using linq type expressions enhances readability.
## Unresolved questions
1. How important are combined asserts in a single Assert statement (`Assert.That.Something.And.Something`) and how much of this should be available in-box?

View file

@ -1,20 +1,31 @@
# RFC 003 - Framework Extensibility for Custom Test Execution
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This document deals with how test runs can be customized using the MSTest V2 Framework extensibility.
## Motivation
The default workflow for running tests in MSTest V2 involves creating an instance of a TestClass and invoking a TestMethod in it. There are multiple instances where this workflow needs to be tweaked so that specific tests are runnable. Some tests need to be run on a UI thread, others need to be parameterized. This requires that the Test Framework provide extensibility points so that test authors have the ability to run their tests differently.
## Detailed Design
The execution flow can broadly be extended at two levels:
1. Test Method level
2. Test Class level
The sections below detail how one can customize execution at these two points.
### Test Method level
Customizing test method level execution is simple - Extend the `TestMethodAttribute`. The `TestMethodAttribute` has the following signature:
```csharp
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class TestMethodAttribute : Attribute
@ -108,7 +119,7 @@ public interface ITestMethod
From a test author's perspective, the test method would now be adorned with the type that extends `TestMethodAttribute` to light up the extended functionality.
Let us take a very simple example to apply this extensibility on - the task is to validate the stability of a test scenario, that is, to ensure that the test for that scenario always passes when run 'n' number of times.
We start by declaring an `IterativeTestMethodAttribute` that extends `TestMethodAttribute`. We then override `TestMethodAttribute.Execute()` to run the test 'n' number of times.
```csharp
@ -151,7 +162,8 @@ public class LongRunningScenarios()
```
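The example above is mostly elided by the diff, so here is a minimal sketch of such an attribute, assuming the `TestMethodAttribute.Execute`/`ITestMethod` contract described earlier; the names and the way results are flattened are illustrative.
```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class IterativeTestMethodAttribute : TestMethodAttribute
{
    private readonly int stabilityThreshold;

    public IterativeTestMethodAttribute(int stabilityThreshold)
    {
        this.stabilityThreshold = stabilityThreshold;
    }

    public override TestResult[] Execute(ITestMethod testMethod)
    {
        // Run the test 'n' times and report every individual result.
        var results = new List<TestResult>();
        for (int count = 0; count < this.stabilityThreshold; count++)
        {
            results.AddRange(base.Execute(testMethod));
        }

        return results.ToArray();
    }
}
```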
### Test Class level
Scaling up the test method level extensibility gets one to a position of customizing execution of all test methods under a unit, which in this case is a TestClass. One can do so by extending the `TestClassAttribute`.
```csharp
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
@ -195,7 +207,7 @@ public class IterativeTestClassAttribute : TestClassAttribute
}
```
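The class-level example is likewise elided; a minimal sketch of the conditional `GetTestMethodAttribute` override discussed below might look like this (names illustrative).
```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class IterativeTestClassAttribute : TestClassAttribute
{
    private readonly int stabilityThreshold;

    public IterativeTestClassAttribute(int stabilityThreshold)
    {
        this.stabilityThreshold = stabilityThreshold;
    }

    public override TestMethodAttribute GetTestMethodAttribute(TestMethodAttribute testMethodAttribute)
    {
        // Only wrap the method if it is not already iterative, so a method-level
        // stabilityThreshold keeps precedence over the class-level value.
        if (testMethodAttribute is IterativeTestMethodAttribute)
        {
            return testMethodAttribute;
        }

        return new IterativeTestMethodAttribute(this.stabilityThreshold);
    }
}
```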
The Test Method level extensibility workflow then kicks in when running all test methods in the class, ensuring that each method is run 'n' number of times. A point to note from the code sample is that one can have a method-level value for 'n' that overrides the class-level value. This is possible because `GetTestMethodAttribute` conditionally returns a new `IterativeTestMethodAttribute` only if the attribute is not already of that type. So if a method is already adorned with an `IterativeTestMethodAttribute`, then the stabilityThreshold on the method takes precedence over the class. Thus, one can choose how each individual method in the unit is executed.
From a test author's perspective, the test class would now be adorned with an `IterativeTestClassAttribute` instead.
@ -217,6 +229,7 @@ public class LongRunningScenarios()
}
```
## Unresolved questions
1. There can only be one extension that is in control of the execution flow in this model. Should this change to allow the execution flow through multiple extensions? What would that look like?
2. Would a similar model work for extensions that want to hook into Initialize/Cleanup functionality?

View file

@ -1,19 +1,29 @@
# RFC 004 - In-assembly Parallel Execution
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Motivation
The key motivation is to complete the execution of a suite of tests, within a single container, faster.
Coarse-grained parallelization is already supported by vstest, and is available to all test frameworks. That works by launching test execution on each available core as a distinct process, and handing it a container worth of tests (assembly, DLL, or relevant artifact containing the tests to execute) to execute. The unit of isolation is a process. The unit of scheduling is a test container. You can read more about that in our [blogpost](https://blogs.msdn.microsoft.com/visualstudioalm/2016/10/10/parallel-test-execution/).
This document is about providing __finer-grained control__ over parallel execution __via in-assembly parallel execution of tests__ – it enables running tests within an assembly in parallel.
## Requirements
1. __Easy onboarding__ - it should be possible to enable parallel execution for existing MSTest V2 code. For example, there might be tens or hundreds of test projects participating in a test run - insisting that all of them make changes to their source code to enable parallelism is a barrier to onboarding the feature.
2. __Fine grained control__ - there might still be certain assemblies, or test classes or test methods within the assembly, that might not be ready for execution in parallel. It should be possible for such artifacts to opt out of parallel execution. Conversely, there might be only a few assemblies that want to opt in to parallel execution - that should also be possible.
3. __Override__ - Parallel execution will have an impact on data collectors. Since test execution will be in parallel, the start/end events marking the execution of a particular test might get interleaved with those of any other test that might be executing in parallel. Therefore it should be possible for a feature that requires data collection to override and turn OFF all parallel execution. An example of a feature that might want to do this would be TIA (Test Impact Analysis).
4. __Test lifecycle semantics__ - we will need to clarify the semantics of the various xxxInitialize/xxxCleanup methods.
## Approach
The simplest way to enable in-assembly parallel execution is to enable it globally for all MSTest V2 test assemblies using a .runsettings file as follows:
```xml
<RunSettings>
<!-- MSTest adapter -->
@ -25,9 +35,11 @@ The simplest way to enable in-assembly parallel execution is to enable it global
</MSTest>
</RunSettings>
```
From the CLI these values can be provided using the "--" syntax.
This is as if every assembly were annotated with the following:
```csharp
[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.ClassLevel)]
```
@ -35,23 +47,28 @@ This is as if every assembly were annotated with the following:
Parallel execution will be realized by spawning the appropriate number of worker threads (4), and handing them tests at the specified scope.
There will be 3 scopes of parallelization supported:
- ClassLevel - each thread of execution will be handed a TestClass worth of tests to execute. Within the TestClass, the test methods will execute serially. This will be the default - tests within a class might have interdependency, and we don't want to be too aggressive.
- MethodLevel - each thread of execution will be handed TestMethods to execute.
- Custom - the user will provide plugins implementing the required execution semantics. This will be covered in a separate RFC.
The value for the number of worker threads to spawn to execute tests can be set using a single assembly level attribute that will take a parameter whose values can be as follows:
- 0 - Auto configure; use as many worker threads as possible based on CPU and core count.
- n - The number n of threads to spawn to execute tests.
An assembly/Class/Method can explicitly opt-out of parallelization using an attribute that will indicate that it may not be run in parallel with any other tests. The attribute does not take any arguments, and may be added at the method, class, or assembly level.
```csharp
[DoNotParallelize]
```
When used at the assembly level, all tests within the assembly will be executed serially.
When used at the Class level, all tests within the class will be executed serially after the parallel execution of all other tests is completed.
When used at the Method level, the test method will be executed serially after the parallel execution of all other tests is completed.
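As an illustration of how these attributes compose, here is a hedged sketch; the class and method names are made up.
```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Opt the assembly in to method-level parallel execution with 4 workers.
[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.MethodLevel)]

[TestClass]
[DoNotParallelize] // These tests share static state, so they run serially after the parallel batch.
public class LegacySharedStateTests
{
    [TestMethod]
    public void UsesSharedStaticState()
    {
        // ...
    }
}
```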
Finally, just as in-assembly parallel execution can be enabled globally via the .runsettings file, it can also be disabled globally as follows:
```xml
<RunSettings>
<!-- Configurations that affect the Test Framework -->
@ -62,12 +79,15 @@ Finally, just as in-assembly parallel execution can be enabled globally via the
```
Test lifecycle method semantics
- AssemblyInitialize/Cleanup shall be run only once per assembly (irrespective of parallel or not).
- ClassInitialize/Cleanup shall be run only once per class (irrespective of parallel or not).
- TestInitialize/Cleanup shall be run only once per method.
## Conditioning in-assembly parallel execution - composition rules
In-assembly parallel execution can be conditioned using the following means:
1. as annotations in source code (as described in this document).
2. as configuration properties set via a .runsettings file [[see here for more]](https://github.com/Microsoft/vstest-docs/blob/main/docs/configure.md).
3. by passing runsettings arguments via the command line [[see here for more]](https://github.com/Microsoft/vstest-docs/blob/main/docs/RunSettingsArguments.md).
@ -75,7 +95,9 @@ In-assembly parallel execution can be conditioned using the following means:
(3) overrides (2) which in turn overrides (1). The ```[DoNotParallelize]``` annotation may be applied only to source code, and hence remains unaffected by these rules - thus, even if in-assembly parallel execution is conditioned via (2) or (3), specific program elements can still opt out safely.
### Example
Consider an assembly UTA1.dll that has 2 test classes TC1 and TC2 as follows:
```csharp
[assembly: Parallelize(Workers = 3, Scope = ExecutionScope.ClassLevel)]
@ -96,6 +118,7 @@ public class TC2
```
Furthermore, consider the following test.runsettings file:
```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
@ -108,14 +131,21 @@ Furthermore, consider the following test.runsettings file:
</MSTest>
</RunSettings>
```
Here is the effective conditioning for the following sample invocations:
1. ```vstest.console.exe uta1.dll```: Workers = 3, Scope = ExecutionScope.ClassLevel. TC2 is opted out.
2. ```vstest.console.exe uta1.dll /settings:test.runsettings```: Workers = 4, Scope = ExecutionScope.ClassLevel. TC2 is opted out.
3. ```vstest.console.exe uta1.dll /settings:test.runsettings -- MSTest.Parallelize.Workers=4 MSTest.Parallelize.Scope=MethodLevel```: Workers = 4, Scope = ExecutionScope.MethodLevel. TC2 is opted out.
4. ```vstest.console.exe uta1.dll -- RunConfiguration.DisableParallelization=true```: globally disables in-assembly parallel execution.
## Notes
1. It will be up to the user to ensure that the tests are parallel-ready before enabling parallel test execution.
2. Features that rely on data collectors will need to globally turn OFF parallel execution. They can do so either by crafting a .runsettings file as shown above, or by passing the "--" syntax from the CLI. For example, the VSTest task with Test Impact Analysis ON will need to do this when invoking the vstest runner.
3. Diagnosing test failures during parallel execution will require appropriately formatted logging. The adapter should take care to straighten out the logs and emit them appropriately formatted.
4. Execution of data driven tests will not be parallelized - i.e. parallelizing over DataRow attributes is not supported.
## Unresolved questions
None.

View file

@ -1,21 +1,31 @@
# RFC 005 - Framework Extensibility for Custom Test Data Source
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This details the MSTest V2 framework extensibility for specifying custom data source for data driven tests.
## Motivation
Often times, custom data sources are required for data driven tests. Users should be able to leverage test framework extensibility to provide custom data sources for test execution.
## Detailed Design
### Requirements
1. A custom data source can be used by multiple test cases.
2. A test case can have multiple data sources.
### Proposed solution
Here is a solution for using custom data source in data driven tests.
The test framework should define an interface class `ITestDataSource` which can be extended to get data from custom data source.
```csharp
public interface ITestDataSource
{
@ -32,6 +42,7 @@ public interface ITestDataSource
```
Here is how the test methods are decorated with concrete implementation of `ITestDataSource`:
```csharp
public class CustomTestDataSourceAttribute : Attribute, ITestDataSource
{
@ -66,10 +77,12 @@ public void TestMethod1(int a, int b, int c)
Assert.AreEqual(0, c % 3);
}
```
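The attribute definition above is truncated by the diff; a minimal self-contained sketch that matches the data rows used in this example could look like the following. The attribute name mirrors the one in the RFC; the data values are the ones used in the display-name example below.
```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class CustomTestDataSourceAttribute : Attribute, ITestDataSource
{
    public IEnumerable<object[]> GetData(MethodInfo methodInfo)
    {
        // Each object[] becomes one invocation of the decorated test method.
        return new[] { new object[] { 1, 2, 3 }, new object[] { 4, 5, 6 } };
    }

    public string GetDisplayName(MethodInfo methodInfo, object[] data)
    {
        return string.Format("{0} ({1})", methodInfo.Name, string.Join(",", data));
    }
}
```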
In a similar way, multiple test methods can be decorated with the same data source.
A test method can also be decorated with multiple data sources.
Users can customize the display name of tests in test results by overriding `GetDisplayName()` method.
```csharp
public override string GetDisplayName(MethodInfo methodInfo, object[] data)
{
@ -78,18 +91,26 @@ public override string GetDisplayName(MethodInfo methodInfo, object[] data)
```
The display name of tests in the above example would appear as:
```shell
MyFavMSTestV2Test (1,2,3)
MyFavMSTestV2Test (4,5,6)
```
### Discovery of `ITestDataSource` attributes
The MSTest v2 framework, on discovering a `TestMethod`, probes additional attributes. On finding attributes inheriting from `ITestDataSource`, the framework invokes `GetData()` to fetch test data and iteratively invokes the test method with the test data as arguments.
### Benefits of using `ITestDataSource`
1. Users can extend `ITestDataSource` to support custom data sources.
2. Multiple tests can reuse the test data defined in the same data source.
3. A test case can use multiple test data sources.
### Remarks
When implementing a custom `ITestDataSource` (attribute), the `GetData()` method should not return an empty sequence, otherwise the test(s) using this data source attribute will always fail.
## Unresolved questions
None.

View file

@ -1,20 +1,30 @@
# RFC 006 - DynamicData Attribute for Data Driven Tests
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This details the MSTest V2 framework attribute "DynamicData" for data driven tests where test data can be declared as properties or in methods and can be shared across more than one test case.
## Motivation
Often times, data driven tests use shared test data that can be declared as properties or in methods. Users can use `DataRow` for declaring inline data, but it can't be shared. The test framework should provide a feature so that test data can be declared as a property or in a method and can be easily used by multiple tests.
## Detailed Design
### Requirements
1. Test data can be declared as properties or in methods and can be reused by multiple test cases.
### Proposed solution
Here is a solution that meets the above requirements:
A static property or a static method having test data should be declared as below:
```csharp
[TestClass]
public class UnitTests
@ -69,6 +79,7 @@ In case, the property or method exists in a class other than the test class, an
[DynamicData("ReusableTestDataMethod", typeof(UnitTests), DynamicDataSourceType.Method)]
```
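Since the class body above is elided, here is a minimal sketch of the shape being described; the member, test, and data values are illustrative.
```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UnitTests
{
    // Shared test data declared as a static property.
    public static IEnumerable<object[]> ReusableTestDataProperty
    {
        get
        {
            yield return new object[] { 1, 1, 2 };
            yield return new object[] { 2, 2, 4 };
        }
    }

    [TestMethod]
    [DynamicData("ReusableTestDataProperty")]
    public void AddsNumbers(int a, int b, int expected)
    {
        Assert.AreEqual(expected, a + b);
    }
}
```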
Please note that the enum `DynamicDataSourceType` is used to specify whether the test data source is a property or a method.
The data source is considered a property by default.
@ -84,12 +95,17 @@ public static string GetCustomDynamicDataDisplayName(MethodInfo methodInfo, obje
[DynamicData("ReusableTestDataProperty", DynamicDataDisplayName = "GetCustomDynamicDataDisplayName")]
```
`DynamicDataDisplayNameDeclaringType` should be used in cases where the dynamic data display name method exists in a class other than the test class
```csharp
[DynamicData("ReusableTestDataMethod", DynamicDataDisplayName = "GetCustomDynamicDataDisplayName", DynamicDataDisplayNameDeclaringType = typeof(UnitTests))]
```
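For completeness, a minimal sketch of such a display-name method, matching the signature shown in the hunk above; the helper class name and the formatting are illustrative.
```csharp
using System.Reflection;

public static class DisplayNameHelpers
{
    // The hook must be public, static, accept (MethodInfo, object[]) and return string.
    public static string GetCustomDynamicDataDisplayName(MethodInfo methodInfo, object[] data)
    {
        return string.Format("{0} with {1}", methodInfo.Name, string.Join(", ", data));
    }
}
```
When the method lives in a helper class like this, `DynamicDataDisplayNameDeclaringType = typeof(DisplayNameHelpers)` points the attribute at it, as described above.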
### Benefits of using DynamicData attribute
1. More than one test can use the same test data, if required.
2. Changes in the shared test data can be scoped to a single place.
## Unresolved questions
None.

View file

@ -1,9 +1,16 @@
# RFC 007 - DataSource Attribute Vs ITestDataSource
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This details the MSTest V2 framework attribute "DataSource" for data driven tests where test data can be present in an excel file, xml file, sql database or OleDb. You can refer documentation [here](https://docs.microsoft.com/dotnet/api/microsoft.visualstudio.testtools.unittesting.datasourceattribute) for more details.
## Motivation
At present, there are two codeflows for data-driven tests, one for DataSource Attribute and another for DataRow & DynamicData Attributes. This aims to have one common codeflow for handling data-driven tests.
Also, currently DataSource Attribute does not follow Test Framework's custom data source extensibility (i.e. `ITestDataSource`) and we want to modify DataSource Attribute implementation so that it follows framework's data source extensibility model. Presently, DataSource Attribute consumes test data via TestContext object whereas `ITestDataSource` consumes test data via Testmethod parameters. We will not be changing how data is consumed in DataSource Attribute, purely because of back compatibility reasons. So, DataSource Attribute will not exactly be extended from Test Framework's `ITestDataSource` but this is an attempt to bring the DataSource Attribute implementation closer to how it would have looked if it would have extended from Test Framework's data source extensibility.
@ -11,71 +18,81 @@ Also, currently DataSource Attribute does not follow Test Framework's custom dat
## Detailed Design
### Requirements
1. DataSource Attribute and ITestDataSource should have a common code flow.
2. DataSource Attribute should provide the data for that invocation in the TestContext object.
3. Design should be extensible to support in-assembly parallelization on a data source.
### Proposed solution
The test adapter should define an interface class `ITestDataSource` (along the lines of the framework's ITestDataSource interface) which will be extended to get data from a data source.
```csharp
namespace Microsoft.VisualStudio.TestPlatform.MSTestAdapter.PlatformServices.Interface
{
    /// <summary>
    /// Interface that provides values from data source when data driven tests are run.
    /// </summary>
    public interface ITestDataSource
    {
        /// <summary>
        /// Gets the test data from custom test data source and sets dbconnection in testContext object.
        /// </summary>
        /// <param name="testMethodInfo">
        /// The info of test method.
        /// </param>
        /// <param name="testContext">
        /// Test Context object
        /// </param>
        /// <returns>
        /// Test data for calling test method.
        /// </returns>
        IEnumerable<object> GetData(UTF.ITestMethod testMethodInfo, ITestContext testContext);
    }
}
```
There is no change in how DataSource Attribute will be consumed. Test methods can be decorated as they were decorated earlier like this:
```csharp
[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML", "MyFile.xml", "MyTable", DataAccessMethod.Sequential)]
public void MyTestMethod()
{
    var v = testContext.DataRow[0];
    Assert.AreEqual(v, "3");
}
```
The display name of tests in the above example would appear as they used to:
```shell
MyTestMethod (Data Row 0)
MyTestMethod (Data Row 1)
```
### Behaviour Changes in DataSource Attributes
Presently, the Test Framework's `Execute()` is called once for a data-driven TestMethod, which in turn takes care of running the test for all DataRows. This will be changed to calling the Test Framework's `Execute()` for each DataRow, i.e. the logic of executing data-driven tests will be moved out of the framework into the adapter.
### Differences between DataSource Attribute and ITestDataSource
| DataSource           | ITestDataSource                                       |
|---------------------------------------------------|--------------------------------------------------------|
| Test authors consume data via TestContext | Test authors consume data via TestMethod parameters |
| TestMethod is not required to have parameters | TestMethod is required to have parameters |
Note:
Test authors should not expect data to be set in TestContext for attributes inheriting from `ITestDataSource`. Going forward, data should only be consumed from TestMethod parameters for data-driven tests.
### Support Scenarios
The following scenarios will not be supported in the case of DataSource Attributes:
- Multiple DataSource Attributes on a TestMethod will not be supported.
- DataSource Attribute and DataRow Attribute should not be given together for a TestMethod. If both are given, DataSource Attribute will take precedence and will be used as the data source for that test, provided that the TestMethod doesn't take any parameters.
- DataSource Attribute will not be open for extensibility.
## Unresolved questions
None.

View file

@ -1,12 +1,20 @@
# RFC 008 - Test case timeout via runsettings
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Motivation
Users should be able to configure a global test case timeout for all the test cases that are part of the run.
### Proposed solution
Make test case timeout configurable via TestTimeout tag which is part of the adapter node in the runsettings.
Here is a sample runsettings:
```xml
<Runsettings> 
<MSTestV2> 
@ -15,7 +23,12 @@ Here is a sample runsettings: 
</Runsettings> 
```
### Honoring the settings
- If no settings are provided in runsettings, the default timeout is set to 0.
- Timeout specified via the Timeout attribute on a TestMethod takes precedence over the global timeout specified via runsettings (see the sketch below).
- For all the test methods that do not have the Timeout attribute, the timeout will be based on the timeout specified via runsettings.
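For illustration, here is how a per-method Timeout would interact with the global setting; the class name, method names, and the 5000 ms value are only examples.
```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TimeoutTests
{
    [TestMethod]
    [Timeout(5000)] // 5 seconds for this test, overriding the global TestTimeout from runsettings.
    public void SlowOperationCompletesInTime()
    {
        // ...
    }

    [TestMethod] // No Timeout attribute: the global TestTimeout from runsettings applies.
    public void RegularTest()
    {
        // ...
    }
}
```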
## Unresolved questions
None.

View file

@ -1,29 +1,43 @@
# RFC 009 - Deployment Item Attribute for Net Core
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Summary
This details the MSTest V2 framework attribute `DeploymentItem` for copying files or folders specified as deployment items to the deployment directory. The deployment directory is where all the deployment items are present along with the test source DLL.
## Motivation
Many a time, a test author takes a dependency on certain files (like .dll, .xml, .txt, etc.) and requires those files to be present in the test run location at the time of test execution. Instead of manually copying the files, they can leverage the 'DeploymentItem' attribute provided by the MSTest adapter to deploy those files to the test run location/deployment directory.
## Constructors
### public DeploymentItemAttribute (string path)
#### Parameters
&nbsp;&nbsp;&nbsp;&nbsp; `path` <br/>
&nbsp;&nbsp;&nbsp;&nbsp; The file or directory to deploy. The path is either absolute or relative to build output directory.
### public DeploymentItemAttribute (string path, string outputDirectory)
#### Parameters
&nbsp;&nbsp;&nbsp;&nbsp; `path` <br/>
&nbsp;&nbsp;&nbsp;&nbsp; The file or directory to deploy. The path is relative to the deployment directory. <br/>
&nbsp;&nbsp;&nbsp;&nbsp; `outputDirectory` <br/>
&nbsp;&nbsp;&nbsp;&nbsp; The path of the directory inside the deployment directory to which the items are to be copied. All files and directories identified by `path` will be copied to this directory.
## Features
1. It can be used either on TestClass or on TestMethod.
2. Users can have multiple instances of the attribute to specify more than one item.
## Example
```csharp
[TestClass]
[DeploymentItem(@"C:\classLevelDepItem.xml")] //absolute path
@ -40,8 +54,10 @@ Many a times, a test author takes dependency on certain files(like .dll, .xml, .
```
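The example above is truncated by the diff; a self-contained sketch using both constructor overloads might look like this. The class-level item comes from the snippet above, while "testdata.xml", "ConfigFiles", "Configs", and the class and method names are made up.
```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem(@"C:\classLevelDepItem.xml")] // absolute path, class level
public class DeploymentTests
{
    [TestMethod]
    [DeploymentItem("testdata.xml")]           // relative to the build output directory
    [DeploymentItem("ConfigFiles", "Configs")] // directory copied under "Configs" in the deployment directory
    public void ReadsDeployedFiles()
    {
        // Assumes the test runs from the deployment directory.
        Assert.IsTrue(File.Exists("testdata.xml"));
        Assert.IsTrue(Directory.Exists("Configs"));
    }
}
```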
## Behavior Changes wrt FullFramework
1. In FullFramework, tests run from a newly created deployment directory ProjectRoot\TestResults\Deploy_*username* *timestamp*\Out. In NetCore, tests run from build output directory.
2. Dependencies of DeploymentItem are by default deployed in FullFramework. Users can override this by specifying `DeployTestSourceDependencies` as false in RunSettings. Since dependencies of deployment items are not deployed in NetCore, `DeployTestSourceDependencies` setting will not be honored.
## Limitations
1. Error messages are currently supported only in English. This will be fixed as part of https://github.com/Microsoft/testfx/issues/591.
## Unresolved questions
None.

View file

@ -1,13 +1,21 @@
# RFC 010 - Map not runnable tests to failed via runsettings
- [x] Approved in principle
- [x] Under discussion
- [x] Implementation
- [x] Shipped
## Motivation
Some tests which have an incompatible signature and cannot be executed are skipped, with warnings being thrown.
These tests will now be marked as failed since they are not even being executed. Users should be able to configure the run so that a test is not failed when it is not runnable, in order to maintain backward compatibility.
### Proposed solution
Make this setting configurable via MapNotRunnableToFailed tag which is part of the adapter node in the runsettings.
Here is a sample runsettings:
```xml
<Runsettings> 
<MSTestV2> 
@ -16,7 +24,12 @@ Here is a sample runsettings: 
</Runsettings> 
```
### Honoring the settings
- If no settings are provided in runsettings, default MapNotRunnableToFailed is set to true.
This default has been kept so that tests which cannot be executed are failed, avoiding silent failures.
- The setting can be overridden by specifying `<MapNotRunnableToFailed>false</MapNotRunnableToFailed>` in the adapter settings.
## Unresolved questions
None.