* Refactor plugin creation to separate framework models

* Add building-a-table docs

* Link to docs overview from README

* Fix header in Creating-a-table

* Remove unused file

* Fix typo

* Update simple next steps

* Consistent periods in list

* Compatability typos fix

* Update known-driver-compat overview

* Address comments

* Exposes typo
This commit is contained in:
Luke Bordonaro 2022-02-01 16:06:56 -08:00 committed by GitHub
Parent 94f415a556
Commit 3ed9796244
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
17 changed files: 900 additions and 788 deletions


@ -17,9 +17,9 @@ tabular data from arbitrary data sources such as Common Trace Format (`.ctf`) fi
feature-rich data-processing pipeline
These two functionalities are not mutually exclusive, and plugins may access data in another plugin's (or, commonly, its own)
data-processing pipeline when creating tables for a given data source.
For help with getting started and developing SDK plugins, refer to our [documentation](./documentation/Overview.md).
## In this Repository
* `devel-template`: a work-in-progress .NET template for creating SDK plugins


@ -2,7 +2,7 @@
This document outlines the architecture of the Microsoft Performance Toolkit SDK.
For more detailed information on how to create your own project using the SDK, please view [Creating an SDK Plugin](../Using-the-SDK/Creating-your-plugin.md).
## High-Level Interfaces
@ -76,9 +76,9 @@ that work together to
A plugin is an **abstract collection of these objects** which can be bundled together for distribution and loaded as a single
unit by the SDK driver. *A plugin can be, and often is, made up of multiple assemblies*.
> :warning: Note that while a single assembly *can* define more than one `ProcessingSource`, __it is highly recommended that an assembly only contains
> a single `ProcessingSource`.__ Tables, data cookers, and custom data processors are almost always associated with a single `ProcessingSource`.
> It is best therefore to package __only one__ `ProcessingSource` and all of its associated classes in a single binary.
The diagram below demonstrates how these objects work together to achieve this high-level interface:
@ -98,9 +98,9 @@ The diagram below demonstrates how these objects work together to acheive this h
For implementation details on how to create a simple plugin containing one `ProcessingSource`, its associated `CustomDataProcessor` and a
`Table`, please view [Using the SDK/Creating A Simple SDK Plugin](../Using-the-SDK/Creating-a-simple-sdk-plugin.md).
For implementation details on how to create a plugin containing `DataCooker`s, please view [Using the SDK/Creating a Data-Processing Pipeline](../Using-the-SDK/Creating-a-pipeline.md)
# Next Steps
Now that we understand the high-level overview of the SDK architecture, the next step is to better understand how the data processing
pipeline works. Continue reading at [Architecture/The Data-Processing Pipeline](./The-Data-Processing-Pipeline.md).


@ -1,4 +1,4 @@
# The Data-Processing Pipeline
Within a plugin, a `ProcessingSource` delegates the task of processing data sources to a `CustomDataProcessor`.
In order to minimize the overhead of accessing persistent data storage, a well-designed plugin aims to have its
@ -10,11 +10,11 @@ if the SDK provided a way to
4) Allow external binaries, such as other plugins, access to the output of these transformations (as in *extend* the plugin defining the transformation)
The SDK allows a plugin to achieve these goals, while minimizing the number of times data sources need to be directly accessed,
by facilitating and allowing the creation of a __data-processing pipeline__.
# Pipeline Components
At the highest level, when a plugin wishes to create a data-processing pipeline, it will define two types of
components: __source parsers__ and __data cookers__.
## Source Parsers
@ -106,7 +106,7 @@ the `CompositeDataCooker` can query the `DataOutput`s it needs to perform *its*
# Putting Everything Together
By combining `SourceParser`s, `SourceDataCooker`s, and `CompositeDataCooker`s, a plugin can create arbitrarily
complex and extensible data-processing pipelines. For example, here is a pipeline a plugin may create to
modularize and promote the extensibility of three tables:
<img src=".attachments/complex_pipeline.svg" width="100%">
@ -128,5 +128,5 @@ and do whatever it wishes with it.
# Next Steps
Now that we understand at a high level how a data-processing pipeline works, we can now begin creating our own
SDK plugins. To get started, view [Using the SDK/Creating an SDK Plugin](../Using-the-SDK/Creating-your-plugin.md).


@ -1,128 +0,0 @@
# Components
This file outlines and provides a brief description of the various components that make up the SDK.
Any types defined in the SDK that you can use to develop an SDK plugin will be highlighted in `code formatting`.
Any references to other high-level components (but not necessarily types) will be highlighted in **bold**.
It is recommended to read the [Architecture](./Architecture) documentation before reading this file to
understand how different components work together at a higher level.
----
## Processing Sources
Processing sources are the entry points the SDK runtime uses to interact with your plugin when a user wants to
open a file/data source for analysis. They provide the SDK with
1) Information on what types of data sources your plugin supports
2) A way to obtain a **Custom Data Processor** that implements logic for processing the data sources your plugin supports
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Custom Data Processors
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Tables
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Source Parsers
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Data Cookers
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Data Outputs
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Data Cooker Paths
<details>
<summary>Click to Expand</summary>
Under Construction
</details>
----
## Extended Tables
Extended tables are similar to regular **Table**s in function, but differ in the way they get data. Regular tables
are "hooked up" to processors by the SDK and are expected to be built by the **Custom Data Processor**'s `BuildTableCore` method
when their `TableDescriptor`s are passed in as a parameter.
Extended tables make use of the SDK's *Data Processing Pipeline* to programmatically gain access to data exposed by
**Data Cooker**s. This means
1) **Extended Table**s do not get passed into any **Custom Data Processor**'s `BuildTableCore` method
2) **Extended Table**s rely on the SDK to know when their required data is available from all of their required **Data Cooker**s
3) **Extended Table**s are responsible for "building themselves" (populating an `ITableBuilder`)
To accomplish this third task, **Extended Table**s must implement a method with the following signature:
```C#
public static void BuildTable(ITableBuilder tableBuilder, IDataExtensionRetrieval tableData)
```
The SDK will invoke this method when all of the **Extended Table**'s required data is available from all of its required **Data Cooker**s. Data
can be gathered from required **Data Cooker**s' **Data Output**s through the `IDataExtensionRetrieval` passed in to this method.
To designate a table as an **Extended Table**, pass in the `requiredDataCookers` parameter to the table's `TableDescriptor` static property's constructor.
This parameter must enumerate all of the `DataCookerPath`s for the **Data Cooker**s the table depends upon.
Because **Data Cooker**s allow programmatic access to their **Data Output**s across plugin boundaries, an **Extended Table** can require data from
a data cooker from another plugin - even one which it does not have the source code for. If a developer knows the `string`s which make up a
**Data Cooker**'s path, they can create a new `DataCookerPath` that instructs the SDK to pipe its data to tables defined in their plugin.


@ -1,6 +1,8 @@
# Abstract
This folder contains, for known SDK drivers such as Windows Performance Analyzer, a mapping between SDK versions and all versions of the driver that can load plugins compiled against that SDK version.
When creating an SDK plugin, refer to the compatibility lists for the programs that will load it. For example, when creating a plugin that will only be used with Windows Performance Analyzer, reference [its compatibility list](./WPA.md).
# Known Drivers
The current compatibility lists being maintained are:


@ -21,18 +21,12 @@ The SDK is published as a NuGet package under the name [Microsoft.Performance.SD
Since it is hosted on [NuGet.org](https://www.nuget.org/), it can be added to a `csproj` with no additional configuration by using
the Visual Studio NuGet Package Manager, `dotnet.exe`, or `nuget.exe`.
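For example, the package can be added from the command line (run from the directory containing your plugin's `.csproj`):

```shell
# Adds the latest Microsoft.Performance.SDK package to the project in the current directory
dotnet add package Microsoft.Performance.SDK
```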
# Creating A Plugin
Refer to [Creating your first plugin](Using-the-SDK/Creating-your-plugin.md)
# Recommended Reading Order
To best understand how the SDK works and how to develop SDK plugins, it is recommended to read documentation in the following order:
1) [Architecture/Overview](./Architecture/Overview.md) to understand at a high level the various systems the SDK provides
2) [Architecture/The Data-Processing Pipeline](./Architecture/The-Data-Processing-Pipeline.md) to understand how to systematically process data that
can be used by tables
3) [Creating your first plugin](Using-the-SDK/Creating-your-plugin.md) to learn how to create an SDK plugin


@ -0,0 +1,19 @@
# Troubleshooting
This document outlines steps for troubleshooting common issues that arise when developing SDK plugins.
* [WPA-Related Issues](#wpa-related-issues)
* [WPA does not load my plugin](#wpa-does-not-load-my-plugin)
## WPA-Related Issues
### WPA does not load my plugin
There are many reasons this could happen. Here are some things to check:
* Check WPA's diagnostic console to see if there were any errors when loading your plugin. From within WPA, select `Window` -> `Diagnostic Console`
* Ensure Visual Studio is correctly set up to load your plugin when debugging. Follow [these steps](./Using-the-SDK/Creating-your-plugin.md#setup-for-debugging-using-wpa)
* Ensure your plugin builds successfully. In Visual Studio, select `Build` -> `Rebuild Solution`. Or, from a command prompt in your plugin's folder, run `dotnet build`
* Ensure the `-addsearchdir` path in your debug profile created during [these steps](./Using-the-SDK/Creating-your-plugin.md#setup-for-debugging-using-wpa) is correct. Manually navigate to the folder used for `-addsearchdir` and verify your DLLs are there
* If your `-addsearchdir` path contains spaces, ensure the path is surrounded by quotes (`"`)
* Ensure the SDK version your plugin uses is [compatible with your version of WPA](./Known-SDK-Driver-Compatibility/WPA.md). To find your WPA version, select `Help` -> `About Windows Performance Analyzer`


@ -0,0 +1,3 @@
# Custom Table Discovery
**Coming soon**


@ -0,0 +1,112 @@
# Building a Table
Table construction happens by interacting with an instance of an `ITableBuilder`. Broadly, `Tables` consist of three parts: `ColumnConfigurations`, `Projections`, and `TableConfigurations`; a column pairs a `ColumnConfiguration` with a `Projection`. An `ITableBuilder` has methods that accept all three of these objects.
* [Column](#column)
* [ColumnConfiguration](#columnconfiguration)
* [Projection](#projection)
* [Combining ColumnConfiguration and Projections](#combining-columnconfiguration-and-projections)
* [TableConfiguration](#tableconfiguration)
* [ColumnRole](#columnrole)
## Column
A column is a conceptual (`ColumnConfiguration`, `Projection`) pair that defines data inside a `Table`.
### ColumnConfiguration
A `ColumnConfiguration` contains metadata information about a column. For example, it contains the column's name and description, along with `UIHints` that help GUI viewers know how to render the column.
Typically, `ColumnConfigurations` are stored as `static` fields on the `Table` class for the `Table` being built. For example,
```cs
[Table]
public sealed class WordTable
{
    ...

    private static readonly ColumnConfiguration lineNumberColumn = new ColumnConfiguration(
        new ColumnMetadata(new Guid("75b5adfe-6eee-4b95-b530-94cc68789565"), "Line Number"),
        new UIHints
        {
            IsVisible = true,
            Width = 100,
        });

    private static readonly ColumnConfiguration wordCountColumn = new ColumnConfiguration(
        new ColumnMetadata(new Guid("d1c800e5-2d19-4474-8dad-0ebc7caff3ab"), "Number of Words"),
        new UIHints
        {
            IsVisible = true,
            Width = 100,
        });
}
```
### Projection
A column's `Projection` is a function that maps a row index for a column to a piece of data. Projections are normally constructed at runtime by a `Build` method, since they depend on the data inside the final table. The SDK offers many helper methods for constructing `Projections`, such as
* `Projection.Index(IReadOnlyList<T> data)`, which projects a column index `i` to `data[i]`
* `Projection.Compose(this IProjection<T1, T2>, Func<T2, TResult>)` which composes one `IProjection` with another method
For example, suppose we have a collection of `LineItem` objects that each have two properties: `LineNumber` and `Words`. We can use the above helper methods to create the following projections:
```cs
var baseProjection = Projection.Index(myLineItemCollection);
var lineNumberProjection = baseProjection.Compose(lineItem => lineItem.LineNumber);
var wordCountProjection = baseProjection.Compose(lineItem => lineItem.Words.Count());
```
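Conceptually, the projection helpers above are ordinary function composition over a row index. The following plain-C# sketch (hypothetical `Index`/`Compose` helpers, not the SDK's actual implementation) shows the idea:

```csharp
using System;
using System.Collections.Generic;

// A projection maps a row index to a value; Compose transforms its output.
static class ProjectionSketch
{
    // Analogous to Projection.Index(data): row i maps to data[i].
    public static Func<int, T> Index<T>(IReadOnlyList<T> data) => i => data[i];

    // Analogous to composing an IProjection: apply a transform to a projection's result.
    public static Func<int, TResult> Compose<T, TResult>(this Func<int, T> projection, Func<T, TResult> map)
        => i => map(projection(i));
}
```

For example, `ProjectionSketch.Index(lines).Compose(l => l.Split(' ').Length)` maps row `i` to the word count of `lines[i]`.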
## Combining ColumnConfiguration and Projections
If we have the `ColumnConfigurations` and `Projections` above, we can add them to the table we're building by calling `ITableBuilder.AddColumn`:
```cs
tableBuilder.AddColumn(lineNumberColumn, lineNumberProjection);
tableBuilder.AddColumn(wordCountColumn, wordCountProjection);
```
Note that _every_ column a table provides must be added through a call to `ITableBuilder.AddColumn`, even if they're not used in a `TableConfiguration` (see below).
## TableConfiguration
Some tables may have _many_ columns available. In these situations, it is not useful for a user to be shown every single column at once. A `TableConfiguration` describes groupings of columns that should be used together, along with metadata information such as `ColumnRoles`. Every `Table` must provide at least one `TableConfiguration`.
For example, here is a `TableConfiguration` that contains both of the columns above:
```cs
var tableConfig = new TableConfiguration("All Data")
{
    Columns = new[]
    {
        lineNumberColumn,
        wordCountColumn,
    },
};
```
You also specify [Special Columns](../Glossary.md#special-columns) in a `TableConfiguration`.
Once a `TableConfiguration` is created, it is added to the table we're building by calling `ITableBuilder.AddTableConfiguration`:
```cs
tableBuilder.AddTableConfiguration(tableConfig);
```
It is also recommended to set a default `TableConfiguration`:
```cs
tableBuilder.SetDefaultTableConfiguration(tableConfig);
```
### ColumnRole
A `ColumnRole` is metadata information about a column that defines the column's role in data presentation. For example, a column that contains a `Timestamp` whose value indicates when an event occurred relative to the start of a `DataSource` may be marked as a "start time" column:
```cs
tableConfig.AddColumnRole(ColumnRole.StartTime, relativeTimestampColumnConfiguration);
```
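Putting the pieces in this document together, a table's build method might look like the following sketch. The `BuildTable` signature and the `GetLineItems` helper are assumptions for illustration; the `AddColumn`, `AddTableConfiguration`, and `SetDefaultTableConfiguration` calls are the ones described above.

```cs
// Sketch only: combines the snippets above into one build method.
// The signature and the source of lineItems are assumptions;
// adapt them to however your table receives its data.
public static void BuildTable(ITableBuilder tableBuilder, IDataExtensionRetrieval tableData)
{
    // Hypothetical: obtain the rows this table displays.
    IReadOnlyList<LineItem> lineItems = GetLineItems(tableData);

    var baseProjection = Projection.Index(lineItems);

    var tableConfig = new TableConfiguration("All Data")
    {
        Columns = new[] { lineNumberColumn, wordCountColumn, },
    };

    tableBuilder.AddTableConfiguration(tableConfig);
    tableBuilder.SetDefaultTableConfiguration(tableConfig);

    tableBuilder.AddColumn(lineNumberColumn, baseProjection.Compose(l => l.LineNumber));
    tableBuilder.AddColumn(wordCountColumn, baseProjection.Compose(l => l.Words.Count()));
}
```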
# More Information
There are many other things you can do with your `Tables`, such as adding `TableCommands` or configuring its default layout style. For more information, refer to the [Advanced Topics](./Advanced/Overview.md).


@ -1,49 +1,24 @@
# Creating a Data-Processing Pipeline

## Overview

This document assumes you have already created a `ProcessingSource` and your plugin is using the data-processing pipeline plugin framework. For more details, refer to [Creating an SDK Plugin](./Creating-your-plugin.md).

> :exclamation: Before reading this document, please read [the overview of the Data-Processing Pipelines' architecture](../Architecture/The-Data-Processing-Pipeline.md). This tutorial assumes you are familiar with the concepts of `SourceParsers`, `SourceDataCookers`, `CompositeDataCookers`, and `DataOutputs`.

The data-processing pipeline plugin framework is centered around a `SourceParser` and one or more `DataCookers`. The `CustomDataProcessor` your `ProcessingSource` creates delegates the task of parsing `DataSources` off to a `SourceParser`, which in turn emits __events__ that flow through `DataCookers`. In this framework, `Tables` are responsible for "building themselves" by querying `DataCookers` for their `DataOutputs`.

A data-processing pipeline (DPP) is comprised of the following:
1) one __source parser__
2) one or more `DataCooker`s (cookers)

A cooker is either:
1) A `SourceCooker`
2) A `CompositeCooker`

A `SourceCooker` takes data directly from a `SourceParser` to produce `DataOutput`s, whereas a `CompositeCooker` takes data from other cookers to produce `DataOutput`s. A `CompositeCooker` may depend on both __source__ and __composite__ cookers. All cookers expose zero or more __data outputs__. A `DataOutput` exposes data that can be consumed by users of the DPP or other cookers in the DPP.

The following illustrates these concepts:

<img src=".attachments/dpp.svg" width="500">

For a real-world example of a pipeline, see the LTTng SDK plugin, found on GitHub [here](https://github.com/microsoft/Microsoft-Performance-Tools-Linux).

This document is outlined into 4 distinct steps:
* [Creating a SourceParser](#creating-a-sourceparser)
* [Creating a CustomDataProcessorWithSourceParser](#creating-a-customdataprocessorwithsourceparser)
* [Linking Your CustomDataProcessor to Your ProcessingSource](#linking-your-customdataprocessor-to-your-processingsource)
* [Creating a DataCooker](#creating-a-datacooker)

## Creating a SourceParser

To begin, we must create a `SourceParser`. A `SourceParser` parses data from a data source into data that can be
manipulated by your application. For example, a `SourceParser` may parse an ETW
`.etl` file into a stream of `Event` objects. Implementing a source parser also involves a `ProcessingSource` (the entry point for creating the `CustomDataProcessor`) and a `CustomDataProcessor` that the parser is handed to; both are covered in the steps below.
A source parser will inherit from `SourceParserBase`:
````cs
public abstract class SourceParserBase<T, TContext, TKey>
{
    ...
}
````
where `T` is the type of objects being parsed, `TContext` is an arbitrary type
where you can store metadata about the parsing, and `TKey` is how the data type `T` is keyed.

A source parser exposes an `Id` property that is used to identify it.
This property is used in __paths__ to the data exposed by the source parser.
The sections on Cookers will go into more detail about these paths.
The following snippet outlines these three components working together to implement
what is required:
````cs
[ProcessingSource(
    // Id here,
    // Name here,
    // Description here
)]
// Other attributes here
public sealed class SampleProcessingSource
    : ProcessingSource
{
    protected override ICustomDataProcessor CreateProcessorCore(
        IEnumerable<IDataSource> dataSources,
        IProcessorEnvironment processorEnvironment,
        ProcessorOptions options)
    {
        // The parser has a custom constructor to store
        // the data sources it will need to parse
        var parser = new SampleSourceParser(dataSources);

        return new SampleProcessor(
            parser,
            options,
            this.ApplicationEnvironment,
            processorEnvironment,
            this.AllTables,
            this.MetadataTables);
    }

    ...
}

public sealed class SampleProcessor
    : CustomDataProcessorBaseWithSourceParser<SampleDataObject, SampleContext, int>
{
    public SampleProcessor(
        ISourceParser<SampleDataObject, SampleContext, int> sourceParser,
        ProcessorOptions options,
        IApplicationEnvironment applicationEnvironment,
        IProcessorEnvironment processorEnvironment,
        IReadOnlyDictionary<TableDescriptor, Action<ITableBuilder, IDataExtensionRetrieval>> allTablesMapping,
        IEnumerable<TableDescriptor> metadataTables)
        : base(sourceParser, options, applicationEnvironment, processorEnvironment, allTablesMapping, metadataTables)
    {
    }
}

public sealed class SampleSourceParser
    : SourceParserBase<SampleDataObject, SampleContext, int>
{
    private DataSourceInfo dataSourceInfo;

    public SampleSourceParser(IEnumerable<IDataSource> dataSources)
    {
        ...
    }

    // The ID of this Parser.
    public override string Id => nameof(SampleSourceParser);

    // Information about the Data Sources being parsed.
    public override DataSourceInfo DataSourceInfo => this.dataSourceInfo;

    public override void ProcessSource(
        ISourceDataProcessor<SampleDataObject, SampleContext, int> dataProcessor,
        ILogger logger,
        IProgress<int> progress,
        CancellationToken cancellationToken)
    {
        // Enumerate your data sources, processing them into objects.
        // For each object you parse, be sure to call dataProcessor.ProcessDataElement,
        // for example:
        //     dataProcessor.ProcessDataElement(
        //         new SampleDataObject(),
        //         new SampleContext(),
        //         cancellationToken);
        //
        // Also be sure to set this.dataSourceInfo in this method
    }

    ...
}
````
## Cookers
__Cookers__ transform data from one type to another. A cooker will transform the
output from one or more sources, optionally producing new `DataOutput`s for other cookers
or end user applications to consume.
__Source cookers__ take data directly from a __source parser__ to produce `DataOutput`s.
__Composite cookers__ take data from one or more cookers (source or composite)
to create `DataOutput`s.
We use the term _cooked_ to denote data that has been transformed via a cooker.
Cooked data is exposed via a `DataOutput`. These outputs
may be consumed directly by the user, or by other cookers. A `DataOutput` must
implement the following interface:
````cs
IKeyedDataItem<T>
````
A `DataOutput` is uniquely identified by a `DataOutputPath`.
A `DataOutputPath` has the following format:
````cs
CookerPath/DataOutputPropertyName
````
where
- `CookerPath` is the path to the cooker exposing the data.
- `DataOutputPropertyName` is the name of the property exposing the `DataOutput`.
A `CookerPath` has the following format:
````cs
SourceParserId/CookerId
````
where
- `SourceParserId` is the ID of the `SourceParser`.
- `CookerId` is the ID of the cooker.
A `CompositeCooker` will have an empty (`""`) `SourceParserId`, as it is not
tied to any particular `SourceParser`; thus its path has the following
form:
````cs
/CookerId
````
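As a concrete illustration of these formats (plain string manipulation; the SDK's actual `DataCookerPath`/`DataOutputPath` types build these for you):

```csharp
using System;

// Hypothetical helper mirroring the path formats described above.
static class PathSketch
{
    // SourceParserId/CookerId (empty parser ID for composite cookers)
    public static string CookerPath(string sourceParserId, string cookerId)
        => $"{sourceParserId}/{cookerId}";

    // CookerPath/DataOutputPropertyName
    public static string DataOutputPath(string sourceParserId, string cookerId, string outputProperty)
        => $"{CookerPath(sourceParserId, cookerId)}/{outputProperty}";
}
```

So a source cooker's output lives at a path like `SampleSourceParser/SampleSourceCooker/Objects`, while a composite cooker's output lives at a path like `/SampleCompositeCooker/Output`.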
The following snippet shows simple cookers:
````cs
// A SourceCooker
public sealed class SampleSourceCooker
    : BaseSourceDataCooker<SampleDataObject, SampleContext, int>
{
    public static readonly DataCookerPath DataCookerPath = new DataCookerPath(
        nameof(SampleSourceParser),
        nameof(SampleSourceCooker));

    public SampleSourceCooker()
        : base(DataCookerPath)
    {
        this.Objects = new List<SampleDataObject>();
    }

    public override string Description => string.Empty;

    public override ReadOnlyHashSet<int> DataKeys => new ReadOnlyHashSet<int>(new HashSet<int>(new[] { 1, }));

    // Defines a DataOutput.
    // The path of this output is
    //   SampleSourceParser/SampleSourceCooker/Objects
    [DataOutput]
    public List<SampleDataObject> Objects { get; }

    public override DataProcessingResult CookDataElement(
        SampleDataObject data,
        SampleContext context,
        CancellationToken cancellationToken)
    {
        //
        // Process each data element. This method will be called once
        // for each SampleDataObject emitted by the SourceParser.
        //
        ...

        //
        // Return the status of processing the given data item.
        //
        return DataProcessingResult.Processed;
    }
}

// A CompositeCooker
public sealed class SampleCompositeCooker
    : CookedDataReflector,
      ICompositeDataCookerDescriptor
{
    public static readonly DataCookerPath DataCookerPath = new DataCookerPath(nameof(SampleCompositeCooker));

    public SampleCompositeCooker()
        : base(DataCookerPath)
    {
        this.Output = new List<SampleCompositeOutput>();
    }

    public string Description => "Composite Cooker";

    public DataCookerPath Path => DataCookerPath;

    // Defines a DataOutput.
    // The path of this output is
    //   /SampleCompositeCooker/Output
    [DataOutput]
    public List<SampleCompositeOutput> Output { get; }

    // Declare all of the cookers that are used by this CompositeCooker.
    public IReadOnlyCollection<DataCookerPath> RequiredDataCookers => new[]
    {
        // SampleSourceParser/SampleSourceCooker
        SampleSourceCooker.DataCookerPath,
    };

    public void OnDataAvailable(IDataExtensionRetrieval requiredData)
    {
        //
        // Query data as appropriate and populate the Output property.
        //
        ...

        //
        // There is no need to return a status, as Composite Cookers
        // run after all Source Cookers have run.
        //
    }
}
````
To get data from a `DataCooker`, the cooker must be __queried__ using an `IDataExtensionRetrieval`,
such as the one passed into `OnDataAvailable` in the above `CompositeCooker`. Since the
`CompositeCooker` depends on `SampleSourceCooker`, we can query its `Objects` property as follows:

```cs
public void OnDataAvailable(IDataExtensionRetrieval requiredData)
{
    var data =
        requiredData.QueryOutput<List<SampleDataObject>>(new DataOutputPath(SampleSourceCooker.DataCookerPath, "Objects"));

    //
    // Process this data and populate the Output property.
    //
}
```

Let's create a `LineItem` class that will get emitted by the `SourceParser` class we will create. Each `LineItem` is an object representing information about a line in one of the `mydata*.txt` files opened by the user. Since `LineItem` gets emitted, it needs to be an `IKeyedDataType`. We'll use the line number as the key.

```cs
public class LineItem : IKeyedDataType<int>
{
    private int lineNumber;
    ... // other fields

    public LineItem(int lineNumber, ...)
    {
        this.lineNumber = lineNumber;
        ...
    }

    ...

    public int GetKey()
    {
        return this.lineNumber;
    }
}
```
A source parser exposes an `Id` property that is used to identify it.
This property is used by `SourceDataCookers` to access the data emitted by the `SourceParser`.
Let's create a `SampleSourceParser` that emits `LineItem` events.
```cs
public sealed class SampleSourceParser
    : SourceParserBase<LineItem, SampleContext, int>
{
    private SampleContext context;
    private IEnumerable<IDataSource> dataSources;
    private DataSourceInfo dataSourceInfo;

    public SampleSourceParser(IEnumerable<IDataSource> dataSources)
    {
        this.context = new SampleContext();

        // Store the data sources so we can parse them later
        this.dataSources = dataSources;
    }

    // The ID of this Parser.
    public override string Id => nameof(SampleSourceParser);

    // Information about the Data Sources being parsed.
    public override DataSourceInfo DataSourceInfo => this.dataSourceInfo;

    public override void ProcessSource(
        ISourceDataProcessor<LineItem, SampleContext, int> dataProcessor,
        ILogger logger,
        IProgress<int> progress,
        CancellationToken cancellationToken)
    {
        var totalNumberLines = GetLineCount(this.dataSources);
        var linesProcessed = 0;

        foreach (var dataSource in this.dataSources)
        {
            if (!(dataSource is FileDataSource fileDataSource))
            {
                continue;
            }

            using (StreamReader reader = GetStreamReader(fileDataSource))
            {
                while (!reader.EndOfStream)
                {
                    var line = reader.ReadLine();
                    LineItem lineItem = ParseLineItem(line);

                    dataProcessor.ProcessDataElement(lineItem, this.context, cancellationToken);

                    linesProcessed++;
                    progress.Report((int)(100.0 * linesProcessed / totalNumberLines));
                }
            }
        }

        this.dataSourceInfo = new DataSourceInfo(...);
    }
}
```
## Extensibility
For brevity, this example refers to several ficticious methods and objects such as `GetLineCount`, `ParseLineItem`, and `SampleContext`. The import part, however, is the call to `dataProcessor.ProcessDataElement`. This method is what emits the `LineItem` we parsed. The emitted event will get sent through the data-processing pipeline into any `SourceCookers` hooked up to this `SourceParser`.
Lastly, the parameters to `DataSourceInfo`'s constructor are omitted here, since their calculations are highly dependant on the actual data being processed. For more help, refer to the [SqlPluginWithProcessingPipeline sample](../../samples/SqlPlugin/SqlPluginWithProcessingPipeline).
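That said, as a rough, hedged sketch (assuming the parser captured the first and last events while parsing, and that the hypothetical `LineItem.Timestamp` holds each event's wall clock time), the construction typically looks like:

```csharp
// Hypothetical sketch: firstEvent and lastEvent are assumed to have been
// captured during parsing. DataSourceInfo wants the first and last event
// offsets in nanoseconds relative to the start of the source, plus the
// UTC wall clock time of the start of the source.
this.dataSourceInfo = new DataSourceInfo(
    0,                                                        // first event offset, in ns
    (lastEvent.Timestamp - firstEvent.Timestamp).Ticks * 100, // last event offset, in ns (1 tick = 100 ns)
    firstEvent.Timestamp.ToUniversalTime());                  // wall clock of the first event, in UTC
```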
## Creating a CustomDataProcessorWithSourceParser

Now that we have a `SourceParser`, we can create a `CustomDataProcessor` that uses it. In this framework, our `CustomDataProcessor` does not do much work. Instead, it delegates the task of parsing our `DataSources` to the `SourceParser` we just created.

Since we're using the data-processing pipeline framework, we need to extend `CustomDataProcessorWithSourceParser<T, TContext, TKey>`. These generic parameters must be the same as the ones used in our `SourceParser`.

```cs
public sealed class SampleProcessor
    : CustomDataProcessorWithSourceParser<LineItem, SampleContext, int>
{
    public SampleProcessor(
        ISourceParser<LineItem, SampleContext, int> sourceParser,
        ProcessorOptions options,
        IApplicationEnvironment applicationEnvironment,
        IProcessorEnvironment processorEnvironment)
        : base(sourceParser, options, applicationEnvironment, processorEnvironment)
    {
    }
}
```

Passing the `sourceParser` parameter down to the base constructor hooks up our `SourceParser` to the data-processing pipeline we're creating. That's all the work we need to do for this `CustomDataProcessor`!
## Linking Your CustomDataProcessor to Your ProcessingSource
Now that we have a finished `CustomDataProcessor`, let's go back to the `ProcessingSource` we created and fill in `CreateProcessorCore`.
```cs
[ProcessingSource(...)]
[FileDataSource(...)]
public class MyProcessingSource : ProcessingSource
{
    public MyProcessingSource() : base()
    {
    }

    protected override bool IsDataSourceSupportedCore(IDataSource dataSource)
    {
        ...
    }

    protected override ICustomDataProcessor CreateProcessorCore(
        IEnumerable<IDataSource> dataSources,
        IProcessorEnvironment processorEnvironment,
        ProcessorOptions options)
    {
        var parser = new SampleSourceParser(dataSources);

        return new SampleProcessor(
            parser,
            options,
            this.ApplicationEnvironment,
            processorEnvironment);
    }
}
```
## Creating a DataCooker
`DataCookers` transform data from one type to another. A cooker will transform the
output from one or more sources, optionally producing new `DataOutput`s for other cookers
or end user applications to consume.
`SourceCookers` take events emitted from a `SourceParser` to produce `DataOutputs`.
`CompositeCookers` take data from one or more cookers (source or composite)
to create `DataOutputs`.
Let's create a `SourceDataCooker` that takes events from our `SampleSourceParser`.
```cs
public sealed class SampleSourceCooker
    : SourceDataCooker<LineItem, SampleContext, int>
{
    public static readonly DataCookerPath DataCookerPath =
        DataCookerPath.ForSource(nameof(SampleSourceParser), nameof(SampleSourceCooker));

    public SampleSourceCooker()
        : base(DataCookerPath)
    {
        this.CookedLineItems = new List<CookedLineItem>();
    }

    public override string Description => "My awesome SourceCooker!";

    public override ReadOnlyHashSet<int> DataKeys =>
        new ReadOnlyHashSet<int>(new HashSet<int>(new[] { 1, }));

    // Defines a DataOutput
    [DataOutput]
    public List<CookedLineItem> CookedLineItems { get; }

    public override DataProcessingResult CookDataElement(
        LineItem data,
        SampleContext context,
        CancellationToken cancellationToken)
    {
        //
        // Process each data element. This method will be called once
        // for each LineItem emitted by the SourceParser.
        //
        var cookedLineItem = TransformLineItem(data);
        this.CookedLineItems.Add(cookedLineItem);

        //
        // Return the status of processing the given data item.
        //
        return DataProcessingResult.Processed;
    }
}
```
This `SourceDataCooker` will transform each `LineItem` emitted by our `SourceParser` into a `CookedLineItem` and expose all of the cooked data through the `CookedLineItems` `DataOutput`.
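The `CookedLineItem` type and the fictitious `TransformLineItem` helper are ours to define; one hedged possibility, assuming `LineItem` carries a `Word` property:

```csharp
// Hypothetical cooked type; not part of the SDK.
public sealed class CookedLineItem
{
    public string Word { get; set; }
    public int CharacterCount { get; set; }
}

// Sketch of the fictitious TransformLineItem helper.
private CookedLineItem TransformLineItem(LineItem data)
{
    return new CookedLineItem
    {
        Word = data.Word,
        CharacterCount = data.Word.Length,
    };
}
```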
If we wanted to further cook each `CookedLineItem` into a `FurtherCookedLineItem`, we can create a `CompositeDataCooker`:
```cs
// A CompositeCooker
public sealed class SampleCompositeCooker
    : CookedDataReflector,
      ICompositeDataCookerDescriptor
{
    public static readonly DataCookerPath DataCookerPath =
        DataCookerPath.ForComposite(nameof(SampleCompositeCooker));

    public SampleCompositeCooker()
        : base(DataCookerPath)
    {
        this.FurtherCookedLineItems = new List<FurtherCookedLineItem>();
    }

    public string Description => "Composite Cooker";

    public DataCookerPath Path => DataCookerPath;

    // Defines a DataOutput
    [DataOutput]
    public List<FurtherCookedLineItem> FurtherCookedLineItems { get; private set; }

    // Declare all of the cookers that are used by this CompositeCooker.
    public IReadOnlyCollection<DataCookerPath> RequiredDataCookers => new[]
    {
        SampleSourceCooker.DataCookerPath,
    };

    public void OnDataAvailable(IDataExtensionRetrieval requiredData)
    {
        var cookedLineItems = requiredData.QueryOutput<List<CookedLineItem>>(
            new DataOutputPath(
                SampleSourceCooker.DataCookerPath,
                nameof(SampleSourceCooker.CookedLineItems)));

        this.FurtherCookedLineItems = FurtherCookLineItems(cookedLineItems);
    }
}
```
Here, we declare that this `CompositeCooker` depends on data from the `SampleSourceCooker` we created above. __The SDK will ensure that every required cooker has finished processing its data before `OnDataAvailable` is called__.
To get data from a `DataCooker`, the cooker must be __queried__ using the `IDataExtensionRetrieval` passed into `OnDataAvailable`. In the example above, we query `SampleSourceCooker` for its `CookedLineItems` `DataOutput`.
# Next Steps
Now that we've created a data-processing pipeline, we can create a `Table` that uses data exposed by our `DataCookers` to build itself. Continue reading at [Using the SDK/Creating a Table](./Creating-a-table.md)

# Creating a Simple SDK Plugin
This document assumes you have already created a `ProcessingSource` and your plugin is using the simple plugin framework. For more details, refer to [Creating an SDK Plugin](./Creating-your-plugin.md).
---
The simple plugin framework is centered around the `CustomDataProcessor` class. Your plugin must create a concrete implementation of the abstract `CustomDataProcessor` class provided by the SDK.
In this framework, your `CustomDataProcessor` is responsible for
1. Processing the `DataSource(s)` opened by a user that your `ProcessingSource` supports (i.e. the `DataSource(s)` that passed the `IsDataSourceSupportedCore` check)
2. Building all of the `Tables` your `ProcessingSource` discovers
> :information_source: When using the simple plugin framework, a `Table` is referred to as a `Simple Table`.
This document is outlined in 4 distinct steps:
* [Creating a CustomDataProcessor](#creating-a-customdataprocessor)
* [Creating a Simple Table](#creating-a-simple-table)
* [Building Your Simple Table](#building-your-simple-table)
* [Linking Your CustomDataProcessor to Your ProcessingSource](#linking-your-customdataprocessor-to-your-processingsource)
Refer to [the following sample](../../samples/SimpleDataSource/SampleAddIn.csproj) for source code that implements the steps outlined in this file.
## Creating a CustomDataProcessor
1. Create a public class that extends the abstract class `CustomDataProcessor`.
```cs
public sealed class SimpleCustomDataProcessor
    : CustomDataProcessor
{
}
```
2. Create a constructor that calls into the base class.
```cs
public sealed class SimpleCustomDataProcessor
    : CustomDataProcessor
{
    private string[] filePaths;

    public SimpleCustomDataProcessor(
        string[] filePaths,
        ProcessorOptions options,
        IApplicationEnvironment applicationEnvironment,
        IProcessorEnvironment processorEnvironment)
        : base(options, applicationEnvironment, processorEnvironment)
    {
        //
        // Store the file paths for all of the data sources this processor will eventually
        // need to process in a field for later
        //
        this.filePaths = filePaths;
    }
}
```
This example assumes that we're using our `SimpleCustomDataProcessor` with the `ProcessingSource` created in [Creating an SDK Plugin](./Creating-your-plugin.md) that advertises support for `.txt` files beginning with `mydata`. Because of this, we are passing in the `filePaths` for all of the `mydata*.txt` files opened by the user.
3. Implement `ProcessAsyncCore`. This method will be called to process data sources passed into your `CustomDataProcessor`. Typically, the data in the data source is parsed and converted from some raw form into something more
relevant and easily accessible to the processor.

In this example, we're calling a `ParseFiles` method that converts lines in each file opened into fictitious `LineItem` objects. In a
more realistic case, processing would probably be broken down into smaller units. For example, there might be logic
for parsing operating system processes and making that data queryable by time and/or memory layout.
This method is also typically where `this.dataSourceInfo` would be set (see below).
```cs
public sealed class SimpleCustomDataProcessor
    : CustomDataProcessor
{
    private string[] filePaths;
    private LineItem[] lineItems;

    public SimpleCustomDataProcessor(...) : base(...)
    {
        ...
    }

    protected override Task ProcessAsyncCore(
        IProgress<int> progress,
        CancellationToken cancellationToken)
    {
        this.lineItems = ParseFiles(this.filePaths, progress, cancellationToken);

        return Task.CompletedTask;
    }
}
```
We pass `progress` and `cancellationToken` into the fictitious `ParseFiles` method so it can use them. It is good practice to report parsing progress back to the `IProgress<int>` passed into `ProcessAsyncCore`. For example, `ParseFiles` could begin by quickly getting a combined line count of all files being processed, and then report what percentage of lines have been processed after each line of a file is parsed.
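As one hedged sketch of such a `ParseFiles` helper (the `LineItem` shape and the `ParseLineItem` helper are assumptions, not SDK types; requires `System.IO` and `System.Linq`):

```csharp
private LineItem[] ParseFiles(
    string[] filePaths,
    IProgress<int> progress,
    CancellationToken cancellationToken)
{
    // Cheap first pass: count lines so progress can be reported as a percentage.
    var totalLines = filePaths.Sum(path => File.ReadLines(path).Count());

    var lineItems = new List<LineItem>();
    var linesProcessed = 0;

    foreach (var path in filePaths)
    {
        foreach (var line in File.ReadLines(path))
        {
            cancellationToken.ThrowIfCancellationRequested();

            lineItems.Add(ParseLineItem(line));

            linesProcessed++;
            progress.Report((int)(100.0 * linesProcessed / totalLines));
        }
    }

    return lineItems.ToArray();
}
```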
4. Override the `GetDataSourceInfo` method. `DataSourceInfo` provides the driver some information about the data source
to provide a better user experience. It is expected that this method will not be called before
`ProcessAsyncCore` because the data necessary to create a `DataSourceInfo` object might not be
available beforehand.
A `DataSourceInfo` contains three important pieces of information:
- The UTC wallclock time of the start of the sources being processed
```cs
public sealed class SimpleCustomDataProcessor
    : CustomDataProcessor
{
    private string[] filePaths;
    private LineItem[] lineItems;
    private DataSourceInfo dataSourceInfo;

    public SimpleCustomDataProcessor(...) : base(...)
    {
        ...
    }

    protected override Task ProcessAsyncCore(
        IProgress<int> progress,
        CancellationToken cancellationToken)
    {
        ...

        this.dataSourceInfo = new DataSourceInfo(...);
    }

    public override DataSourceInfo GetDataSourceInfo()
    {
        return this.dataSourceInfo;
    }
}
```
The parameters to `DataSourceInfo`'s constructor typically are created while parsing `DataSources`. For more help, refer to the [SimpleDataSource sample](../../samples/SimpleDataSource/README.md).
Our `SimpleCustomDataProcessor` is now ready to process the `DataSources` opened by a user. The last step is ensuring our `SimpleCustomDataProcessor` can build tables discovered by our `ProcessingSource`. However, before implementing this, we must first create a `Table`.
## Creating a Simple Table
For our `SimpleCustomDataProcessor` to build a table, the table must first be discovered by our `ProcessingSource`. By default, a `ProcessingSource` will "discover" each `Simple Table` defined in its assembly that meets the following criteria:
- The class is public and concrete (not `abstract`)
- The class is decorated with `TableAttribute`
- The class exposes a `static public` property named "TableDescriptor" of type `TableDescriptor`
> :information_source: In a simple framework plugin, each `ProcessingSource` is responsible for "discovering" the `Simple Tables` that will be built by its `CustomDataProcessor`. By default, a `ProcessingSource` will "discover" all `Tables` defined in its assembly. _Most plugins do not need to override this behavior_. However, if you do wish to override this behavior, refer to [Custom Table Discovery](./Advanced/Custom-Table-Discovery.md).
Let's define a table `WordTable` that will eventually have one row for each distinct word in the `DataSources` processed.
```cs
[Table]
public sealed class WordTable
{
    public static TableDescriptor TableDescriptor =>
        new TableDescriptor(
            Guid.Parse("{E122471E-25A6-4F7F-BE6C-E62774FD0410}"), // The GUID must be unique across all tables
            "Word Stats",           // The Table must have a name
            "Statistics for words", // The Table must have a description
            "Words");               // A category is optional. It is useful for grouping
                                    // different types of tables in the SDK driver's UI.
}
```
## Building Your Simple Table
Now that we have a `Table` defined, we can override the `BuildTableCore` method of our `CustomDataProcessor`. This method is responsible for instantiating a given table and is called once for each table discovered by our `ProcessingSource`.

The table to build is identified by the `TableDescriptor` passed in as a parameter to this method. If the `CustomDataProcessor` isn't interested in the given table, it may return immediately.
To build a `Table`, the `CustomDataProcessor` uses the `ITableBuilder` passed into `BuildTableCore`. Typically, the task of interacting with the `ITableBuilder` and building the `Table` is delegated to the `Table`'s class.
```cs
public sealed class SimpleCustomDataProcessor
    : CustomDataProcessor
{
    private string[] filePaths;
    private LineItem[] lineItems;
    private DataSourceInfo dataSourceInfo;

    public SimpleCustomDataProcessor(...) : base(...)
    {
        ...
    }

    protected override Task ProcessAsyncCore(
        IProgress<int> progress,
        CancellationToken cancellationToken)
    {
        ...
    }

    public override DataSourceInfo GetDataSourceInfo()
    {
        ...
    }

    protected override void BuildTableCore(
        TableDescriptor tableDescriptor,
        ITableBuilder tableBuilder)
    {
        switch (tableDescriptor.Guid)
        {
            case var g when (g == WordTable.TableDescriptor.Guid):
                new WordTable(this.lineItems).Build(tableBuilder);
                break;

            default:
                break;
        }
    }
}
```
In this plugin framework, how a table is created is left up to the plugin author. In this example, we are using pattern matching to determine the `Table` attempting to be built. When we're asked to build the `WordTable`, we first create a new instance that has a reference to all of the parsed `LineItem` objects and then ask that instance to build itself with the `tableBuilder` parameter.
Here, we are calling a fictitious `Build` method on our `WordTable`. For documentation on interacting with an `ITableBuilder`, refer to [Building a Table](./Building-a-table.md).
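As a hedged preview of what that `Build` method could do (the constructor, the column, and the projection below are illustrative assumptions; `Projection.Index` and `Compose` are SDK helpers for mapping row indices to data):

```csharp
[Table]
public sealed class WordTable
{
    // ... the TableDescriptor property shown earlier ...

    // Illustrative column; the GUID is arbitrary but must be unique per column.
    private static readonly ColumnConfiguration WordColumn =
        new ColumnConfiguration(
            new ColumnMetadata(new Guid("{52B37609-10A4-4E17-8C05-27E0F1BDA68B}"), "Word"));

    private readonly LineItem[] lineItems;

    public WordTable(LineItem[] lineItems)
    {
        this.lineItems = lineItems;
    }

    public void Build(ITableBuilder tableBuilder)
    {
        // One row per parsed line; project each row index to its word.
        var wordProjection = Projection.Index(this.lineItems).Compose(item => item.Word);

        tableBuilder
            .SetRowCount(this.lineItems.Length)
            .AddColumn(WordColumn, wordProjection);
    }
}
```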
## Linking Your CustomDataProcessor to Your ProcessingSource
Now that our `CustomDataProcessor` is finished and we have a `Table` to build, the final step is linking our `SimpleCustomDataProcessor` to `MyProcessingSource`.
```cs
[ProcessingSource(...)]
[FileDataSource(...)]
public class MyProcessingSource : ProcessingSource
{
    public MyProcessingSource() : base()
    {
    }

    protected override bool IsDataSourceSupportedCore(IDataSource dataSource)
    {
        ...
    }

    protected override ICustomDataProcessor CreateProcessorCore(
        IEnumerable<IDataSource> dataSources,
        IProcessorEnvironment processorEnvironment,
        ProcessorOptions options)
    {
        return new SimpleCustomDataProcessor(
            dataSources.Select(ds => ds as FileDataSource).Select(fds => fds.FullPath).ToArray(),
            options,
            this.ApplicationEnvironment,
            processorEnvironment);
    }
}
```
With `CreateProcessorCore` implemented, our plugin is done and ready to use!
# Video Walkthrough
A video tutorial of making a simple SDK plugin can be found in the [SQL plugin sample](../../samples/SqlPlugin).
# Next Steps
Now that we've seen how to create a simple SDK plugin, let's see how we could have created this same plugin with the data-processing pipeline framework. Continue reading at [Using the SDK/Creating a Data-Processing Pipeline](./Creating-a-pipeline.md)

# Creating a Table
This document assumes you have already created a `ProcessingSource` and a data-processing pipeline. For more details, refer to [Creating a Data-Processing Pipeline](./Creating-a-pipeline.md).
---
A `Table` leverages data from one or more `DataCooker`s,
including cookers that may not necessarily be shipped with said `Table`, to build itself. If a plugin
exposes data through cookers, then you can author a table to leverage said data.
Creating a table involves two key steps:
* [Declaring the Table](#declaring-the-table)
* [Integrating the Table with our Data-Processing Pipeline](#integrating-the-table-with-our-data-processing-pipeline)
## Declaring the Table
For our `Table` to work, it must be discovered by the SDK runtime. To do this, the runtime will look for classes that:
- Are public and concrete (not `abstract`)
- Are decorated with `TableAttribute`
- Expose a `static public` property named "TableDescriptor" of type `TableDescriptor`
Let's define a table `WordTable` that will eventually have one row for each distinct word in the `DataSources` processed.
```cs
[Table]
public sealed class WordTable
{
    public static TableDescriptor TableDescriptor =>
        new TableDescriptor(
            Guid.Parse("{E122471E-25A6-4F7F-BE6C-E62774FD0410}"), // The GUID must be unique across all tables
            "Word Stats",           // The Table must have a name
            "Statistics for words", // The Table must have a description
            "Words");               // A category is optional. It is useful for grouping
                                    // different types of tables in the SDK driver's UI.
}
```
## Integrating the Table with our Data-Processing Pipeline
Our `WordTable` is going to receive data from `DataCookers` and then use that data to build itself. To accomplish this, we must

1) Declare the cooker(s) that provide the data the `Table` uses.
   - These may be declared with `RequiresCooker` attributes or in the `TableDescriptor`.
2) Add a `static void BuildTable` method that uses cooked data to build the table through an `ITableBuilder`.
```cs
[Table]
[RequiresCompositeCooker(nameof(SampleCompositeCooker))]
public sealed class WordTable
{
    public static TableDescriptor TableDescriptor =>
        new TableDescriptor(
            Guid.Parse("{E122471E-25A6-4F7F-BE6C-E62774FD0410}"),
            "Word Stats",
            "Statistics for words",
            "Words",
            requiredDataCookers: new List<DataCookerPath>
            {
                //
                // We can also list required data cookers here
                // instead of using the RequiresCompositeCooker attribute above
                //
            });

    //
    // This method, with this exact signature, is required so that the runtime can
    // build your table once all cookers have processed their data.
    //
    public static void BuildTable(
        ITableBuilder tableBuilder,
        IDataExtensionRetrieval requiredData)
    {
        var data = requiredData.QueryOutput<List<FurtherCookedLineItem>>(
            new DataOutputPath(
                SampleCompositeCooker.DataCookerPath,
                nameof(SampleCompositeCooker.FurtherCookedLineItems)));

        //
        // Build the table using the above data and the ITableBuilder parameter.
        //
    }
}
```
For documentation on interacting with an `ITableBuilder`, refer to [Building a Table](./Building-a-table.md).
There are no restrictions on the cookers which your `Table` may depend upon. For
example, your `Table` can depend solely on cookers defined in the `Table`'s assembly. Or,
your `Table` can depend on cookers from multiple plugins. As long as you have cooked data, you
can create a table.
In short, as long as the SDK runtime has loaded your table and all of the data cookers (and their dependencies) your table requires, your table will be available to use.
With the `WordTable` finished, our plugin is done and ready to use!
# Examples
For some real-world examples of tables, see the
[tables exposed by our LTTng tools](https://github.com/microsoft/Microsoft-Performance-Tools-Linux/tree/develop/LTTngDataExtensions/Tables).
# Video Walkthrough
A video tutorial of making a data-processing pipeline and table can be found in the [SQL plugin sample](../../samples/SqlPlugin).
# Next Steps
This documentation marks the end of all necessary information to begin creating extensible, well-structured
SDK plugins. For additional resources, you may browse our [samples folder](../../samples).
For more advanced usage of the SDK, please see the following:
- [Overview of Advanced Topics](Advanced/Overview.md)

Просмотреть файл

@ -1,93 +0,0 @@
# Creating an Extended Table
An __extended table__ is a `Table` that leverages data from one or more `DataCooker`s,
including cookers that may not necessarily be shipped with said `Table`. If a plugin
exposes data through cookers, then you can author an extended table to leverage said data.
At a bare minimum, you must create a new class for your extended table.
This class must have the following items:
1) The `Table` attribute
2) A static `TableDescriptor` property
3) A static `BuildTable` method.
4) Declarations for the cooker(s) that provide the data that the `Table` is using.
- These may be specified either with `RequiresCookerAttribute`s or declared in the `TableDescriptor`.
````cs
// Denotes that this class exposes a Table
[Table]
//
// One or more RequiresCooker attributes specifying the
// Cookers that this Table uses for data. Alternatively,
// you may use the 'requiredDataCookers' parameter of the
// TableDescriptor constructor.
//
[RequiresCooker("Cooker/Path")]
public class SampleExtendedTable
{
//
// This property is required to define your Table. This
// tells the runtime that a Table is available, and that
// any Cookers needed by the Table are to be scheduled for
// execution.
//
public static readonly TableDescriptor TableDescriptor =
new TableDescriptor(
// Table ID
// Table Name
// Table Description,
// Table Category
requiredDataCookers: new List<DataCookerPath>
{
// Paths to the Cookers that are needed for
// this table. This is not needed if you are
// using the RequiresCooker attributes.
});
//
// This method, with this exact signature, is required so that the runtime can
// build your table once all cookers have processed their data.
//
public static void BuildTable(
ITableBuilder tableBuilder,
IDataExtensionRetrieval requiredData
)
{
//
// Query cooked data from requiredData, and use the
// tableBuilder to build the table.
//
}
}
````
There are no restrictions on the cookers which your `Table` may depend upon. For
example, your `Table` can depend solely on cookers defined in the `Table`'s assembly. Or,
your `Table` can depend on cookers from multiple plugins. As long as you have cooked data, you
can create an extended table.
In short, as long as the SDK runtime has loaded
1) Your extended table and
2) All of the data cookers (and their dependencies) your table requires
then your table will be available to use.
# Examples
For some real-world examples of extended tables, see the
[tables exposed by our LTTng tools](https://github.com/microsoft/Microsoft-Performance-Tools-Linux/tree/develop/LTTngDataExtensions/Tables).
# Video Walkthrough
A video tutorial of making a data-processing pipeline and extended table can be found in the [SQL plugin sample](../../samples/SqlPlugin).
# Next Steps
This documentation marks the end of all necessary information to begin creating extensible, well-structured
SDK plugins. For additional resources, you may browse our [samples folder](../../samples).
For more advanced usage of the SDK, please see the following:
- [Overview of Advanced Topics](Advanced/Overview.md)

@@ -1,55 +1,75 @@
# Creating an SDK Plugin
This document outlines how to use the Performance Toolkit SDK (SDK) to create
an SDK plugin. A plugin can be used for processing trace files to be used by
automation, trace extractors, or viewers such as Windows Performance Analyzer.
Before creating a plugin, it is recommended to read [the overview of the SDK's architecture](../Architecture/Overview.md).
Creating a plugin can be outlined into 4 distinct steps:
* [Creating the Project](#creating-the-project)
* [Requirements](#requirements)
* [Creating Your Project](#creating-your-project)
* [Configuring Your Project](#configuring-your-project)
* [Add the Microsoft.Performance.SDK NuGet Package](#add-the-microsoftperformancesdk-nuget-package)
* [Picking your SDK version](#picking-your-sdk-version)
* [Install WPA for Debugging](#install-wpa-for-debugging)
* [Setup for Debugging Using WPA](#setup-for-debugging-using-wpa)
* [Creating a ProcessingSource](#creating-a-processingsource)
* [Create a ProcessingSource class](#create-a-processingsource-class)
* [Decorate your ProcessingSource with the ProcessingSourceAttribute](#decorate-your-processingsource-with-the-processingsourceattribute)
* [Decorate your ProcessingSource with a DataSourceAttribute](#decorate-your-processingsource-with-a-datasourceattribute)
* [Implement the required ProcessingSource methods](#implement-the-required-processingsource-methods)
* [Choosing a Plugin Framework](#choosing-a-plugin-framework)
* [Creating a CustomDataProcessor and Tables](#creating-a-customdataprocessor-and-tables)
---
## Creating the Project
Plugins are created as C# class libraries that get dynamically loaded by the SDK runtime. To begin your plugin creation, we will first walk through creating a C# project.
> :information_source: The SDK team is actively working on creating a dotnet template to simplify creating your project and writing the necessary plugin boilerplate code.
For simplicity, this section will assume you are using Visual Studio. The instructions may be adapted for other editors / IDEs.
### Requirements
1. [Visual Studio](https://visualstudio.microsoft.com/downloads/)
2. [.NET SDK that supports .NET Standard 2.0](https://dotnet.microsoft.com/download/visual-studio-sdks)
* [See .NET Standard 2.0 support options](https://docs.microsoft.com/en-us/dotnet/standard/net-standard)
Please refer to the links above to download and install the necessary requirements.
### Creating Your Project
1) Launch Visual Studio
2) Click "Create new project"
![VS2019_Create_New_Project.PNG](./.attachments/VS2019_CreateProject_Markup.png)
3) Select .NET Standard on the left, and choose "Class Library (.NET Standard)." Make sure that you are using .NET Standard 2.0, or a .NET version that supports .NET Standard 2.0 (such as .NET Core 3.1, .NET 6, etc.).
![VS2017_New_DotNetStandard_20_Project.PNG](./.attachments/VS2019_CreateProject_ClassLibrary_Markup.png)
4) Name your project
5) Click "Create"
### Configuring Your Project
You should now have a solution with one project file.
#### Add the Microsoft.Performance.SDK NuGet Package
[This documentation](https://docs.microsoft.com/en-us/nuget/quickstart/install-and-use-a-package-in-visual-studio) describes how to add a NuGet package to a Visual Studio project. Following these instructions, add the `Microsoft.Performance.SDK` package from [nuget.org](https://www.nuget.org) to your project.
#### Picking your SDK version
The version of the SDK you add to your project will determine which versions of SDK drivers your plugin will work with. For example, a plugin that depends on SDK version 0.109.2 will not work with a version of an SDK driver that uses SDK version 1.0.0.
To decide which version of the SDK to use, refer to the [known SDK driver compatibility lists](../Known-SDK-Driver-Compatibility/Overview.md).
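For example, once you have chosen a version from those lists, you can pin it explicitly in your project file. The version number below is illustrative only; use the version the compatibility lists call for:

```xml
<ItemGroup>
  <!-- Pin the SDK version supported by your target SDK driver. -->
  <PackageReference Include="Microsoft.Performance.SDK" Version="1.0.0" />
</ItemGroup>
```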
#### Install WPA for Debugging
One way to debug an SDK plugin project is to use WPA. Before we set up our project for this, WPA will need to be installed.
Please see [Using the SDK/Installing WPA](./Installing-WPA.md) for more information on how to install WPA.
#### Setup for Debugging Using WPA
1) Right click your project and select "Properties"
2) Select the "Debug" tab on the left
@@ -57,13 +77,179 @@ Please see [Using the SDK/Installing WPA](./Installing-WPA.md) for more informat
3) For "Launch", select "Executable"
4) For the "Executable", place the path to the `wpa.exe` that you previously installed as part of the WPT
* Typically this might be: `C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\wpa.exe`
5) For "Command line Arguments", add `-nodefault -addsearchdir <bin folder for your plugin>` (e.g. `-nodefault -addsearchdir C:\MyAddIn\bin\Debug\netstandard2.1`)
The above changes will cause Visual Studio to launch your installed WPA with the command-line arguments listed above whenever you start debugging your plugin (via `Debug` => `Start Debugging`). These arguments tell WPA to load plugins only from the directory provided.
> :information_source: If you're developing a plugin that uses `DataCookers` from other plugins, you may wish to **not** include the `-nodefault` flag.
If you encounter issues loading your plugin in WPA, please refer to [Troubleshooting documentation](../Troubleshooting.md#wpa-related-issues).
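Concretely, this debug configuration is equivalent to launching WPA from a command prompt like the following. Both paths are illustrative and depend on your WPT installation and your project's output folder:

```cmd
"C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\wpa.exe" -nodefault -addsearchdir C:\MyAddIn\bin\Debug\netstandard2.0
```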
---
## Creating a ProcessingSource
Every plugin, regardless of the plugin framework chosen below, requires at least one `ProcessingSource`. A `ProcessingSource` is an entry point for your plugin; it is what the SDK looks for when your plugin loads.
### Create a ProcessingSource class
In the project created above, replace the default `Program` class with one that extends the `ProcessingSource` class provided by the SDK.
```cs
public class MyProcessingSource : ProcessingSource
{
public MyProcessingSource() : base()
{
}
}
```
> :warning: Note that while a single assembly *can* define more than one `ProcessingSource`, __it is highly recommended that an assembly only contains
> a single `ProcessingSource`.__ Tables, data cookers, and custom data processors are almost always associated with a single `ProcessingSource`.
> It is best therefore to package __only one__ `ProcessingSource` and all of its associated classes in a single binary.
### Decorate your ProcessingSource with the ProcessingSourceAttribute
The SDK finds your `ProcessingSource` by looking for classes with a `ProcessingSourceAttribute`. This attribute gives the SDK information about your `ProcessingSource`, such as its name, description, and globally-unique id (`GUID`).
```cs
[ProcessingSource(
"{F73EACD4-1AE9-4844-80B9-EB77396781D1}", // The GUID must be unique for your ProcessingSource. You can use
// Visual Studio's Tools -> Create Guid… tool to create a new GUID
"Simple Data Source", // The ProcessingSource MUST have a name
"A data source to count words!")] // The ProcessingSource MUST have a description
public class MyProcessingSource : ProcessingSource
{
public MyProcessingSource() : base()
{
}
}
```
Though your `ProcessingSource` is discoverable to the SDK, it still needs to advertise the `DataSource(s)` it supports in order to be useful.
### Decorate your ProcessingSource with a DataSourceAttribute
In order to advertise the `DataSource(s)` your `ProcessingSource` supports, we must decorate it with a `DataSourceAttribute`. `DataSourceAttribute` is an abstract class provided by the SDK, so we must decide on a concrete implementation to use.
> __The concrete implementation of the `DataSourceAttribute` you decide to use determines what kind of inputs your `ProcessingSource` will receive from users.__
We can either make our own `DataSourceAttribute`, or use one of the implementations the SDK provides. The SDK provides implementations for common `DataSources`, such as the `FileDataSourceAttribute` - an attribute that tells the SDK your `ProcessingSource` should receive files opened by a user.
For this example, let's use the `FileDataSourceAttribute`:
```cs
[ProcessingSource(
"{F73EACD4-1AE9-4844-80B9-EB77396781D1}",
"Simple Data Source",
"A data source to count words!")]
[FileDataSource(
".txt", // A file extension is REQUIRED
"Text files")] // A description is OPTIONAL. The description is what appears in the
// file open menu to help users understand what the file type actually is
public class MyProcessingSource : ProcessingSource
{
public MyProcessingSource() : base()
{
}
}
```
By specifying `".txt"`, the SDK will route any `.txt` files opened by a user to our `ProcessingSource`.
### Implement the required ProcessingSource methods
There are two methods a `ProcessingSource` is required to implement: `IsDataSourceSupportedCore` and `CreateProcessorCore`.
`IsDataSourceSupportedCore` is called for each `DataSource` opened by a user that matches the `DataSourceAttribute` on your `ProcessingSource`. This is needed because your `ProcessingSource` may not be able to handle every `DataSource` opened. For example, our `ProcessingSource` advertises support for `.txt` files, but it may only be able to process files with specific contents.
For this example, let's do a basic check. We will check that the filename starts with `mydata`. This means our `ProcessingSource` could be asked to process a file named `mydata_2202.txt`, but not a file named `random_text_file.txt`.
```cs
[ProcessingSource(...)]
[FileDataSource(...)]
public class MyProcessingSource : ProcessingSource
{
public MyProcessingSource() : base()
{
}
protected override bool IsDataSourceSupportedCore(IDataSource dataSource)
{
if (!(dataSource is FileDataSource fileDataSource))
{
return false;
}
return Path.GetFileName(fileDataSource.FullPath).StartsWith("mydata");
}
}
```
Lastly, we need to implement `CreateProcessorCore`. A processor is a class whose job is processing the `DataSource(s)` opened by a user that pass the `IsDataSourceSupportedCore` check implemented above. Let's create a stub for this method:
```cs
[ProcessingSource(...)]
[FileDataSource(...)]
public class MyProcessingSource : ProcessingSource
{
public MyProcessingSource() : base()
{
}
protected override bool IsDataSourceSupportedCore(IDataSource dataSource)
{
...
}
protected override ICustomDataProcessor CreateProcessorCore(
IEnumerable<IDataSource> dataSources,
IProcessorEnvironment processorEnvironment,
ProcessorOptions options)
{
//
// Create a new instance of a class implementing ICustomDataProcessor here to process the specified data
// sources.
// Note that you can have more advanced logic here to create different processors if you would like based
// on the file, or any other criteria.
// You are not restricted to always returning the same type from this method.
//
return new MyDataProcessor(...); // TODO: pass correct arguments
}
}
```
Currently, `MyDataProcessor` does not exist and we do not know which arguments to pass in. This is because the implementation of `MyDataProcessor` depends on the plugin framework you choose in the [Choosing a Plugin Framework](#choosing-a-plugin-framework) step. We will revisit this method once we have created `MyDataProcessor`.
At this point, our `ProcessingSource` is almost complete. Before we continue, however, we must decide on which plugin framework our plugin will use.
## Choosing a Plugin Framework
The SDK supports two distinct plugin frameworks:
1. The "simple" plugin framework where
1. All classes and data are self-contained in the created plugin
2. The plugin author is responsible for creating a `DataSource` processing system
2. The "data-processing pipeline" framework where
1. Data can be shared between plugins through the creation of a Data-Processing Pipeline
2. The SDK facilitates efficient processing of `DataSources` through a Data-Processing Pipeline
> :warning: While the SDK fully supports "simple" plugin framework plugins, many of its features are designed with the "data-processing pipeline" framework in mind. If you choose to use the "simple" plugin framework, __you will not be able to use several SDK features (such as the ones listed below) in your plugin__. Going forward, the primary focus of new SDK features will be centered around data-processing pipelines.
It is __highly recommended__ that you use a data-processing pipeline framework when constructing your plugin. There are multiple benefits to this framework:
1. `DataSources` are processed in an efficient manner
2. Your plugin's architecture is scalable by default
3. Your plugin's architecture has strict separation of concerns
4. Your plugin can __utilize data from other plugins__
5. Your plugin can __share data with other plugins__
The simple plugin framework is primarily useful for _very rapid prototyping_ of your plugin. Once you are comfortable with the SDK and its concepts, we highly encourage switching your plugin to the data-processing pipeline framework.
## Creating a CustomDataProcessor and Tables
The last high-level components that must be created are a `CustomDataProcessor` and `Tables`. The implementation of these components depends on the plugin framework you chose above.
For a plugin following the simple plugin framework, refer to [Using the SDK/Creating a Simple SDK Plugin](./Creating-a-simple-sdk-plugin.md).
For a plugin following the data-processing pipeline framework, refer to [Using the SDK/Creating a Data-Processing Pipeline](./Creating-a-pipeline.md) and [Using the SDK/Creating a Table](./Creating-a-table.md).

@@ -1,7 +0,0 @@
# Abstract
This page outlines versions of WPA guaranteed to work with a specific version of the SDK.
# SDK Versions
Coming soon!

@@ -53,7 +53,7 @@ namespace Microsoft.Performance.SDK.Runtime
"The given directory cannot be found.");
//
// Compatibility Errors
//
/// <summary>

@@ -11,7 +11,7 @@ namespace Microsoft.Performance.SDK.Runtime
/// <remarks>
/// This is "CustomDataSources" because processing sources were historically
/// called custom data sources. This folder name is unchanged to increase
/// backwards compatability of plugins.
/// backwards compatibility of plugins.
/// </remarks>
public const string ProcessingSourceRootFolderName = "CustomDataSources";
}