This commit is contained in:
Reiley Yang 2021-02-12 13:24:36 -08:00 committed by GitHub
Parent 3dfebc77b5
Commit a0777d1e77
No known key found for this signature
GPG key ID: 4AEE18F83AFDEB23
23 changed files with 324 additions and 327 deletions

.github/workflows/ci.yml vendored
View file

@ -331,6 +331,19 @@ jobs:
      with:
        file: /home/runner/build/coverage.info
  markdown-lint:
    runs-on: ubuntu-latest
    steps:
      - name: check out code
        uses: actions/checkout@v2
      - name: install markdownlint-cli
        run: sudo npm install -g markdownlint-cli
      - name: run markdownlint
        run: markdownlint .
  misspell:
    runs-on: ubuntu-latest
    steps:

.markdownlint.yml Normal file
View file

@ -0,0 +1,8 @@
{
"default": true,
"MD029": { "style": "ordered" },
"ul-style": false, # MD004
"line-length": false, # MD013
"no-inline-html": false, # MD033
"fenced-code-language": false # MD040
}

View file

@ -5,18 +5,23 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Guideline to update the version
Increment the:
* MAJOR version when you make incompatible API/ABI changes,
* MINOR version when you add functionality in a backwards compatible manner, and
* PATCH version when you make backwards compatible bug fixes.
## [Unreleased]
## [0.0.1] 2020-12-16
### Added
* Trace API and SDK experimental
* OTLP Exporter
### Changed
### Removed

View file

@ -123,7 +123,7 @@ If a PR has been stuck (e.g. there are lots of debates and people couldn't agree
* Consolidating the perspectives and putting a summary in the PR. It is recommended to add a link into the PR description, which points to a comment with a summary in the PR conversation
* Stepping back to see if it makes sense to narrow down the scope of the PR or split it up.
If none of the above worked and the PR has been stuck for more than 2 weeks, the owner should bring it to the [OpenTelemetry C++ SIG meeting](https://zoom.us/j/8203130519). The meeting passcode is _77777_.
## Useful Resources

View file

@ -1 +1 @@
# TODO

View file

@ -30,7 +30,7 @@ of the current project.
* ubuntu-18.04 (Default GCC Compiler - 7.5.0)
* ubuntu-18.04 (GCC 4.8 with -std=c++11 flag)
* ubuntu-20.04 (Default GCC Compiler - 9.3.0 with -std=c++20 flags)
* macOS 10.15 (Xcode 12.2)
* Windows Server 2019 (Visual Studio Enterprise 2019)
In general, the code shipped from this repository should build on all platforms having a C++ compiler with [supported C++ standards](#supported-c-versions).

View file

@ -2,18 +2,22 @@
## Pre Release
1. Make sure all relevant changes for this release are included under the `Unreleased` section in `CHANGELOG.md` and are in language that non-contributors to the project can understand.
2. Run the pre-release script. It creates a branch `pre_release_<new-tag>` and updates `CHANGELOG.md` with the `<new-tag>`:

   ```sh
   ./buildscripts/pre_release.sh -t <new-tag>
   ```

3. Verify that `CHANGELOG.md` is updated properly:

   ```sh
   git diff main
   ```

4. Push the changes to upstream and create a Pull Request on GitHub.
   Be sure to include the curated changes from the [Changelog](./CHANGELOG.md) in the description.
## Tag
@ -22,43 +26,51 @@ Once the above Pull Request has been approved and merged it is time to tag the m
***IMPORTANT***: It is critical you use the same tag that you used in the Pre-Release step!
Failure to do so will leave things in a broken state.
1. Note down the commit hash of the master branch after the above PR is merged: `<commit-hash>`

   ```sh
   git show -s --format=%H
   ```

2. Create a GitHub tag on this commit hash:

   ```sh
   git tag -a "<new-tag>" -s -m "Version <new-tag>" "<commit-hash>"
   ```

3. Push the tag to the upstream remote:

   ```sh
   git push upstream
   ```
## Versioning
Once the tag is created, it's time to use that tag for runtime versioning.

1. Create a new branch for updating version information in `./sdk/src/version.cc`.

   ```sh
   git checkout -b update_version_${tag} master
   ```

2. Run the pre-commit script to update the version:

   ```sh
   ./buildscripts/pre-commit
   ```

3. Check if any changes made since the last release broke ABI compatibility. If yes, update `OPENTELEMETRY_ABI_VERSION_NO` in [version.h](api/include/opentelemetry/version.h).
4. Push the changes to upstream and create a Pull Request on GitHub.
5. Once the changes are merged, move the tag created earlier to the new commit hash from step 4.

   ```sh
   git tag -f <previous-tag> <new-commit-hash>
   git push --tags --force
   ```
## Release
Finally, create a Release for the new `<new-tag>` on GitHub. The release body should include all the release notes from the Changelog for this release.

View file

@ -19,7 +19,8 @@ must remain backwards compatible. Internal types are allowed to break.
### ABI Stability
Refer to the [ABI Policy](./docs/abi-policy.md) for more details. To summarise:
* ABI stability is guaranteed for the API.
* ABI stability is not guaranteed for the SDK. In case of ABI breaking changes, instead of bumping up the major version, a new `inline namespace` version will be created, and both old API and new API would be made available simultaneously.
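The inline-namespace scheme mentioned above can be sketched as follows (the names here are hypothetical, not the real SDK symbols): after an ABI break, the new implementation moves into a new inline namespace, so unqualified lookup finds the current version while code built against the old ABI can still reach the old symbols explicitly.

```cpp
#include <string>

namespace sdk
{
namespace v1
{
// Old ABI: still linkable as sdk::v1::Describe for existing binaries.
std::string Describe() { return "v1"; }
}  // namespace v1

inline namespace v2
{
// Current ABI: unqualified sdk::Describe resolves here.
std::string Describe() { return "v2"; }
}  // namespace v2
}  // namespace sdk
```

With this layout, `sdk::Describe()` picks up the `v2` implementation while `sdk::v1::Describe()` remains available, which is why a major-version bump can be avoided.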
@ -29,10 +30,11 @@ Refer to the [ABI Policy](./docs/abi-policy.md) for more details. To summarise
* Only a single source package containing both the api and sdk for all signals will be released as part of each GitHub release.
* There will be source package releases for api and sdk. There won't be separate releases for the signals. The release version numbers for api and sdk will not be in sync with each other. As there would be more frequent changes expected in sdk than in the api.
* Experimental releases: New (unstable) telemetry signals and features will be introduced behind feature flag protected by a preprocessor macro.
```cpp
#ifdef FEATURE_FLAG
<metrics api/sdk definitions>
#endif
```
As we deliver the package in source form, and the user is responsible to build it for their platform, the user must be
@ -40,14 +42,11 @@ Refer to the [ABI Policy](./docs/abi-policy.md) for more details. To summarise
The user must enable them explicitly through their build system (CMake, Bazel or others) to use any preview features.
The guidelines in creating feature flag would be:
* Naming:
  * `ENABLE_<SIGNAL>_PREVIEW`: for the experimental release of a signal's api/sdk, e.g. `METRICS_PREVIEW`, `LOGGING_PREVIEW`.
  * `ENABLE_<SIGNAL>_<FEATURE_NAME>_PREVIEW`: for the experimental release of a feature within a stable signal. For example, `TRACING_JAEGER_PREVIEW` to release the experimental Jaeger exporter for tracing.
* Cleanup: it is good practice to keep feature flags as short-lived as possible, and to keep their number low. They should be used such that it is easy to remove them once the experimental feature is stable.
* New signals will be stabilized via a **minor version bump**, and are not allowed to break existing stable interfaces.
Feature flags will be removed once we have a stable implementation for the signal.
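The preprocessor-flag mechanism described above can be sketched like this (the flag and function names are illustrative, not the real macros): the experimental API only exists when the preview macro is defined, e.g. via `-DENABLE_METRICS_PREVIEW` on the compiler command line or `add_compile_definitions(ENABLE_METRICS_PREVIEW)` in CMake.

```cpp
#include <string>

#ifdef ENABLE_METRICS_PREVIEW
// Experimental surface: compiled in only when the user opts in.
std::string MetricsStatus() { return "experimental metrics API enabled"; }
#else
// Default build: the preview feature is absent.
std::string MetricsStatus() { return "metrics preview not enabled"; }
#endif
```

Because the flag is off by default, users who never define it get no trace of the experimental code in their build.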
@ -58,33 +57,33 @@ Feature flags will be removed once we have a stable implementation for the signa
Purely for illustration purposes, not intended to represent actual releases:
* v0.0.1 release:
  * `opentelemetry-api 0.0.1`
    * Contains experimental APIs for trace and resource (no feature flag as major version is 0)
    * No APIs for logging and metrics available
  * `opentelemetry-sdk 0.0.1`
    * Contains experimental implementation of trace and resource (no feature flag as major version is 0)
    * No implementation of logging and metrics available
* v1.0.0 release (with traces):
  * `opentelemetry-api 1.0.0`
    * Contains stable APIs for trace, baggage and resource
    * Experimental metrics APIs behind feature flag
  * `opentelemetry-sdk 1.0.0`
    * Contains stable implementation of trace, baggage and resource
    * Experimental metrics implementation behind feature flag
* v1.5.0 release (with metrics):
  * `opentelemetry-api 1.5.0`
    * Contains stable APIs for metrics, trace, baggage, resource and context modules
    * Experimental logging API still only behind feature flag
  * `opentelemetry-sdk 1.5.0`
    * Contains stable implementation of metrics, trace, baggage, resource and context modules
    * Experimental logging implementation still only behind feature flag
* v1.10.0 release (with logging):
  * `opentelemetry-api 1.10.0`
    * Contains stable APIs for logging, metrics, trace, baggage, resource and context modules
  * `opentelemetry-sdk 1.10.0`
    * Contains stable SDK for logging, metrics, trace, baggage, resource and context modules
### Before moving to version 1.0.0
* Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

View file

@ -5,17 +5,22 @@ All notable changes to the api project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Guideline to update the version
Increment the:
* MAJOR version when you make incompatible API/ABI changes,
* MINOR version when you add functionality in a backwards compatible manner, and
* PATCH version when you make backwards compatible bug fixes.
## [Unreleased]
## [0.1.0] 2020-12-17
### Added
* Trace API experimental
### Changed
### Removed

View file

@ -1,4 +1,4 @@
# Building and running tests as a developer
CI tests can be run on docker by invoking the script `./ci/run_docker.sh ./ci/do_ci.sh <TARGET>` where the targets are:

View file

@ -1,3 +1,5 @@
# Application Binary Interface (ABI) Policy
To support scenarios where OpenTelemetry implementations are deployed as binary
plugins, certain restrictions are imposed on portions of the OpenTelemetry API.
@ -9,10 +11,10 @@ types.
In the areas of the API where we need ABI stability, we use C++ as an extended
C. We assume that standard language features like inheritance follow a
consistent ABI (vtable layouts, for example, are specified by the [Itanium
ABI](https://itanium-cxx-abi.github.io/cxx-abi/abi.html#vtable)) and can be used
across compilers, but don't rely on the ABI stability of the C++ standard
library classes.
ABI stability is not provided by the interfaces the SDK provides as
implementation hooks to vendor implementors, like exporters, processors,
@ -50,7 +52,6 @@ private:
};
```
Singletons defined by the OpenTelemetry API must use ABI stable types since they
could potentially be shared across multiple instrumented dynamic shared objects
(DSOs) compiled against different versions of the C++ standard library.
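The "C++ as extended C" style described above can be sketched as follows (the class names are illustrative, not the actual OpenTelemetry API): the boundary is a pure-virtual interface whose methods use only plain types such as `const char*` and `std::size_t`, so its layout depends on the vtable ABI rather than on whichever standard library a plugin was built against.

```cpp
#include <cstddef>

// ABI-boundary interface: no std::string, std::vector, etc. cross it.
class SpanExporter
{
public:
  virtual ~SpanExporter() = default;
  // Dispatch goes through the vtable, which the Itanium C++ ABI specifies.
  virtual bool Export(const char *data, std::size_t size) = 0;
};

// A trivial implementation that could live in a separately compiled plugin.
class NoopSpanExporter : public SpanExporter
{
public:
  bool Export(const char * /*data*/, std::size_t /*size*/) override
  {
    return true;  // accept and drop everything
  }
};
```

A host that only ever sees `SpanExporter*` can load such a plugin even if the plugin links a different C++ standard library.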

View file

@ -16,9 +16,10 @@ the API surface classes with [Abseil classes](https://abseil.io/) instead of
## Motivation
`nostd` classes in OpenTelemetry API were introduced for the following reasons:
* ABI stability: scenario where different modules are compiled with different
  compiler and incompatible standard library.
* backport of C++17 and above features to C++11 compiler.
The need for custom `nostd` classes is significantly diminished when the SDK is
compiled with C++17 or above compiler. Only `std::span` needs to be backported.
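The aliasing idea can be sketched like this (illustrative only): when building against C++17, most `nostd` names can become plain aliases of the standard classes, while `std::span` itself would still need a C++20 compiler or a backport such as `gsl::span`, so it is omitted here.

```cpp
#include <string_view>
#include <variant>

namespace nostd
{
// Under C++17, just forward to the standard library.
using string_view = std::string_view;

template <class... Types>
using variant = std::variant<Types...>;
}  // namespace nostd
```

User code written against `nostd::string_view` then compiles unchanged whether the alias or the custom C++11 backport is in effect.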
@ -53,8 +54,9 @@ is done in a corresponding `opentelemetry/nostd/*.h` header. Users still use
whether users require ABI stability or not.
Example environments that contain the full set of standard classes:
* C++17 or above compiler, with Microsoft GSL backport of `gsl::span`
* C++20 compilers: Visual Studio 2019+, latest LLVM clang, latest gcc
We continue fully supporting both models (`nostd`, `stdlib`) by running CI for both.
@ -128,6 +130,7 @@ to recompile with a matching toolset. The latest version of the Microsoft Visual
C++ Redistributable package (the Redistributable) works for all of them.
Visual Studio provides 1st class debug experience for the standard library.
## Build and Test considerations
### Separate flavors of SDK build

View file

@ -4,7 +4,6 @@ This document outlines a proposed implementation of the OpenTelemetry Metrics AP
The design supports a minimal implementation for the library to be used by an application. However, without the reference SDK or another implementation, no metric data will be collected.
## Use Cases
A *metric* is some raw measurement about a service, captured at run-time. Logically, the moment of capturing one of these measurements is known as a *metric event* which consists not only of the measurement itself, but the time that it was captured as well as contextual annotations which tie it to the event being measured. Users can inject instruments which facilitate the collection of these measurements into their services or systems which may be running locally, in containers, or on distributed platforms. The data collected are then used by monitoring and alerting systems to provide statistical performance data.
@ -17,39 +16,36 @@ A `ValueRecorder` is commonly used to capture latency measurements. Latency meas
`Observers` are a good choice in situations where a measurement is expensive to compute, such that it would be wasteful to compute on every request. For example, a system call is needed to capture process CPU usage, therefore it should be done periodically, not on each request.
## Design Tenets
* Reliability
  * The Metrics API and SDK should be “reliable,” meaning that metrics data will always be accounted for. It will get back to the user or an error will be logged. Reliability also entails that the end-user application will never be blocked. Error handling will therefore not interfere with the execution of the instrumented program.
* Thread Safety
  * As with the Tracer API and SDK, thread safety is not guaranteed on all functions and will be explicitly mentioned in documentation for functions that support concurrent calling. Generally, the goal is to lock functions which change the state of library objects (incrementing the value of a Counter or adding a new Observer, for example) or access global memory. As a performance consideration, the library strives to hold locks for as short a duration as possible to avoid lock contention concerns. Calls to create instrumentation may not be thread-safe, as this is expected to occur during initialization of the program.
* Scalability
  * As OpenTelemetry is a distributed tracing system, it must be able to operate on sizeable systems with predictable overhead growth. A key requirement of this is that the library does not consume unbounded memory resources.
* Security
  * Currently security is not a key consideration but may be addressed at a later date.
## **Meter Interface (`MeterProvider` Class)**
The singleton global `MeterProvider` can be used to obtain a global Meter by calling `global.GetMeter(name,version)` which calls `GetMeter()` on the initialized global `MeterProvider`
**Global Meter Provider:**
The API should support a global `MeterProvider`. When a global instance is supported, the API must ensure that `Meter` instances derived from the global `MeterProvider` are initialized after the global SDK implementation is first initialized.
A `MeterProvider` interface must support a `global.SetMeterProvider(MeterProvider)` function which installs the SDK implementation of the `MeterProvider` into the API.
**Obtaining a Meter from MeterProvider:**
**`GetMeter(name, version)` method must be supported**
* Expects 2 string arguments:
  * name (required): identifies the instrumentation library.
  * version (optional): specifies the version of the instrumenting library (the library injecting OpenTelemetry calls into the code).
```cpp
// meter_provider.h
class Provider
{
@ -92,9 +88,7 @@ private:
};
```
```cpp
// meter_provider.h
class MeterProvider
{
@ -114,19 +108,15 @@ public:
};
```
Using this MeterProvider, users can obtain new Meters through the GetMeter function.
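The flow above can be sketched with deliberately simplified stand-in types (these are not the real headers, whose classes are only excerpted in this document): a `MeterProvider` hands out named `Meter` instances through `GetMeter`.

```cpp
#include <memory>
#include <string>
#include <utility>

// Simplified stand-in for the Meter class described in this design.
class Meter
{
public:
  explicit Meter(std::string name) : name_(std::move(name)) {}
  const std::string &Name() const { return name_; }

private:
  std::string name_;
};

// Simplified stand-in for MeterProvider: GetMeter(name, version).
class MeterProvider
{
public:
  std::shared_ptr<Meter> GetMeter(const std::string &name,
                                  const std::string & /*version*/ = "")
  {
    return std::make_shared<Meter>(name);
  }
};
```

A caller would then write something like `auto meter = provider.GetMeter("my_library", "0.1.0");` to obtain an instrumentation-library-scoped meter.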
## Metric Instruments (`Meter` Class)
**Metric Events:**
This interface consists of a set of **instrument constructors**, and a **facility for capturing batches of measurements.**
```cpp
// meter.h
class Meter {
public:
@ -215,23 +205,18 @@ private:
}
```
### Meter API Class Design Considerations
According to the specification, both signed integer and floating point value types must be supported. This implementation will use short, int, float, and double types. Different constructors are used for the different metric instruments and even for different value types due to C++ being a strongly typed language. This is similar to Java's implementation of the meter class. Python gets around this by passing the value type and metric type to a single function called `create_metric`.
## Instrument Types (`Metric` Class)
Metric instruments capture raw measurements of designated quantities in instrumented applications. All measurements captured by the Metrics API are associated with the instrument which collected that measurement. These instruments are also templated allowing users to decide which data type to capture. This enhances user control over the memory used by their instrument set and provides greater precision when necessary.
### Metric Instrument Data Model
Each instrument must have enough information to meaningfully attach its measured values with a process in the instrumented application. As such, metric instruments contain the following information:
* name (string) — Identifier for this metric instrument.
* description (string) — Short description of what this instrument is capturing.
* value_type (string or enum) — Determines whether the value tracked is an int64 or double.
@ -242,7 +227,6 @@ Each instrument must have enough information to meaningfully attach its measured
Metric instruments are created through instances of the `Meter` class and each type of instrument can be described with the following properties:
* Synchronicity: A synchronous instrument is called by the user in a distributed [Context](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/context/context.md) (i.e., Span context, Correlation context) and is updated once per request. An asynchronous instrument is called by the SDK once per collection interval and only one value from the interval is kept.
* Additivity: An additive instrument is one that records additive measurements, meaning the final sum of updates is the only useful value. Non-additive instruments should be used when the intent is to capture information about the distribution of values.
* Monotonicity: A monotonic instrument is an additive instrument, where the progression of each sum is non-decreasing. Monotonic instruments are useful for monitoring rate information.
@ -254,19 +238,18 @@ The following instrument types will be supported:
Each measurement taken by a Metric instrument is a Metric event which must contain the following information:
* timestamp (implicit) — System time when measurement was captured.
* instrument definition (strings) — Name of instrument, kind, description, and unit of measure
* label set (key value pairs) — Labels associated with the capture, described further below.
* resources associated with the SDK at startup
**Label Set:**
A key:value mapping of some kind MUST be supported as annotation for each metric event. Labels must be represented the same way throughout the API (i.e. using the same idiomatic data structure) and duplicates are dealt with by taking the last value mapping.
To maintain ABI stability, we have chosen to implement this as a KeyValueIterable type. However, due to performance concerns, we may convert to a std::string internally.
**Calling Conventions:**
Metric instruments must support bound instrument calling where the labels for each capture remain the same. After a call to `instrument.Bind(labels)`, all subsequent calls to `instrument.add()` will include the labels implicitly in their capture.
@ -274,8 +257,7 @@ Direct calling must also be supported. The user can specify labels with the cap
MUST support `RecordBatch` calling (where a single set of labels is applied to several metric instruments).
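The bound and direct calling conventions can be sketched with a hypothetical minimal counter (this is not the real API, whose classes follow below): `Bind(labels)` fixes a label set, after which `add()` applies to it implicitly, while a direct `add(value, labels)` call binds, records, and releases in one step.

```cpp
#include <map>
#include <string>

// Hypothetical bound instrument: all captures share one label set.
class BoundCounter
{
public:
  void add(long value) { sum_ += value; }
  long sum() const { return sum_; }

private:
  long sum_ = 0;
};

// Hypothetical instrument supporting both calling conventions.
class Counter
{
public:
  using Labels = std::map<std::string, std::string>;

  // Bound calling: returns the bound instrument for this label set.
  BoundCounter &Bind(const Labels &labels) { return bound_[labels]; }

  // Direct calling: labels supplied with the capture itself.
  void add(long value, const Labels &labels) { Bind(labels).add(value); }

private:
  std::map<Labels, BoundCounter> bound_;
};
```

Both paths funnel into the same bound instrument, which is what makes later aggregation by label set cheap.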
```cpp
// metric.h
/*
@ -338,9 +320,7 @@ public:
};
```
```cpp
template <class T>
class SynchronousInstrument: public Instrument {
public:
@ -439,11 +419,9 @@ private:
};
```
The Counter below is an example of one Metric instrument. It is important to note that in the Counter's add function, it binds the labels to the instrument before calling add, then unbinds. Therefore all interactions with the aggregator take place through bound instruments and, by extension, the BaseBoundInstrument class.
```cpp
template <class T>
class BoundCounter: public BoundSynchronousInstrument{ //override bind?
public:
@ -503,9 +481,7 @@ public:
}
```
```cpp
// The above Counter and BoundCounter are examples of 1 metric instrument.
// The remaining 5 will also be implemented in a similar fashion.
class UpDownCounter: public SynchronousInstrument;
@ -520,11 +496,8 @@ class ValueObserver: public AsynchronousInstrument;
class BoundValueObserver: public AsynchronousInstrument;
```
### Metric Class Design Considerations
OpenTelemetry requires several types of metric instruments with very similar core usage, but slightly different tracking schemes. As such, a base Metric class defines the necessary functions for each instrument leaving the implementation for the specific instrument type. Each instrument then inherits from this base class making the necessary modifications. In order to facilitate efficient aggregation of labeled data, a complementary BoundInstrument class is included which attaches the same set of labels to each capture. Knowing that all data in an instrument has the same labels enhances the efficiency of any post-collection calculations as there is no need for filtering or separation. In the above code examples, a Counter instrument is shown but all 6 mandated by the specification will be supported.
A base BoundInstrument class also serves as the foundation for more specific bound instruments. It also facilitates the practice of reference counting which can determine when an instrument is unused and can improve memory optimization as inactive bound instruments can be removed for performance.
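The reference-counting idea mentioned above can be sketched as follows (illustrative only, not the real base class): each bound instrument counts its active users, so the SDK can tell when a bound instrument is unused and may be reclaimed.

```cpp
// Hypothetical minimal base for bound instruments with reference counting.
class BoundInstrument
{
public:
  void IncreaseRef() { ++ref_count_; }
  void DecreaseRef()
  {
    if (ref_count_ > 0)
      --ref_count_;
  }
  // An unused bound instrument is a candidate for removal.
  bool Unused() const { return ref_count_ == 0; }

private:
  int ref_count_ = 0;
};
```

When the count returns to zero, the SDK can drop the bound instrument from its internal map without affecting any live callers.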

View file

@ -3,13 +3,13 @@
## Design Tenets
* Reliability
  * The Metrics API and SDK should be “reliable,” meaning that metrics data will always be accounted for. It will get back to the user or an error will be logged. Reliability also entails that the end-user application will never be blocked. Error handling will therefore not interfere with the execution of the instrumented program. The library may “fail fast” during the initialization or configuration path, however.
* Thread Safety
  * As with the Tracer API and SDK, thread safety is not guaranteed on all functions and will be explicitly mentioned in documentation for functions that support concurrent calling. Generally, the goal is to lock functions which change the state of library objects (incrementing the value of a Counter or adding a new Observer, for example) or access global memory. As a performance consideration, the library strives to hold locks for as short a duration as possible to avoid lock contention concerns. Calls to create instrumentation may not be thread-safe, as this is expected to occur during initialization of the program.
* Scalability
  * As OpenTelemetry is a distributed tracing system, it must be able to operate on sizeable systems with predictable overhead growth. A key requirement of this is that the library does not consume unbounded memory resources.
* Security
  * Currently security is not a key consideration but may be addressed at a later date.
## SDK Data Path Diagram
This is the control path our implementation of the metrics SDK will follow. There are five main components: the controller, accumulator, aggregators, processor, and exporter. Each of these components will be further elaborated on.
## API Class Implementations
### MeterProvider Class
The singleton global `MeterProvider` can be used to obtain a global Meter by calling `global.GetMeter(name,version)` which calls `GetMeter()` on the initialized global `MeterProvider`.
**Global Meter Provider:**
The API should support a global `MeterProvider`. When a global instance is supported, the API must ensure that `Meter` instances derived from the global `MeterProvider` are initialized after the global SDK implementation is first initialized.
A `MeterProvider` interface must support a `global.SetMeterProvider(MeterProvider)` function which installs the SDK implementation of the `MeterProvider` into the API.
**Obtaining a Meter from MeterProvider:**
**`GetMeter(name, version)` method must be supported**
* Expects 2 string arguments:
  * name (required): identifies the instrumentation library.
  * version (optional): specifies the version of the instrumenting library (the library injecting OpenTelemetry calls into the code).
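As a concrete illustration of this lookup, here is a minimal sketch. The caching policy (one `Meter` per distinct name/version pair) and all type names here are our own assumptions for illustration, not the specified API:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for the API Meter type.
struct Meter
{
  std::string name;
  std::string version;
};

class MeterProvider
{
public:
  // GetMeter(name, version): returns a cached Meter per (name, version) pair.
  std::shared_ptr<Meter> GetMeter(const std::string &name, const std::string &version = "")
  {
    auto key = name + "@" + version;
    auto it  = meters_.find(key);
    if (it == meters_.end())
      it = meters_.emplace(key, std::make_shared<Meter>(Meter{name, version})).first;
    return it->second;
  }

private:
  std::map<std::string, std::shared_ptr<Meter>> meters_;
};
```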
#### Implementation
The Provider class offers static functions to both get and set the global MeterProvider. Once a user sets the MeterProvider, it will replace the default no-op implementation stored as a private variable and persist for the remainder of the program's execution. This pattern imitates the TracerProvider used in the tracing portion of this SDK.
```cpp
// meter_provider.cc
class MeterProvider
{
public:
  // ...
};
```
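Since the diff above elides most of the class body, here is a self-contained sketch of the get/set pattern just described. The no-op default and the static get/set functions follow the document; everything else (including the absence of locking) is a simplification for illustration:

```cpp
#include <cassert>
#include <memory>

// Default implementation: a no-op MeterProvider.
class MeterProvider
{
public:
  virtual ~MeterProvider() = default;
  virtual bool IsNoop() const { return true; }
};

// Stand-in for the SDK implementation that a user would install.
class SdkMeterProvider : public MeterProvider
{
public:
  bool IsNoop() const override { return false; }
};

class Provider
{
public:
  static std::shared_ptr<MeterProvider> GetMeterProvider() { return Slot(); }

  // Installing a provider replaces the default no-op for the rest of execution.
  static void SetMeterProvider(std::shared_ptr<MeterProvider> provider)
  {
    Slot() = std::move(provider);
  }

private:
  static std::shared_ptr<MeterProvider> &Slot()
  {
    static std::shared_ptr<MeterProvider> slot = std::make_shared<MeterProvider>();
    return slot;
  }
};
```

A real implementation would guard the slot with a lock or atomic pointer; this sketch leaves thread safety out.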
### Meter Class
**Metric Events:**
Metric instruments are primarily defined by their name. Names MUST conform to the following syntax:
* Non-empty string
* case-insensitive
* first character non-numeric, non-space, non-punctuation
**In order to achieve this, each instance of the `Meter` class will have a container storing all metric instruments that were created using that meter. This way, metric instruments created from different instantiations of the `Meter` class will never be compared to one another and will never result in an error.**
#### Implementation
```cpp
// meter.h / meter.cc
class Meter : public API::Meter {
public:
  // ...
};
```
```cpp
// record.h
/*
* This class is used to pass checkpointed values from the Meter
  // ...
};
```
Metric instruments created from this Meter class will be stored in a map (or another, similar container, which needs to be nostd) called “metrics.” This is identical to the Python implementation and makes sense because the SDK implementation of the `Meter` class should have a function titled `collect_all()` that collects metrics for every instrument created from this meter. In contrast, Java's implementation has a `MeterSharedState` class that contains a registry (hash map) of all metric instruments spawned from this meter. However, since each `Meter` has its own unique instruments, it is easier to store the instruments in the meter itself.
The SDK implementation of the `Meter` class will contain a function called `collect_all()` that will collect the measurements from each metric stored in the `metrics` container. The implementation of this class acts as the accumulator in the SDK specification.
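A minimal sketch of that `collect_all()` idea — the meter owning a `metrics` container and sweeping it into checkpointed records — could look like this; the instrument and record types are simplified stand-ins, not the SDK's real classes:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Simplified stand-in for the SDK's Record class.
struct Record
{
  std::string name;
  int value;
};

// Simplified stand-in for a counter instrument.
struct Counter
{
  std::string name;
  int sum = 0;
  void add(int v) { sum += v; }
};

class Meter
{
public:
  std::shared_ptr<Counter> NewIntCounter(const std::string &name)
  {
    auto ctr  = std::make_shared<Counter>();
    ctr->name = name;
    metrics_[name] = ctr;  // the meter owns every instrument it creates
    return ctr;
  }

  // Sweep every instrument created from this meter into a batch of records.
  std::vector<Record> CollectAll()
  {
    std::vector<Record> records;
    for (auto &kv : metrics_)
      records.push_back(Record{kv.first, kv.second->sum});
    return records;
  }

private:
  std::map<std::string, std::shared_ptr<Counter>> metrics_;
};
```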
**Pros of this implementation:**
* Different constructors and overloaded template calls to those constructors for the various metric instruments allows us to forego much of the code duplication involved in supporting various types.
* Storing the metric instruments created from this meter directly in the meter object itself allows us to implement the collect_all method without creating a new class that contains the meter state and instrument registry.
**Cons of this implementation:**
* Different constructors for the different metric instruments mean less duplicated code, but still a substantial amount.
* Storing the metric instruments in the Meter class means that if we have multiple meters, metric instruments are stored in various objects. Using an instrument registry that maps meters to metric instruments resolves this. However, we have designed our SDK to only support one Meter instance.
* Storing 8 maps in the meter class is costly. However, we believe that this is acceptable because these maps only need to be created once, at the instantiation of the meter class. **We believe that these maps will not slow down the pipeline in any meaningful way.**
**The SDK implementation of the `Meter` class will act as the Accumulator mentioned in the SDK specification.**
### Metric Instrument Class
Metric instruments capture raw measurements of designated quantities in instrumented applications. All measurements captured by the Metrics API are associated with the instrument which collected that measurement.
#### Metric Instrument Data Model

Each instrument must have enough information to meaningfully attach its measured values to a process in the instrumented application. As such, metric instruments contain the following fields:
* name (string) — Identifier for this metric instrument.
* description (string) — Short description of what this instrument is capturing.
* value_type (string or enum) — Determines whether the value tracked is an int64 or double.
Each measurement taken by a Metric instrument is a Metric event which must contain the following information:
* timestamp (implicit) — System time when measurement was captured.
* instrument definition (strings) — Name of instrument, kind, description, and unit of measure
* label set (key value pairs) — Labels associated with the capture, described further below.
* resources associated with the SDK at startup
**Label Set:**
A key:value mapping of some kind MUST be supported as annotation on each metric event. Labels must be represented the same way throughout the API (i.e. using the same idiomatic data structure), and duplicates are dealt with by taking the last value mapping.

Due to the requirement to maintain ABI stability we have chosen to implement labels as type KeyValueIterable. For performance reasons, however, we may convert them to std::string internally.
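A sketch of the last-value-wins rule for duplicate keys, together with a canonical string encoding such as the SDK might use internally (the encoding format here is entirely hypothetical):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

using Labels = std::vector<std::pair<std::string, std::string>>;

// Duplicate keys are resolved by taking the last value mapping.
std::map<std::string, std::string> Deduplicate(const Labels &labels)
{
  std::map<std::string, std::string> out;
  for (const auto &kv : labels)
    out[kv.first] = kv.second;  // later values overwrite earlier ones
  return out;
}

// Hypothetical canonical encoding: sorted "key:value," pairs.
std::string Canonicalize(const Labels &labels)
{
  std::string encoded;
  for (const auto &kv : Deduplicate(labels))
    encoded += kv.first + ":" + kv.second + ",";
  return encoded;
}
```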
#### Implementation
A base Metric class defines the constructor and binding functions which each metric instrument will need. Once an instrument is bound, it becomes a BoundInstrument which extends the BaseBoundInstrument class. The BaseBoundInstrument is what communicates with the aggregator and performs the actual updating of values. An enum helps to organize the numerous types of metric instruments that will be supported.
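The binding flow just described can be sketched as follows. The class and method names are illustrative only, and where a real implementation would keep one aggregator per distinct label set, this sketch keeps a single one for brevity:

```cpp
#include <cassert>
#include <string>

// Stand-in for the aggregator a bound instrument updates.
struct Aggregator
{
  long sum = 0;
  void Update(long v) { sum += v; }
};

// The bound instrument performs the actual updating of values.
class BoundIntCounter
{
public:
  explicit BoundIntCounter(Aggregator *agg) : agg_(agg) {}
  void add(long value) { agg_->Update(value); }

private:
  Aggregator *agg_;
};

class IntCounter
{
public:
  // Binding to a label set yields a bound instrument wired to an aggregator.
  BoundIntCounter bind(const std::string & /*labels*/)
  {
    return BoundIntCounter(&agg_);
  }
  long sum() const { return agg_.sum; }

private:
  Aggregator agg_;
};
```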
**For more information about the structure of metric instruments, refer to the Metrics API Design document.**
## Metrics SDK Data Path Implementation
Note: these requirements come from a specification currently under development. Changes and feedback are in [PR #347](https://github.com/open-telemetry/opentelemetry-specification/pull/347) and the current document is linked [here](https://github.com/open-telemetry/opentelemetry-specification/blob/64bbb0c611d849b90916005d7714fa2a7132d0bf/specification/metrics/sdk.md).
<!-- [//]: # ![Data Path Diagram](../images/DataPath.png) -->
### Accumulator
The Accumulator is responsible for computing aggregation over a fixed unit of time. It essentially takes a set of captures and turns them into a quantity that can be collected and used for meaningful analysis by maintaining aggregators for each active instrument and each distinct label set. For example, the aggregator for a counter must combine multiple calls to Add(increment) into a single sum.
Design choice: We have chosen to implement the Accumulator as the SDK implementation of the Meter interface shown above.
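A toy version of that accumulate-and-sweep behavior, keyed by (instrument, label set), with delta semantics on collection. The string-encoded label key and flat record tuples are simplifications:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

class Accumulator
{
public:
  // One running sum per (instrument, encoded label set) pair.
  void Add(const std::string &instrument, const std::string &labels, long value)
  {
    active_[{instrument, labels}] += value;
  }

  // Checkpoint the active records and reset them (delta semantics).
  std::vector<std::pair<std::pair<std::string, std::string>, long>> Collect()
  {
    std::vector<std::pair<std::pair<std::string, std::string>, long>> checkpoint(
        active_.begin(), active_.end());
    active_.clear();
    return checkpoint;
  }

private:
  std::map<std::pair<std::string, std::string>, long> active_;
};
```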
### Aggregator
The term *aggregator* refers to an implementation that can combine multiple metric updates into a single, combined state for a specific function. Aggregators MUST support `Update()`, `Checkpoint()`, and `Merge()` operations. `Update()` is called directly from the Metric instrument in response to a metric event, and may be called concurrently. The `Checkpoint()` operation is called to atomically save a snapshot of the Aggregator. The `Merge()` operation supports dimensionality reduction by combining state from multiple aggregators into a single Aggregator state.
The SDK must include the Counter aggregator, which maintains a sum, and the Gauge aggregator, which maintains last value and timestamp. In addition, the SDK should include MinMaxSumCount, Sketch, Histogram, and Exact aggregators.
All operations should be atomic in languages that support them.
```cpp
// aggregator.cc
class Aggregator {
public:
  // ...
};
```
```cpp
// counter_aggregator.cc
template <class T>
class CounterAggregator : public Aggregator<T> {
  // ...
};
```
This Counter is an example Aggregator. We plan on implementing all the Aggregators in the specification: Counter, Gauge, MinMaxSumCount, Sketch, Histogram, and Exact.
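Because the diff elides the class bodies above, here is a compact, self-contained sketch of the counter case honoring the `Update()`/`Checkpoint()`/`Merge()` contract, using `std::atomic` so `Update()` may be called concurrently (details such as templating on the value type are omitted):

```cpp
#include <atomic>
#include <cassert>

class CounterAggregator
{
public:
  // Called directly from the instrument; may run concurrently.
  void Update(long value) { sum_.fetch_add(value); }

  // Atomically snapshot the running sum and reset it.
  void Checkpoint() { checkpoint_ = sum_.exchange(0); }

  // Fold another aggregator's checkpoint into this one (dimension reduction).
  void Merge(const CounterAggregator &other) { checkpoint_ += other.checkpoint_; }

  long checkpoint() const { return checkpoint_; }

private:
  std::atomic<long> sum_{0};
  long checkpoint_{0};
};
```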
### Processor
The Processor SHOULD act as the primary source of configuration for exporting metrics from the SDK. The two kinds of configuration are:
1. Given a metric instrument, choose which concrete aggregator type to apply for in-process aggregation.
2. Given a metric instrument, choose which dimensions to export by (i.e., the "grouping" function).
During the collection pass, the Processor receives a full set of check-pointed aggregators corresponding to each (Instrument, LabelSet) pair with an active record managed by the Accumulator. According to its own configuration, the Processor at this point determines which dimensions to aggregate for export; it computes a checkpoint of (possibly) reduced-dimension export records ready for export. It can be thought of as the business logic or processing phase in the pipeline.
Change of dimensions: The user-facing metric API allows users to supply LabelSets containing an unlimited number of labels for any metric update. Some metric exporters will restrict the set of labels when exporting metric data, either to reduce cost or because of system-imposed requirements. A *change of dimensions* maps input LabelSets with potentially many labels into a LabelSet with a fixed set of label keys. A change of dimensions eliminates labels with keys not in the output LabelSet and fills in empty values for label keys that are not in the input LabelSet. This can be used for different filtering options, rate limiting, and alternate aggregation schemes. Additionally, it will be used to prevent unbounded memory growth through capping collected data. The community is still deciding exactly how metrics data will be pruned and this document will be updated when a decision is made.
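The projection described above can be sketched as a small function; the empty-string fill value mirrors the "fills in empty values" rule, though the actual representation the SDK will use is still an assumption here:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Project an input label set onto a fixed list of output keys:
// labels whose keys are not in output_keys are dropped, and output keys
// missing from the input are filled with an empty value.
std::map<std::string, std::string> ReduceDimensions(
    const std::map<std::string, std::string> &input,
    const std::vector<std::string> &output_keys)
{
  std::map<std::string, std::string> output;
  for (const auto &key : output_keys)
  {
    auto it     = input.find(key);
    output[key] = (it == input.end()) ? "" : it->second;
  }
  return output;
}
```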
The following is a pseudo code implementation of a simple Processor.
Note: Josh MacDonald is working on implementing a [basic Processor](https://github.com/jmacd/opentelemetry-go/blob/jmacd/mexport/sdk/metric/processor/simple/simple.go) which allows for further Configuration that lines up with the specification in Go. He will be finishing the implementation and updating the specification within the next few weeks.
Design choice: We recommend implementing the simple Processor first as part of the MVP, and then implementing the basic Processor later on. Josh recommended having both, since they serve different processing needs.
```cpp
// processor.cc
class Processor {
public:
  // ...
};
```
### Controller
Controllers are generally responsible for binding the Accumulator, the Processor, and the Exporter. The controller initiates the collection and export pipeline and manages all the moving parts within it. It also governs the flow of data through the SDK components. Users interface with the controller to begin the collection process.
We recommend implementing the PushController as the initial implementation of the Controller. This Controller is the base controller in the specification. We may also implement the PullController if we have the time to do it.
```cpp
// push_controller.cc
class PushController {
  // ...
};
```
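To make the collect→process→export flow concrete, here is a deliberately simplified, single-threaded sketch; a real PushController would drive `Tick()` from a background thread at the collection interval, and the collect/export hooks stand in for the Accumulator, Processor, and Exporter:

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

class PushController
{
public:
  PushController(std::function<std::vector<long>()> collect,
                 std::function<void(const std::vector<long> &)> do_export)
      : collect_(std::move(collect)), export_(std::move(do_export))
  {}

  // One collection interval: sweep the accumulator and hand off to the exporter.
  void Tick()
  {
    if (running_)
      export_(collect_());
  }

  void start() { running_ = true; }

  void stop()
  {
    Tick();  // flush whatever is left in the pipeline before shutting down
    running_ = false;
  }

private:
  std::function<std::vector<long>()> collect_;
  std::function<void(const std::vector<long> &)> export_;
  bool running_ = false;
};
```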
### Exporter
The exporter SHOULD be called with a checkpoint of finished (possibly dimensionally reduced) export records. Most configuration decisions have been made before the exporter is invoked, including which instruments are enabled, which concrete aggregator types to use, and which dimensions to aggregate by.
Design choice: Our idea is to take the simple trace example [OStreamSpanExporter](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/examples/simple/main.cc) and add Metric functionality to it. This will allow us to verify that what we are implementing in the API and SDK works as intended. The exporter will go through the different metric instruments and print the value stored in their aggregators to stdout, **for simplicity only Sum is shown here, but all aggregators will be implemented**.
```cpp
// stdout_exporter.cc
class StdoutExporter: public exporter {
/*
   * ...
   */
  // ...
};
```
```cpp
enum class ExportResult {
kSuccess,
kFailure,
};
```
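Putting the pieces together, here is a sketch of an ostream exporter that prints each record's aggregated sum and returns `ExportResult`. The `MetricRecord` type is a stand-in for the SDK's `Record` class, and only the Sum case is shown, matching the simplification above:

```cpp
#include <cassert>
#include <ostream>
#include <sstream>
#include <string>
#include <vector>

enum class ExportResult
{
  kSuccess,
  kFailure,
};

// Stand-in for the SDK's Record class (name + checkpointed sum only).
struct MetricRecord
{
  std::string name;
  long sum;
};

class StdoutExporter
{
public:
  // Takes any ostream so tests can capture output; std::cout in practice.
  explicit StdoutExporter(std::ostream &out) : out_(out) {}

  ExportResult Export(const std::vector<MetricRecord> &records)
  {
    for (const auto &rec : records)
      out_ << rec.name << ": " << rec.sum << "\n";
    return ExportResult::kSuccess;
  }

private:
  std::ostream &out_;
};
```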
## Test Strategy / Plan
Since there is a specification we will be following, we will not have to write out user stories for testing. We will generally only be writing functional unit tests for this project. The C++ OpenTelemetry repository uses [Googletest](https://github.com/google/googletest) because it provides test coverage reports and also allows us to easily integrate code coverage tools such as [codecov.io](http://codecov.io/) with the project. A required coverage target of 90% will help to ensure that our code is fully tested.
An open-source header-only testing framework called [Catch2](https://github.com/catchorg/Catch2) is an alternate option which would satisfy our testing needs. It is easy to use, supports behavior driven development, and does not need to be embedded in the project as source files to operate (unlike Googletest). Code coverage would still be possible using this testing framework but would require us to integrate additional tools. This framework may be preferred as an agnostic replacement for Googletest and is widely used in open source projects.

The OStreamExporter will only be implementing a Push Exporter framework.
## Design Tenets
* Reliability
  * The Exporter should be reliable; data exported should always be accounted for. The data will either all be successfully exported to the destination server, or, in the case of failure, the data is dropped. `Export` will always return failure or success to notify the user of the result.
* Thread Safety
  * The OStreamExporter can be called simultaneously; however, we do not handle this in the Exporter. Synchronization should be done at a lower level.
* Scalability
  * The Exporter must be able to operate on sizeable systems with predictable overhead growth. A key requirement of this is that the library does not consume unbounded memory resources.
* Security
  * The OStreamExporter should only be used for development and testing purposes, where security and privacy are less of a concern as it doesn't communicate with external systems.
## SpanExporter
The SpanExporter is called through the SpanProcessor, which passes finished spans to the exporter.
The specification states that an exporter must support two functions: `Export` and `Shutdown`.
### SpanExporter.Export(span of recordables)
Exports a batch of telemetry data. Protocol exporters that will implement this function are typically expected to serialize and transmit the data to the destination.
`Export()` must not block indefinitely. We can rely on printing to an ostream being reasonably performant and non-blocking.
The specification states: Any retry logic that is required by the exporter is the responsibility of the exporter. The default SDK SHOULD NOT implement retry logic, as the required logic is likely to depend heavily on the specific protocol and backend the spans are being sent to.
### SpanExporter.Shutdown()
Shuts down the exporter. Called when SDK is shut down. This is an opportunity for exporter to do any cleanup required.
In the OStreamExporter there is no cleanup to be done, so there is no need to use the timeout within the `Shutdown` function as it will never be blocking.
```cpp
class StreamSpanExporter final : public sdktrace::SpanExporter
{
public:
  // ...
  bool Shutdown() noexcept
  {
isShutdown = true;
return true;
}
};
```
## MetricsExporter
Exports a batch of telemetry data. Protocol exporters that will implement this function are typically expected to serialize and transmit the data to the destination.
<!-- [//]: # ![SDK Data Path](./images/DataPath.png) -->
### MetricsExporter.Export(batch of Records)
`Export()` must not block indefinitely. We can rely on printing to an ostream being reasonably performant and non-blocking.
The specification states: Any retry logic that is required by the exporter is the responsibility of the exporter.
The MetricsExporter is called through the Controller in the SDK data path. The exporter will either be called on a regular interval in the case of a push controller or through manual calls in the case of a pull controller.
### MetricsExporter.Shutdown()
Shutdown() is currently not required for the OStreamMetricsExporter.
```cpp
class StreamMetricsExporter final : public sdkmeter::MetricsExporter
{
  // ...
};
```
* Serialize data to another format (json)
## Contributors
* Hudson Humphries

## Recommendations
### Link OpenTelemetry plugins for portability
If you're a vendor and you wish to distribute an OpenTelemetry plugin (either a
full implementation of the API or an exporter), you need to take precautions
when linking your plugin to ensure it's portable. Here are some steps you should
follow:
* Ensure you compile to target portable architectures (e.g. x86-64).
* Statically link most dependencies. You should statically link both external
  dependencies and the standard C++ library. The exceptions are the standard C
  library and other low-level system libraries that need to be dynamically
  linked.
* Use an export map to avoid unwanted symbol resolution. When statically linking
  dependencies in a dynamic library, care should be taken to make sure that
  symbol resolution for dependencies doesn't conflict with that of the app or
  other dynamic libraries. See this [StackOverflow
  post](https://stackoverflow.com/q/47841812/4447365) for more information.
* Re-map symbols from the standard C library to portable versions. If you want
  your plugin to work on systems with different versions of the standard C
  library, you need to link to portable symbols. See this [StackOverflow
  answer](https://stackoverflow.com/a/20065096/4447365) for how to do this.
## Example Scenarios
For example, a C++ database server might add support for the OpenTelemetry API
and exposes configuration options that let a user point to a vendor's plugin and
load it with a JSON config. (With OpenTracing, Ceph explored a deployment
scenario similar to this. See
this [link](https://www.spinics.net/lists/ceph-devel/msg41007.html))
### Non OpenTelemetry aware application with OpenTelemetry capability library

Or, via Docker: `./ci/run_docker.sh ./ci/do_ci.sh format`
## Editor integrations
For further guidance on editor integration, see these specific pages:
* [Download link for LLVM tools for Windows](https://releases.llvm.org/9.0.0/LLVM-9.0.0-win64.exe)
* [LLVM tools extension for Visual Studio](https://marketplace.visualstudio.com/items?itemName=LLVMExtensions.llvm-toolchain)
* [Visual Studio code extension](https://marketplace.visualstudio.com/items?itemName=xaver.clang-format)
* [CppStyle Eclipse CDT extension](https://marketplace.eclipse.org/content/cppstyle)
## Are robots taking over my freedom to choose where newlines go
No. For the project as a whole, using clang-format is just one optional way to format your code.
While it will produce style-guide conformant code, other formattings would also satisfy the style

In this example, the application in `main.cc` initializes the metrics pipeline and shows 3 different ways of updating instrument values. Here are more detailed explanations of each part.
1: Initialize a MeterProvider. We will use this to obtain Meter objects in the future.
`auto provider = shared_ptr<MeterProvider>(new MeterProvider);`
2: Set the MeterProvider as the default instance for the library. This ensures that we will have access to the same MeterProvider across our application.
`Provider::SetMeterProvider(provider);`
3: Obtain a meter from this meter provider. Every Meter pointer returned by the MeterProvider points to the same Meter. This means that the Meter will be able to combine metrics captured from different functions without having to constantly pass the Meter around the library.
`shared_ptr<Meter> meter = provider->GetMeter("Test");`
4: Initialize an exporter and processor. In this case, we initialize an OStream Exporter which will print to stdout by default. The Processor is an UngroupedProcessor which doesn't filter or group captured metrics in any way. The false parameter indicates that this processor will send metric deltas rather than metric cumulatives.
```
unique_ptr<MetricsExporter> exporter = unique_ptr<MetricsExporter>(new OStreamMetricsExporter);
shared_ptr<MetricsProcessor> processor = shared_ptr<MetricsProcessor>(new UngroupedMetricsProcessor(false));
```
5: Pass the meter, exporter, and processor into the controller. Since this is a push controller, a collection interval parameter (in seconds) is also taken. At each collection interval, the controller will request data from all of the instruments in the code and export them. Start the controller to begin the metrics pipeline.
`metrics_sdk::PushController controller(meter, std::move(exporter), processor, 5);`
`controller.start();`
6: Instrument code with synchronous and asynchronous instrument. These instruments can be placed in areas of interest to collect metrics and are created by the meter. Synchronous instruments are updated whenever the user desires with a value and label set. Calling add on a counter instrument for example will increase its value. Asynchronous instruments can be updated the same way, but are intended to receive updates from a callback function. The callback below observes a value of 1. The user never has to call this function as it is automatically called by the controller.
```
ctr->add(5, labelkv);
```
7: Stop the controller once the program has finished. This ensures that any metrics inside the pipeline are properly exported. Otherwise, some metrics may be destroyed in cleanup.
`controller.stop();`

The application then calls a `foo_library` which has been instrumented using
the [OpenTelemetry API](https://github.com/open-telemetry/opentelemetry-cpp/tree/main/api).
Resulting telemetry is directed to stdout through the StdoutSpanExporter.
See [CONTRIBUTING.md](../../CONTRIBUTING.md) for instructions on building and running the example.

# zPages
## Overview
zPages are a quick and light way to view tracing and metrics information on standard OpenTelemetry C++ instrumented applications. It requires no external dependencies or backend setup. See more information in the OTel zPages experimental [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/5b86d4b6c42e6d1e47d9155ac1e2e27f0f0b7769/experimental/trace/zpages.md). OTel C++ currently only offers Tracez; future zPages that could potentially be added include TraceConfigz, RPCz, and Statsz. Events and links need to be added to Tracez.
## Usage
> TODO: Add CMake instructions
1: Add the following 2 lines of code

* `#include "opentelemetry/ext/zpages/zpages.h" // include zPages`
* `zpages::Initialize; // start up zPages in your app, before any tracing/span code`

2: Build and run your application normally

For example, you can do this for the zPages example while at the root `opentelemetry-cpp` directory with:

```sh
bazel build //examples/zpages:zpages_example
bazel-bin/examples/zpages/zpages_example
```

If you look at the [zPages example's source code](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/examples/zpages/zpages_example.cc), it demonstrates adding zPages, manual application instrumentation (which sends data to zPages for viewing), and simulated use cases for zPages.

3: View zPages at `http://localhost:3000/tracez`
## More Information
* OTel zPages experimental [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/5b86d4b6c42e6d1e47d9155ac1e2e27f0f0b7769/experimental/trace/zpages.md)
* [zPages General Direction Spec (OTEP)](https://github.com/open-telemetry/oteps/blob/main/text/0110-z-pages.md)
* OTel C++ Design Docs
* [Tracez Span Processor](https://docs.google.com/document/d/1kO4iZARYyr-EGBlY2VNM3ELU3iw6ZrC58Omup_YT-fU/edit#)
* [Tracez Data Aggregator](https://docs.google.com/document/d/1ziKFgvhXFfRXZjOlAHQRR-TzcNcTXzg1p2I9oPCEIoU/edit?ts=5ef0d177#heading=h.5irk4csrpu0y)
* [Tracez Http Server](https://docs.google.com/document/d/1U1V8QZ5LtGl4Mich-aJ6KZGLHrMIE8pWyspmzvnIefI/edit#) - includes reference pictures of the zPages/Tracez UI


@@ -5,28 +5,38 @@ It is implemented according to [this instructions](https://github.com/w3c/trace-
## Usage
1: Build and start the test service endpoint:
```sh
./w3c_tracecontext_test
Listening to http://localhost:30000/test
```
A custom port number for the test service to listen to can be specified:
```sh
./w3c_tracecontext_test 31339
Listening to http://localhost:31339/test
```
The test service will print the full URI that the validation service can connect to.
2: In a different terminal, set up and start the validation service according to the [instructions](https://github.com/w3c/trace-context/tree/master/test#run-test-cases), giving the address of the test service endpoint as an argument:
```sh
python test.py http://localhost:31339/test
```
One can also use the `Dockerfile` provided in this folder to conveniently
run the validation service:
```sh
docker build --tag w3c_driver .
docker run --network host w3c_driver http://localhost:31339/test
```
3: The validation service will run the test suite and print detailed test results.
4: Stop the test service by pressing enter.


@@ -5,18 +5,23 @@ All notable changes to the sdk project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Guideline to update the version
Increment the:
* MAJOR version when you make incompatible API/ABI changes,
* MINOR version when you add functionality in a backwards compatible manner, and
* PATCH version when you make backwards compatible bug fixes.
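For example, starting from `0.0.1`: a backwards compatible bug fix gives `0.0.2`, new backwards compatible functionality gives `0.1.0`, and an incompatible API/ABI change gives `1.0.0`. The rule can be sketched as a small helper (illustrative only; the `bump` function is hypothetical and not part of this project):

```cpp
#include <cstdio>
#include <string>

// Illustrative sketch: maps the guideline above onto a concrete version bump.
std::string bump(const std::string &version, const std::string &change)
{
  int major = 0, minor = 0, patch = 0;
  std::sscanf(version.c_str(), "%d.%d.%d", &major, &minor, &patch);
  if (change == "major")  // incompatible API/ABI change
    return std::to_string(major + 1) + ".0.0";
  if (change == "minor")  // new backwards compatible functionality
    return std::to_string(major) + "." + std::to_string(minor + 1) + ".0";
  // backwards compatible bug fix
  return std::to_string(major) + "." + std::to_string(minor) + "." +
         std::to_string(patch + 1);
}
```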
## [Unreleased]
## [0.1.0] 2020-12-17
### Added
* Trace SDK experimental
* OTLP Exporter
### Changed
### Removed

2
third_party/boost/LICENSE.md vendored

@@ -1,4 +1,4 @@
# Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by