react-native-macos/ReactCommon/microprofiler/MicroProfiler.h


/*
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
#pragma once
#include <atomic>
#include <cstdint>
#include <stdexcept>
#include <thread>
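
// Profiling is compiled out by default: the macros below expand to nothing
// unless WITH_MICRO_PROFILER is defined (e.g. by uncommenting the line below
// or defining it in the build), in which case they create RAII
// MicroProfilerSection objects that time the enclosing scope.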
// #define WITH_MICRO_PROFILER 1
#ifdef WITH_MICRO_PROFILER
#define MICRO_PROFILER_SECTION(name) MicroProfilerSection __b(name)
#define MICRO_PROFILER_SECTION_NAMED(var_name, name) \
MicroProfilerSection var_name(name)
#else
#define MICRO_PROFILER_SECTION(name)
#define MICRO_PROFILER_SECTION_NAMED(var_name, name)
#endif
namespace facebook {
namespace react {
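
// Identifiers for profiled sections. __LENGTH__ is a count sentinel and has
// no printable name; new section names would presumably be added before it,
// with a matching case in MicroProfiler::profilingNameToString() below.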
enum MicroProfilerName {
__INTERNAL_BENCHMARK_INNER,
__INTERNAL_BENCHMARK_OUTER,
__LENGTH__,
};
/**
* MicroProfiler is a performance profiler for measuring the cumulative impact
* of a large number of small-ish calls. This is normally a problem for standard
* profilers like Systrace because the overhead of the profiler itself skews the
* timings you are able to collect. This is especially a problem when doing
* nested calls to profiled functions, as the parent calls will contain the
* overhead of their profiling plus the overhead of all their children's
* profiling.
*
* MicroProfiler attempts to be low overhead by 1) aggregating timings in memory
* and 2) trying to remove estimated profiling overhead from the returned
* timings.
*
* To remove estimated overhead, at the beginning of each trace we calculate the
* average cost of profiling a no-op code section, as well as the average cost
* of invoking the system clock. The former is subtracted out for
* each child profiler section that is invoked within a parent profiler section.
* The latter is subtracted from each section, child or not.
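*
* (An illustrative sketch of that correction, not taken from the
* implementation: if t_raw is the raw measured time of a section, t_clock the
* average cost of a clock call, t_noop the average cost of a no-op profiled
* section, and n the number of child sections executed inside it, the
* reported time is roughly t_raw - t_clock - n * t_noop.)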
*
* After MicroProfiler::stopProfiling() is called, a table of tracing data is
* emitted to glog (which shows up in logcat on Android).
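*
* A minimal usage sketch, assuming WITH_MICRO_PROFILER is defined and
* MY_SECTION is a hypothetical value added to MicroProfilerName (and to
* profilingNameToString()):
*
*   void doExpensiveWork() {
*     MICRO_PROFILER_SECTION(MY_SECTION);
*     // ... code to be timed; the RAII section records its duration when
*     // this scope exits
*   }
*
*   MicroProfiler::startProfiling();
*   doExpensiveWork();
*   MicroProfiler::stopProfiling(); // emits the aggregated table to glog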
*/
struct MicroProfiler {
static const char *profilingNameToString(MicroProfilerName name) {
switch (name) {
case __INTERNAL_BENCHMARK_INNER:
return "__INTERNAL_BENCHMARK_INNER";
case __INTERNAL_BENCHMARK_OUTER:
return "__INTERNAL_BENCHMARK_OUTER";
case __LENGTH__:
throw std::runtime_error("__LENGTH__ has no name");
default:
throw std::runtime_error(
"Trying to convert unknown MicroProfilerName to string");
}
}
static void startProfiling();
static void stopProfiling();
static bool isProfiling();
static void runInternalBenchmark();
};
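
/**
* RAII guard created by the MICRO_PROFILER_SECTION macros. The constructor
* records the section name, the start time, and the number of profiler
* sections entered so far (presumably used to estimate child-section
* overhead); the destructor attributes the elapsed time to the section when
* profiling is active.
*/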
class MicroProfilerSection {
public:
MicroProfilerSection(MicroProfilerName name);
~MicroProfilerSection();
private:
bool isProfiling_;
MicroProfilerName name_;
uint_fast64_t startTime_;
uint_fast32_t startNumProfileSections_;
};
} // namespace react
} // namespace facebook